One Small Step for AI, One Giant Step for Robotics

Janani Gopalakrishnan Vikram is a technically-qualified freelance writer, editor and hands-on mom based in Chennai


There have been countless developments in robotics and AI in recent times, some significant, some less so. From form factor and flexibility to motion, sensing and interaction, every aspect of robotics has brought robots closer to humans. Robots now assist in hospitals, schools, industries, on war fronts, in rescue operations, in homes and almost everywhere else. We must acknowledge that this has come about not merely through mechanical advances, but mainly through the increasing intelligence, or so-called smartness, of robots.

Smartness is a subjective thing. But in the context of robots, we can say that smartness is a robot's ability to autonomously or semi-autonomously perceive and understand its environment, learn to do things and respond to situations, and interact safely with humans. In other words, it should be able to think, and even decide, to a certain extent, much as we do.

Let us take you through some assorted developments from around the world that are empowering robots with these capabilities.

AI overview

Understanding by asking questions

When somebody asks us to fetch something, and we do not really understand which object to fetch or where it is, what do we do? We usually ask questions to zero in on the right object. This is exactly what researchers at Brown University, USA, want their robots to be able to do.

Stefanie Tellex of the Humans to Robots Lab at Brown University is using a social approach to improve the accuracy with which robots follow human instructions. The system, called FETCH-POMDP, enables robots to model their own confusion and resolve it by asking relevant questions.

The system can understand gestures, associate them with what the person is saying, and use both together to interpret instructions. Only when that fails does it start asking questions. For example, if you gesture towards the sink and ask the robot to fetch a bowl, and there is only one bowl in the sink, it will fetch it without asking anything. But if it finds more than one bowl there, it might ask about the size or colour of the bowl you want. When testing the system, the researchers expected the robot to respond faster when it had no questions to ask, but the intelligent questioning approach turned out to be both faster and more accurate.
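The ask-only-when-confused behaviour described above can be sketched as a toy decision rule. This is purely illustrative: the function name, the colour/size attributes and the question wording are assumptions for the sketch, not anything taken from FETCH-POMDP itself, which models the problem probabilistically rather than with simple rules.

```python
def fetch(candidates):
    """Toy sketch of the 'ask only when confused' policy.

    `candidates` is the list of objects that match the user's words
    and gesture. If the match is unique, fetch it; otherwise ask a
    question about an attribute that distinguishes the candidates.
    """
    if len(candidates) == 1:
        return candidates[0]  # unambiguous: just fetch it, no question

    # Ambiguous: pick an attribute whose values differ among candidates
    colours = {c["colour"] for c in candidates}
    if len(colours) > 1:
        return "Which colour of bowl do you mean: " + " or ".join(sorted(colours)) + "?"
    sizes = {c["size"] for c in candidates}
    return "Which size do you mean: " + " or ".join(sorted(sizes)) + "?"
```

With one bowl in the sink the function simply returns it; with a red bowl and a blue bowl it comes back with "Which colour of bowl do you mean: blue or red?".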

The trials also showed the system to be more intelligent than it was expected to be, because it could even understand complex instructions with lots of prepositions. For example, it could respond accurately when somebody said, “Hand me the spoon to the left of the bowl.” Although such complex phrases were not built into the language model, the robot was able to use intelligent social feedback to figure out the instruction.

Robot asks questions to clarify confusing instructions (Image courtesy: Brown University)

Learning gets deeper and smaller than you thought

Deep learning is an artificial intelligence (AI) technology that is pervading all streams of life, from banking to baking. A deep learning system uses neural networks, modelled after the human brain, to learn by itself, much as a human child does. It is made of multi-layered deep neural networks that mimic the activity of the layers of neurons in the neocortex. Each layer tries to understand something more than the previous one, thereby developing a progressively deeper understanding of things. The resulting system is self-learning, which means that it is not restricted to what it has been taught to do. It can react according to the situation and even make decisions by itself.
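The layering idea above can be sketched in a few lines using only the standard library. The layer sizes, random weights and tanh non-linearity here are arbitrary choices for illustration, not taken from any particular system; real networks are trained rather than randomly initialised and left as-is.

```python
import math
import random

def layer(inputs, weights):
    """One fully-connected layer: weighted sums passed through a
    non-linearity (tanh). Each layer re-represents the output of the
    layer below it, which is the 'deeper understanding' idea."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

random.seed(0)
# Three stacked layers: 4 inputs -> 3 units -> 3 units -> 2 outputs.
net = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)],
       [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)],
       [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]]

activation = [0.5, -0.2, 0.1, 0.9]  # e.g. raw sensor readings
for weights in net:
    activation = layer(activation, weights)  # each pass goes one layer deeper
print(len(activation))  # 2 output values
```

Stacking more such layers is what puts the "deep" in deep learning; training then adjusts the weights so that the final layer's outputs become useful, for example as class probabilities.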

Deep learning is obviously a very useful tech for robots, too. However, it usually requires large memory banks and runs on huge servers powered by advanced graphics processing units (GPUs). If only deep learning could be achieved in a form factor small enough to embed in a robot!

Micromotes developed at University of Michigan, USA, could be the answer to this challenge. Measuring one cubic millimetre, the micromotes developed by David Blaauw and his colleague Dennis Sylvester are amongst the world’s smallest computers. The duo has developed different variants of micromotes, including smart sensors and radios. Amongst these is a micromote that incorporates a deep learning processor, which can operate a neural network using just 288 microwatts.

There have been earlier attempts to reduce the size and power demands of deep learning using dedicated hardware specially designed to run these algorithms. But until now, nobody had managed to bring power consumption below about 50 milliwatts, nor to achieve anything close to this size. Blaauw and team achieved deep learning on a micromote by redesigning the chip architecture, with tweaks such as placing four processing elements inside the memory (SRAM) to minimise data movement.
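Putting the two power figures quoted above side by side shows the scale of the improvement. This is back-of-envelope arithmetic only, comparing the roughly 50-milliwatt previous low with the 288-microwatt micromote figure:

```python
prior_best_w = 50e-3    # ~50 milliwatts: previous low for deep-learning hardware
micromote_w  = 288e-6   # 288 microwatts: the Michigan micromote figure
print(round(prior_best_w / micromote_w))  # roughly a 174x reduction in power
```

A reduction of two orders of magnitude is what makes the difference between needing a mains-powered board and running on a tiny battery inside a millimetre-scale device.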

Tiny micromotes developed at University of Michigan can incorporate deep learning processors in them (Image courtesy: University of Michigan)

The team’s intention was to bring deep learning to the Internet of Things (IoT), so that we can have devices such as security cameras with onboard deep learning processors that can instantly distinguish between a swaying branch and a thief lurking in a tree. But the same technology could be very useful for robots, too.


