One Small Step for AI, One Giant Step for Robotics

Janani Gopalakrishnan Vikram is a technically-qualified freelance writer, editor and hands-on mom based in Chennai


The wearable robotic suit includes head and neck motion sensors, devices to capture arm movements and haptic gloves. The operator can control the robot’s movement using a foot pedal, and see what the robot is seeing using a virtual reality headset. The suit might also contain sensors and devices to capture brain waves.

The robot is described as a humanoid of 1.2-metre height, possibly covered with synthetic skin, with two (or more) arms ending in hands or grippers and wheeled treads for locomotion. It has cameras on its head and other sensors like infrared and ultraviolet imaging, GPS, touch, proximity, strain sensors and radiation detectors to stream data to its operator.

Something that catches everybody’s attention here is a line that says, “An operator may include a non-human animal such as a monkey… and the operator interface may be… re-sized to account for the differences between a human operator and a monkey operator.”


But what is so smart about an operator controlling a robot, even if the operator is a monkey? The interesting part of this technology is that the robots will also, eventually, be able to learn from their operators and carry out tasks autonomously. According to the patent application, device-control instructions and environment sensor information generated over multiple runs may be used to derive autonomous control information, which can then drive autonomous behaviour in the device.
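The patent describes the goal, not a specific algorithm. One minimal way to picture "deriving autonomous control from logged runs" is behaviour cloning: record (sensor, action) pairs while the operator drives the robot, then replay the action whose recorded sensor context most closely matches the current one. The class and function names below are illustrative, not from Kindred's filing.

```python
import numpy as np

class TeleopLog:
    """Accumulates (sensor, action) pairs recorded during operator-controlled runs."""
    def __init__(self):
        self.sensors, self.actions = [], []

    def record(self, sensor_reading, operator_action):
        self.sensors.append(np.asarray(sensor_reading, dtype=float))
        self.actions.append(np.asarray(operator_action, dtype=float))

class NearestNeighbourPolicy:
    """Derives autonomous control by replaying the operator action whose
    recorded sensor context is closest to the current sensor reading."""
    def __init__(self, log):
        self.X = np.stack(log.sensors)
        self.Y = np.stack(log.actions)

    def act(self, sensor_reading):
        query = np.asarray(sensor_reading, dtype=float)
        distances = np.linalg.norm(self.X - query, axis=1)
        return self.Y[np.argmin(distances)]
```

In practice a learned model generalises far better than nearest-neighbour lookup, but the data flow — many teleoperated runs in, an autonomous policy out — is the same.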

Kindred hopes to achieve this using deep hierarchical learning algorithms such as a conditional deep belief network or a conditional restricted Boltzmann machine, a powerful type of neural network suited to modelling temporal sequences.
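For a sense of what a conditional restricted Boltzmann machine looks like, here is a minimal sketch: a standard RBM whose hidden and visible biases are shifted by a conditioning vector (for example, the previous frames of motion data), trained with one step of contrastive divergence. This is a generic textbook-style implementation, not Kindred's code; all parameter names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ConditionalRBM:
    """Minimal conditional RBM: the conditioning vector (e.g. past frames)
    adds dynamic offsets to both visible and hidden biases, letting the
    model capture temporal structure in the operator's motions."""
    def __init__(self, n_visible, n_hidden, n_cond, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.A = 0.01 * self.rng.standard_normal((n_cond, n_visible))  # cond -> visible
        self.B = 0.01 * self.rng.standard_normal((n_cond, n_hidden))   # cond -> hidden
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)

    def hidden_probs(self, v, cond):
        return sigmoid(v @ self.W + cond @ self.B + self.b_h)

    def visible_probs(self, h, cond):
        return sigmoid(h @ self.W.T + cond @ self.A + self.b_v)

    def cd1(self, v0, cond, lr=0.05):
        """One step of contrastive divergence (CD-1) on a batch."""
        h0 = self.hidden_probs(v0, cond)
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample, cond)
        h1 = self.hidden_probs(v1, cond)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
```

Stacking such layers, with each layer conditioned on the one below, yields the deep hierarchical models the patent application mentions.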

This is what possibly links Kindred to D-Wave. The operation of D-Wave’s quantum computing system has been described as analogous to a restricted Boltzmann machine, and the company’s research team is working to exploit the parallels between these architectures to substantially accelerate learning in deep, hierarchical neural networks.

In 2010, Rose also published a paper that shows how a quantum computer can be very effective at machine learning. So if Kindred succeeds in putting two and two together, we can look forward to a new wave of quantum computing in robotics.

Robots get more social

It is well known that technology can help disabled and vulnerable people lead more comfortable lives. It can help them carry out everyday tasks independently, without relying on another person, which improves their self-esteem.

However, Maja Matarić of the University of Southern California, USA, believes that the technology can be more assistive if it is embodied in the form of a robot rather than a tool running on a mobile device or an invisible technology embedded somewhere in the walls or beds.

Matarić’s research has shown that the presence of human-like robots is more effective in getting people to do things, be it getting senior citizens to exercise or encouraging autistic children to interact with their peers. “The social component is the only thing that reliably makes people change behaviour. It makes people lose weight, recover faster and so on. It is possible that screens are actually making us less social. So that is where robotics can make a difference—this fundamental embodiment,” Matarić said while addressing a gathering at the American Association for the Advancement of Science.

Matarić is building such robots through her startup Embodied Inc. Research and trials have shown promising results. One study found that autistic children showed more autonomous behaviour after copying the motions of socially-assistive robots.

In another study, patients recovering from stroke responded more quickly to upper-extremity exercises when prompted and motivated by socially-assistive robots.

Using MIT’s new system, supervisors can correct a robot’s mistakes by simply thinking about it (Image courtesy: MIT)

Robots can mingle in crowded places, too

Movement of robots was once considered a mechanical challenge. Now, scientists have realised it has more to do with intelligence. For a robot to move comfortably in a crowded place like a school or an office, it needs to first learn things that we take for granted: what objects populate the space, which of them are stationary and which move, and that some move only occasionally while others move frequently and suddenly. In short, it needs to autonomously learn its way around a dynamic environment. This is what a team at KTH Royal Institute of Technology in Stockholm hopes to achieve.

Rosie (yes, we know it sounds familiar) is a robot in their lab that has already learnt to perceive 3D environments, move about and interact safely within them. Rosie repeatedly visits the rooms at the university’s Robotics, Perception and Learning Lab and maps them in detail. It uses a depth camera (RGB-D) to capture points in physical space and store them in a database, from which 3D models of the rooms can be generated.
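The pipeline described — repeated visits, depth points dumped into a database, a 3D model built from them — can be sketched with a simple voxel store. This is an illustration of the idea, not KTH's actual software; the class name, voxel size and "seen in every visit" rule are assumptions made here for clarity.

```python
import numpy as np

class RoomMap:
    """Accumulates depth-camera points into a voxel 'database' over repeated
    visits. Voxels observed in every visit are treated as the static part of
    the room; voxels seen only sometimes likely belong to moving objects."""
    def __init__(self, voxel_size=0.05):  # 5 cm voxels
        self.voxel_size = voxel_size
        self.counts = {}   # voxel index -> number of visits it appeared in
        self.visits = 0

    def add_scan(self, points):
        """points: (N, 3) array of x, y, z samples from one visit."""
        self.visits += 1
        idx = np.floor(np.asarray(points, dtype=float) / self.voxel_size).astype(int)
        for voxel in {tuple(i) for i in idx}:          # count each voxel once per visit
            self.counts[voxel] = self.counts.get(voxel, 0) + 1

    def static_voxels(self):
        """Voxels present in every visit — candidates for the permanent 3D model."""
        return [v for v, c in self.counts.items() if c == self.visits]
```

Comparing which voxels persist across visits is what lets a robot separate walls and furniture from people and chairs that come and go.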


