Friday, March 31, 2023

One Small Step for AI, One Giant Step for Robotics

Janani Gopalakrishnan Vikram is a technically-qualified freelance writer, editor and hands-on mom based in Chennai

According to a news report, “The system KTH researchers use detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. This autonomous learning process enables Rosie to distinguish dynamic elements from static ones and perceive depth and distance.” This helps the robot understand where things are and negotiate physical spaces.
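The report describes this pipeline only in outline, so the following is a toy sketch of the idea rather than KTH's actual code; every function name, threshold and geometry below is invented for illustration. It subtracts a known static map from the current observation to isolate dynamic points, then plans a ring of viewpoints around what remains so the robot can gather additional views for learning.

```python
import numpy as np

np.random.seed(0)

def extract_dynamic_points(static_map, observation, threshold=0.05):
    """Treat observed points farther than `threshold` from every point
    in the static map as belonging to a dynamic element."""
    dists = np.linalg.norm(
        observation[:, None, :] - static_map[None, :, :], axis=-1
    )
    return observation[dists.min(axis=1) > threshold]

def plan_views(dynamic_points, radius=1.0, n_views=8):
    """Place `n_views` camera poses on a circle around the centroid of
    a dynamic element, each looking inward, to gather extra views."""
    centre = dynamic_points.mean(axis=0)
    angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    positions = centre + radius * np.stack(
        [np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1
    )
    return [(pos, centre - pos) for pos in positions]  # (position, gaze direction)

# Toy usage: a flat "static" floor plus one new object standing on it.
static_map = np.random.rand(500, 3) * [5.0, 5.0, 0.0]
new_object = np.array([2.0, 2.0, 0.5]) + 0.1 * np.random.rand(40, 3)
observation = np.vstack([static_map, new_object])

dynamic = extract_dynamic_points(static_map, observation)
for position, gaze in plan_views(dynamic):
    print("move camera to", np.round(position, 2), "look along", np.round(gaze, 2))
```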

Just a thought can bring the robot back on track

It is one thing for robots to learn to work autonomously; it is quite another for them to work alongside humans, and some consider the latter more difficult. To co-exist, robots must be able to move around safely among humans (as in the case of KTH's Rosie) and also understand what humans want, even when the instruction or plan is never explicitly spelled out for them.

Explaining things in natural language is never foolproof, because each person communicates differently. If robots could instead understand what we think, much of that ambiguity would disappear. As a step in this direction, the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University are creating a feedback system that lets you correct a robot's mistakes instantly, simply by thinking about it.

The experiment involves a humanoid robot called Baxter performing an object-sorting task while a human watches. The observer wears special headgear: an electroencephalography (EEG) monitor that records his or her brain activity. A novel machine learning algorithm classifies the recorded brain waves within 10 to 30 milliseconds. When the robot indicates its choice, the system detects whether the human agrees with that choice or has noticed an error. The observer does not have to gesture, nod or even blink; he or she simply needs to agree or disagree mentally with the robot's action. This is much more natural than earlier methods of controlling robots with thoughts.

The team, led by CSAIL director Daniela Rus, has achieved this by focusing the system on brain signals called error-related potentials (ErrPs), which our brains generate whenever they notice a mistake. When the robot indicates the choice it is about to make, the system uses ErrPs to determine whether the human supervisor agrees with the decision.

According to the news report, “ErrP signals are extremely faint, which means that the system has to be fine-tuned enough to both classify the signal and incorporate it into the feedback loop for the human operator.”
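The report does not spell out the team's actual pipeline, but a minimal sketch shows roughly what such a classifier can look like, assuming the common approach of training a linear discriminant on short EEG epochs time-locked to the robot's choice. The data below are synthetic, and every dimension, name and value is invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Illustrative stand-in for real EEG recordings: 48 channels, 200 ms epochs
# sampled at 500 Hz, time-locked to the moment the robot signals its choice.
rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 400, 48, 100

X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)   # 1 = ErrP present (robot chose wrongly)
X[y == 1, :, 40:70] += 0.8              # fake error-related deflection

# A simple, widely used ErrP approach: flatten each epoch into a feature
# vector and train a linear discriminant classifier on labelled examples.
features = X.reshape(n_epochs, -1)
X_train, X_test, y_train, y_test = train_test_split(features, y, random_state=0)

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```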

The team has also addressed the possibility of the system missing the human's original correction, which could lead to secondary errors. In such a case, if the robot is unsure about its decision, it can prompt the human for another response to get a more accurate answer. Further, since ErrP signals appear to be proportional to how bad the mistake is, future systems could be extended to more complex, multiple-choice tasks.
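Again purely as a hypothetical sketch (reusing the `clf` classifier from the previous snippet, with made-up thresholds), the secondary-error handling could look like a confidence-gated decision rule:

```python
# Hypothetical decision rule: when the ErrP reading is ambiguous, the robot
# deliberately prompts the human again instead of risking a secondary error.
def supervise(epoch, clf, low=0.3, high=0.7):
    p_error = clf.predict_proba(epoch.reshape(1, -1))[0, 1]
    if p_error >= high:
        return "undo"        # the human clearly flagged a mistake
    if p_error <= low:
        return "proceed"     # the human clearly agreed
    return "ask_again"       # ambiguous signal: trigger another human response
```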

This project, partly funded by Boeing and the National Science Foundation, could also help physically challenged people work with robots.

Calling robots electronic persons: is it a slip or the scary truth?

Astro Teller, head of X (formerly Google X), Alphabet's advanced technology lab, explained in a recent IEEE interview that washing machines, dishwashers, drones, smart cars and the like are robots, even if they are not jazzy-looking bipeds. They are intelligent, they help us do something, and they save us time. Looked at that way, smart robots are already all around us.

It is easy to build your own robot and make it smart, using simple components and open source tools. It may not look like Rosie or Baxter, but you can certainly create a quick and easy AI agent. OpenAI Universe, for example, lets you train an AI agent to use a computer the way a human does: the agent looks at screen pixels and operates a virtual keyboard and mouse, so it can be trained on any task you can accomplish with a computer.
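Getting started is genuinely simple. The snippet below is essentially Universe's own published starter example (driving the DuskDrive Flash game); it assumes the `universe` package and Docker are installed locally, and the hard-coded action stands in for a real learning agent.

```python
import gym
import universe  # registers the Universe environments with Gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # spins up one local Docker-backed remote environment
observation_n = env.reset()

while True:
    # A trivial hard-coded "agent": hold the up-arrow key in every frame.
    # A real agent would choose actions from the screen pixels instead.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```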

Sadly, the garbage-in, garbage-out principle holds for robotics and AI, too. Train a system to do something good and it will; train it to do something bad and it will, no questions asked. Anticipating such misuse, the industry is coming together to regulate the space and implement best practices. One example is the Partnership on Artificial Intelligence to Benefit People and Society, whose members include Google's DeepMind division, Amazon, Facebook, IBM and Microsoft. Its website speaks of best practices, open engagement, ethics, trustworthiness, reliability, robustness and other relevant issues.
