Researchers at Carnegie Mellon University have enabled robots to learn household chores by observing home videos depicting people engaging in everyday tasks.
Current robot training methods rely on human demonstrations or simulated environments, which are time-consuming and prone to failure. An earlier approach showed that robots could learn by watching humans perform tasks, but that method, known as In-the-Wild Human Imitating Robot Learning (WHIRL), required a human to complete the task in the same environment as the robot.
The new work makes home robots more useful, opening the door to assistance with cooking, cleaning, and other chores. Using it, two robots mastered 12 tasks, including opening drawers, oven doors, and lids; removing pots from stoves; and picking up telephones, vegetables, and cans of soup.
The latest model removes both the requirement for human demonstrations and the need for the robot to operate in an identical environment. As with WHIRL, the robot still needs practice to excel at a task, but the team's research showed it can acquire a new task in as little as 25 minutes. The model also lets robots explore their surroundings on their own. To teach the robot how to interact with objects, the team applied the concept of affordances. Borrowed from psychology, an affordance refers to the opportunities an environment offers an individual. The notion has since been extended to design and human-computer interaction, where it denotes the actions a person perceives as possible.
In the context of the Vision-Robotics Bridge (VRB) model, affordances define where and how a robot can interact with an object, based on observed human behaviour. For instance, when watching a human open a drawer, the robot identifies the contact points, such as the handle, and the direction of the drawer's movement, typically straight out from its starting position. By analysing many videos of humans opening drawers, the robot can learn to open any drawer. The team drew on large video datasets such as Ego4D and Epic Kitchens for this research.
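The idea of extracting a contact point and a post-contact direction from many videos can be illustrated with a minimal sketch. The data structure and function names below are assumptions chosen for exposition, not the actual VRB implementation: each observed video contributes one grasp location and one motion direction, and averaging across videos yields an affordance that generalizes beyond any single drawer.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Affordance:
    # Illustrative fields, not the VRB paper's representation:
    contact_point: Tuple[float, float]  # image (x, y) where the hand meets the object
    direction: Tuple[float, float]      # unit vector of post-contact motion

def aggregate(observations: List[Affordance]) -> Affordance:
    """Average contact points and motion directions across many
    human videos, then renormalize the direction vector."""
    n = len(observations)
    cx = sum(a.contact_point[0] for a in observations) / n
    cy = sum(a.contact_point[1] for a in observations) / n
    dx = sum(a.direction[0] for a in observations) / n
    dy = sum(a.direction[1] for a in observations) / n
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard against a zero vector
    return Affordance((cx, cy), (dx / norm, dy / norm))

# Hypothetical observations from three videos of humans opening drawers:
videos = [
    Affordance((120.0, 80.0), (0.0, 1.0)),
    Affordance((118.0, 82.0), (0.1, 0.99)),
    Affordance((122.0, 79.0), (-0.1, 0.99)),
]
grasp = aggregate(videos)  # averaged handle location and pull direction
```

Averaging is only a stand-in for the learned visual model the researchers actually train, but it captures the core intuition: many noisy human examples collapse into one reusable interaction rule.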
The researchers believe this work could ultimately allow robots to learn from the vast array of Internet and YouTube videos available to them.