Researchers at the University of California San Diego have developed a novel algorithm that enables four-legged robots to walk and run on challenging terrain, allowing them to perform search and rescue missions or gather information in places that are too dangerous or difficult for humans.
Existing approaches to training legged robots rely on either proprioception or vision to walk and navigate, but the two are difficult to apply together. A team led by the University of California San Diego has now developed a system that gives a legged robot greater versatility by combining the robot's sense of sight with another sensing modality called proprioception: the robot's sense of movement, direction, speed, location, and touch, in this case the feel of the ground beneath its feet. This system of algorithms enables four-legged robots to walk and run on challenging terrain while avoiding both static and moving obstacles.
“In one case, it’s like training a blind robot to walk by just touching and feeling the ground. And in the other, the robot plans its leg movements based on sight alone. It is not learning two things at the same time,” said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. “In our work, we combine proprioception with computer vision to enable a legged robot to move around efficiently and smoothly—while avoiding obstacles—in a variety of challenging environments, not just well-defined ones.”
The system developed by the team applies a special set of algorithms to integrate data from real-time images taken by a depth camera on the robot's head with data from sensors on the robot's legs. This was a difficult task. "The problem is that during real-world operation, there is sometimes a slight delay in receiving images from the camera," explained Wang, "so the data from the two different sensing modalities do not always arrive at the same time." The team's solution was to simulate this mismatch by randomizing the two sets of inputs, a technique the researchers call multi-modal delay randomization. The fused and randomized inputs were then used to train a reinforcement learning policy end to end. This approach enabled the robot to make decisions quickly during navigation and to anticipate changes in its environment ahead of time, so it could move and dodge obstacles faster on different types of terrain without the help of a human operator.
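The idea described above can be sketched in code. The following is a minimal illustrative Python sketch, not the authors' implementation: it buffers past depth frames and feeds the policy a randomly delayed frame alongside the current proprioceptive reading, so a policy trained on these fused observations learns to tolerate real-world camera latency. All names, buffer sizes, and observation dimensions here are assumptions for illustration.

```python
import random
from collections import deque

import numpy as np


class DelayRandomizedFusion:
    """Illustrative sketch of multi-modal delay randomization.

    The depth-image stream reaches the policy with a randomly sampled
    lag, while proprioception is assumed to arrive with negligible
    delay. Dimensions and parameter names are hypothetical.
    """

    def __init__(self, max_delay_steps=3, buffer_len=8):
        self.max_delay = max_delay_steps
        # Ring buffer of recent depth frames to sample delayed inputs from.
        self.vision_buffer = deque(maxlen=buffer_len)

    def observe(self, depth_frame, proprio_state):
        # Store the newest depth frame.
        self.vision_buffer.append(depth_frame)
        # Sample a random delay for the visual modality only, bounded by
        # how many frames have actually been seen so far.
        delay = random.randint(0, min(self.max_delay, len(self.vision_buffer) - 1))
        delayed_frame = self.vision_buffer[-1 - delay]
        # Fuse by concatenation; an RL policy would consume this vector.
        return np.concatenate([delayed_frame.ravel(), proprio_state])


# Usage: 16x16 depth frames fused with a 30-dim proprioceptive state.
fusion = DelayRandomizedFusion()
obs = None
for t in range(5):
    depth = np.random.rand(16, 16).astype(np.float32)
    proprio = np.random.rand(30).astype(np.float32)
    obs = fusion.observe(depth, proprio)
print(obs.shape)  # (286,) = 16*16 depth values + 30 proprioceptive values
```

Because the delay is resampled at every step during training, the policy cannot rely on the two modalities being synchronized, which is the mismatch the randomization is meant to cover.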
During experiments, the system enabled a robot to move autonomously and swiftly across sandy surfaces, gravel, grass, and bumpy dirt hills covered with branches and fallen leaves, without bumping into poles, trees, shrubs, boulders, benches, or people. The robot also navigated a busy office space without colliding with boxes, desks, or chairs.
Wang and his team are aiming to make legged robots more versatile so that they can conquer even more challenging terrain. "Right now, we can train a robot to do simple motions like walking, running, and avoiding obstacles. Our next goals are to enable a robot to walk up and down stairs, walk on stones, change directions and jump over obstacles," said Wang.
The published research paper and accompanying video are available online.