Researchers at MIT have created a simulation tool to assist robots in learning complex fluid tasks like latte art and air manipulation.
Picture a windy picnic by a river. A paper napkin blows onto the water and starts to drift away, so you use a stick to make waves that guide it back to shore. The water acts as a force-transmitting medium, letting you manipulate the napkin without touching it. Humans interact with fluids effortlessly, but robots struggle with such tasks. Pouring a latte is feasible for a robot; drawing latte art demands far more nuanced skill.
Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed FluidLab, a simulation tool that helps robots learn intricate fluid tasks, from latte art to air manipulation. Its versatile virtual environment covers a diverse set of challenges involving solids, liquids, and multiple interacting fluids. The researchers used FluidLab to test robot learning algorithms on these fluid-system challenges and, with carefully designed optimization methods, successfully transferred behaviors learned in simulation to real-world scenarios.
Robotic manipulation research has primarily emphasized rigid objects, neglecting complex fluid tasks because of their safety risks and costs. Yet fluid manipulation involves intricate interactions with solids, as seen in swirling ice cream, mixing liquids, and moving objects with water. FluidLab's simulator supports this coupling between materials, which demands precise modeling of their interactions. The system is built on Taichi, a Python-embedded language, to compute gradients through the simulation and optimize robot movements based on how materials interact. This enables faster and more efficient solutions, distinguishing it from other simulators. The team grouped FluidLab's ten benchmark tasks into two categories: using fluids to manipulate objects, and manipulating fluids directly. Examples include separating liquids, guiding floating objects, transporting items with water jets, mixing liquids, creating latte art, shaping ice cream, and controlling air circulation.
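The gradient-based idea described above can be illustrated with a minimal, dependency-free sketch: roll out a toy simulation, measure how far the outcome is from a target, and use the gradient of that loss with respect to a control parameter to improve the control. This is not FluidLab's actual code or API — the dynamics, names, and numbers here are purely illustrative assumptions (in FluidLab, Taichi computes such gradients automatically through far richer fluid dynamics).

```python
# Illustrative sketch of gradient-based trajectory optimization,
# the core idea behind differentiable simulators like FluidLab's.
# All dynamics and parameters here are toy assumptions.

def simulate(v, steps=50, dt=0.1):
    """Roll out trivial dynamics: position advances by v * dt each step."""
    x = 0.0
    for _ in range(steps):
        x += v * dt
    return x

def loss_and_grad(v, target=4.0, steps=50, dt=0.1):
    """Squared distance to a target position, with the gradient with
    respect to the control v derived by hand; reverse-mode autodiff
    (as Taichi provides) would yield the same value."""
    x_final = simulate(v, steps, dt)
    loss = (x_final - target) ** 2
    grad = 2.0 * (x_final - target) * steps * dt  # d(x_final)/dv = steps * dt
    return loss, grad

v = 0.0  # control parameter: the velocity to apply
for _ in range(200):
    loss, grad = loss_and_grad(v)
    v -= 0.01 * grad  # gradient descent on the control
```

After the loop, `v` has converged so that the simulated final position matches the target. FluidLab applies the same pattern, but the "control" is a whole robot trajectory and the gradients flow through coupled solid-fluid dynamics.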
FluidLab's potential extends beyond this initial work, which transferred optimized trajectories from simulation to real-world tasks in an open-loop manner. The team next plans to develop a closed-loop policy that performs real-time fluid manipulation from state or visual observations, transferring learned policies to real-world scenes. The publicly available platform can support future research on complex fluid manipulation methods. Professor Ming Lin highlights that robots will need to handle a variety of liquids in day-to-day tasks, which poses a significant computational challenge for real-time autonomous systems.