Tuesday, June 18, 2024

A Step Towards Reliable And Safe Flying Autopilots


Researchers at MIT have developed an innovative AI-driven methodology to effectively manage autonomous robots, addressing the frequently conflicting objectives of safety and stability.

This video shows how the researchers used their technique to effectively fly a simulated jet aircraft in a scenario where it had to stabilize to a target near the ground while maintaining a very low altitude and staying within a narrow flight corridor.

Courtesy of the researchers

Human pilots can be prepared for challenging missions through training and assistance, but robots struggle to stabilize an aircraft while avoiding obstacles. Because of this stabilize-avoid problem, current Artificial Intelligence (AI) techniques are often unable to accomplish their goals safely.

Researchers at MIT have developed an innovative approach that outperforms current methods in handling challenging stabilize-avoid problems. Their machine-learning strategy achieves increased safety and a tenfold improvement in stability, ensuring the agent reaches and remains stable inside its target area. As a demonstration, the researchers' controller flew a simulated jet through a constrained space without colliding.


The stabilize-avoid challenge

The researchers approach the problem in two steps. First, they reframe it as a constrained optimization problem, in which the agent must reach and stabilize at its goal while staying within a specific region; the constraints enforce obstacle avoidance. Second, they reformulate the constrained optimization problem into its epigraph form and solve it with a deep reinforcement learning algorithm. This reformulation lets them bypass the difficulties other methods encounter when applying reinforcement learning to constrained problems.
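The epigraph reformulation mentioned above can be sketched on a toy problem. This is not the researchers' implementation; the objective, constraint, and grid-search solver below are illustrative assumptions. The idea is that a constrained problem, minimize f(x) subject to g(x) ≤ 0, becomes: minimize z subject to max(f(x) − z, g(x)) ≤ 0, folding the objective and the safety constraint into a single scalar that a learning algorithm can treat as one cost signal.

```python
def f(x):
    # Toy objective: squared distance to a target at x = 2
    return (x - 2.0) ** 2

def g(x):
    # Toy safety constraint: stay within |x| <= 1.5 (g <= 0 means safe)
    return abs(x) - 1.5

def epigraph_cost(x, z):
    # Single scalar that is <= 0 exactly when both f(x) <= z and g(x) <= 0
    return max(f(x) - z, g(x))

def solve_by_grid(xs, zs):
    # Brute-force epigraph solve: smallest bound z for which a feasible
    # x exists; among feasible states, return the one with lowest cost.
    for z in zs:  # z scanned from small to large
        feasible = [x for x in xs if epigraph_cost(x, z) <= 0]
        if feasible:
            return z, min(feasible, key=f)
    return None, None

xs = [i / 100 for i in range(-300, 301)]  # candidate states
zs = [i / 100 for i in range(0, 501)]     # candidate objective bounds
z_star, x_star = solve_by_grid(xs, zs)
print(z_star, x_star)  # the best safe state sits on the constraint boundary
```

In the paper's setting the grid search is replaced by deep reinforcement learning over aircraft dynamics, but the structure is the same: safety violations and objective shortfall are combined into one epigraph cost the agent drives below zero.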

No points for second place

The researchers conducted control experiments with varied initial conditions to evaluate their strategy. In some simulations, the autonomous agent must reach a target area while making quick maneuvers to avoid approaching obstacles. Their method surpassed all baselines, stabilizing every trajectory while maintaining safety. They also tested it by recreating a scenario from the movie "Top Gun," in which a jet aircraft had to stabilize to a target near the ground while staying within a narrow, low-altitude flight corridor. Despite the complexity of the jet model, the researchers' controller outperformed every other technique at preventing collisions and stalls.

In the future, this technique could aid in designing controllers for dynamic robots with safety and stability requirements, such as delivery drones. It might also be incorporated into larger systems, such as one that activates to help a driver regain control of a car that starts to skid on a slick road. The researchers want to enhance their method by accounting for mismatches between the model dynamics and reality and by considering uncertainty during optimization. They also plan to test it on hardware and evaluate its real-world performance.

Reference: The work is funded, in part, by MIT Lincoln Laboratory under the Safety in Aerobatic Flight Regimes program.

Nidhi Agarwal
Nidhi Agarwal is a journalist at EFY. She is an Electronics and Communication Engineer with over five years of academic experience. Her expertise lies in working with development boards and IoT cloud. She enjoys writing as it enables her to share her knowledge and insights related to electronics, with like-minded techies.
