Robots Use Neural Motion Planning To Navigate Challenging Obstacles in Unfamiliar Environments

Humans can grab a book from a shelf with little obvious thought. But it’s a complex process for the brain that involves planning and navigating around obstacles, like other books or knickknacks. Robotics researchers have struggled to replicate this kind of human movement when their systems perform similar tasks. Known as motion planning, the process of training a robot to get an object from one point to another without hitting any obstacles takes time and resources because the robot can’t react dynamically like humans in unknown environments.

A team from Carnegie Mellon University’s Robotics Institute (RI) has developed Neural Motion Planning to help improve how robots react in new environments. The data-driven approach uses a single, versatile artificial intelligence network to perform motion planning in various unfamiliar household environments, like cabinets, dishwashers and refrigerators.

"Sometimes when you deploy a robot, you want it to operate in unstructured or unknown settings - environments where you can’t assume that you know everything," said Murtaza Dalal, an RI doctoral candidate. "That’s where these classic motion planning methods break down. One big issue is that these algorithms are very slow because they have to do thousands, maybe even millions, of collision checks."
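To see why collision checking dominates the cost of classical planners, consider a minimal sketch (not the team's code, just an illustration with made-up obstacle shapes): a sampling-based planner must densely test every candidate motion against every obstacle, so the number of point checks multiplies quickly.

```python
# Illustrative only: shows why classical sampling-based planners are slow.
# Obstacles here are hypothetical 2D circles (cx, cy, radius).

def in_collision(point, obstacles):
    """Check one 2D point against circular obstacles."""
    return any((point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= r ** 2
               for cx, cy, r in obstacles)

def edge_collides(a, b, obstacles, resolution=100):
    """Densely interpolate the straight segment a -> b and check each sample.
    One candidate motion already costs `resolution + 1` point checks."""
    for i in range(resolution + 1):
        t = i / resolution
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        if in_collision(p, obstacles):
            return True
    return False

def total_checks(num_edges=1000, resolution=100):
    """A planner that tries 1,000 candidate edges performs over
    100,000 collision checks - the bottleneck Dalal describes."""
    return num_edges * (resolution + 1)
```

A learned policy sidesteps this loop at deployment time: the cost of exploring candidates is paid once, during training in simulation.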

Neural Motion Planning was inspired by how humans gather diverse experiences to practice and gradually increase proficiency. When acquiring new skills, humans start with slow, unsure behavior and progress to fast, dynamic motions. Neural Motion Planning allows robots to be more versatile in unfamiliar environments and to generally adapt when moving objects.

Researchers simulated millions of complex environments to train Neural Motion Planning. In these simulations, robots encountered household environments - shelves, cubbies, microwaves, dishwashers, open boxes and cabinets - and sometimes had to maneuver around random objects, like a puppy or a vase. The models were trained to perform fast, reactive motion planning. This process and data were distilled into a generalist policy, so that when the robot was deployed in the real world it could perform tasks in environments different from those it had seen before.
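The distillation step described above can be sketched as behavior cloning: roll out a slow expert planner across many simulated scenes, record its observation-action pairs, then fit a single fast policy to imitate it. The sketch below is a toy stand-in under stated assumptions (a one-dimensional observation and a least-squares fit), not the paper's actual training code.

```python
# Hypothetical sketch of distilling a slow planner into a fast policy.
# All function names and the linear "expert" are illustrative assumptions.

def expert_planner(obs):
    """Stand-in for a slow classical planner: maps an observation to an action."""
    return 2.0 * obs + 1.0  # pretend the optimal action is linear in obs

def collect_dataset(scenes):
    """Run the expert in many simulated scenes, logging (obs, action) pairs."""
    return [(obs, expert_planner(obs)) for obs in scenes]

def fit_policy(dataset):
    """Least-squares fit of action = w * obs + b -- the 'generalist policy'
    that replaces the expensive planner at deployment time."""
    n = len(dataset)
    sx = sum(o for o, _ in dataset)
    sy = sum(a for _, a in dataset)
    sxx = sum(o * o for o, _ in dataset)
    sxy = sum(o * a for o, a in dataset)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return lambda obs: w * obs + b

policy = fit_policy(collect_dataset([0.0, 1.0, 2.0, 3.0]))
```

The key property, mirrored even in this toy: the fitted policy answers queries in constant time and generalizes to observations it never saw during data collection.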

"We have seen amazing successes in large-scale learning for vision and language - think ChatGPT - but not in robotics. Not yet," said Deepak Pathak, the Raj Reddy Assistant Professor in the RI. "This work is a stepping stone toward that goal. Neural Motion Planning uses the simple recipe of learning at scale in simulation to produce a large degree of generalization in the real world. It works across scenes with different backgrounds, objects, obstacles and even entire scene arrangements."

When used on a robotic arm in the lab, Neural Motion Planning successfully navigated unfamiliar environments. The robotic system was given a three-dimensional representation of the scene in its starting state, created using depth cameras, and presented with a goal position - where researchers wanted the robotic arm to end up. Neural Motion Planning then produced the joint configurations that moved the robotic arm from the starting point to the endpoint.
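The deployment pipeline just described - depth-camera point cloud in, joint configurations out - can be sketched as an interface. This is an assumed shape, not the team's API; `NeuralPlanner` and all field names are hypothetical, and the stand-in policy simply interpolates in joint space where the real network would output reactive, collision-aware steps.

```python
# Illustrative interface for the deployment pipeline; all names are assumptions.
from dataclasses import dataclass

@dataclass
class PlanRequest:
    point_cloud: list    # 3D points from depth cameras, as (x, y, z) tuples
    start_joints: tuple  # current joint angles of the arm
    goal_joints: tuple   # joint configuration that reaches the goal position

class NeuralPlanner:
    """Stand-in for the learned policy: consumes the scene representation
    and goal, emits a sequence of joint configurations."""

    def plan(self, req, steps=5):
        # Toy policy: linear interpolation from start to goal in joint space.
        # The real network would condition on req.point_cloud to avoid obstacles.
        path = []
        for i in range(1, steps + 1):
            t = i / steps
            path.append(tuple(s + t * (g - s)
                              for s, g in zip(req.start_joints, req.goal_joints)))
        return path
```

The interesting part, which this sketch omits, is that a single trained network fills the `plan` role across scenes it has never encountered.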

"It was exciting to see a single model deftly avoid diverse household obstacles including lamps, plants, bookcases and cabinet doors while moving the robot arm to complete tasks," said RI master’s student Jiahui Yang. "This feat was enabled by massively scaling up data generation, following a similar recipe to the success of machine learning in vision and language."

In addition to Dalal, Pathak and Yang, the research team included Russell Mendonca, an RI doctoral candidate; Youssef Khaky, a robotics simulation engineer at the National Robotics Engineering Center; and Ruslan Salakhutdinov, the UPMC Professor of Computer Science in CMU’s Machine Learning Department.
