Researchers help robots navigate crowded spaces with new visual perception method
A team of researchers at the University of Toronto has found a way to enhance the visual perception of robotic systems by coupling two different types of neural networks. The innovation could help autonomous vehicles navigate busy streets or enable medical robots to work effectively in crowded hospital hallways.

"What tends to happen in our field is that when systems don't perform as expected, the designers make the networks bigger - they add more parameters," says Jonathan Kelly, an assistant professor at the University of Toronto Institute for Aerospace Studies in the Faculty of Applied Science & Engineering. "What we've done instead is to carefully study how the pieces should fit together. Specifically, we investigated how two pieces of the motion estimation problem - accurate perception of depth and motion - can be joined together in a robust way."

Researchers in Kelly's Space and Terrestrial Autonomous Robotic Systems lab aim to build reliable systems that can help humans accomplish a variety of tasks. For example, they've designed an electric wheelchair that can automate common tasks such as navigating through doorways. More recently, they've focused on techniques that will help robots move out of the carefully controlled environments in which they are commonly used today and into the less predictable world that humans are accustomed to navigating.

"Ultimately, we are looking to develop situational awareness for highly dynamic environments where people operate, whether it's a crowded hospital hallway, a busy public square or a city street full of traffic and pedestrians," says Kelly.
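To make the depth-and-motion coupling concrete, here is a minimal sketch (not the Toronto team's actual method) of one classical way the two quantities combine: given per-pixel depth from one network and optical flow from another, the camera's six-degree-of-freedom velocity can be recovered by least squares from the rigid-motion field equations. The function names and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def flow_rows(x, y, Z):
    """Two rows of the instantaneous motion-field model for one pixel
    at normalized image coordinates (x, y) with depth Z (unit focal length):
    [u, v]^T = A(x, y, Z) @ [vx, vy, vz, wx, wy, wz]^T."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def recover_egomotion(pts, depths, flows):
    """Least-squares camera velocity from per-pixel depth + optical flow."""
    A = np.vstack([flow_rows(x, y, Z) for (x, y), Z in zip(pts, depths)])
    b = np.concatenate(flows)
    motion, *_ = np.linalg.lstsq(A, b, rcond=None)
    return motion  # (vx, vy, vz, wx, wy, wz)

# Synthetic check: simulate flow from a known camera motion, then recover it.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(50, 2))
depths = rng.uniform(2.0, 10.0, size=50)
true_motion = np.array([0.1, -0.05, 0.2, 0.01, 0.02, -0.01])
flows = [flow_rows(x, y, Z) @ true_motion for (x, y), Z in zip(pts, depths)]
est = recover_egomotion(pts, depths, flows)
print(np.allclose(est, true_motion))  # noise-free flow recovers the motion
```

In practice, the research challenge the article alludes to lies in making this joining step robust when both the learned depth and the learned flow are noisy or inconsistent, rather than noise-free as in this toy example.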