3 Questions: Honing robot perception and mapping

Luca Carlone and Jonathan How of MIT LIDS discuss how future robots might perceive and interact with their environment.

Walking to a friend's house or browsing the aisles of a grocery store might feel like simple tasks, but they in fact require sophisticated capabilities. That's because humans are able to effortlessly understand their surroundings and detect complex information about patterns, objects, and their own location in the environment.

What if robots could perceive their environment in a similar way? That question is on the minds of MIT Laboratory for Information and Decision Systems (LIDS) researchers Luca Carlone and Jonathan How. In 2020, a team led by Carlone released the first iteration of Kimera, an open-source library that enables a single robot to construct a three-dimensional map of its environment in real time, while labeling different objects in view. Last year, Carlone's and How's research groups (SPARK Lab and Aerospace Controls Lab) extended the system to teams of robots with Kimera-Multi. A paper associated with the project recently received this year's IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, given to the best paper published in the journal in 2022.

Carlone, who is the Leonardo Career Development Associate Professor of Aeronautics and Astronautics, and How, the Richard Cockburn Maclaurin Professor in Aeronautics and Astronautics, spoke to LIDS about Kimera-Multi and the future of how robots might perceive and interact with their environment.

Q: Currently your labs are focused on increasing the number of robots that can work together in order to generate 3D maps of the environment. What are some potential advantages to scaling this system?