Robots learn to handle objects, understand new places

After scanning a room, a robot points to the keyboard it was asked to locate. It uses context to identify objects, such as the fact that a keyboard is usually in front of a monitor.
Infants spend their first few months learning to find their way around and to manipulate objects, and they are very flexible about it: Cups can come in different shapes and sizes, but they all have handles. So do pitchers, so we pick them up the same way. Similarly, your personal robot of the future will need the ability to generalize - for example, to handle your particular set of dishes and put them in your particular dishwasher.

In Cornell's Personal Robotics Laboratory, a team led by Ashutosh Saxena, assistant professor of computer science, is teaching robots to manipulate objects and find their way around in new environments. They reported two examples of their work at the 2011 Robotics: Science and Systems Conference, held June 27 at the University of Southern California.

A common thread running through the research is "machine learning" - programming a computer to observe events and find commonalities. With the right programming, for example, a computer can look at a wide array of cups, find their common characteristics and then be able to identify cups in the future.
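The idea of finding commonalities across examples can be illustrated with a toy nearest-neighbour classifier. This is only a sketch of the general machine-learning concept, not the Cornell lab's actual system; the features (handle, height/width ratio, open top) and labels here are hypothetical.

```python
import math

# Hypothetical training examples: (features, label), where
# features = (has_handle, height/width ratio, open_top).
training = [
    ((1.0, 1.1, 1.0), "cup"),
    ((1.0, 0.9, 1.0), "cup"),
    ((1.0, 1.3, 1.0), "cup"),
    ((0.0, 0.2, 0.0), "plate"),
    ((0.0, 2.5, 1.0), "vase"),
]

def classify(features):
    """Label a new object by its closest training example."""
    _, label = min(training, key=lambda ex: math.dist(ex[0], features))
    return label

# A never-seen cup shape is still recognised from its shared features.
print(classify((1.0, 1.0, 1.0)))  # -> cup
```

A real system would learn from many more examples and far richer features (3-D point clouds, context such as nearby objects), but the principle is the same: shared characteristics, not exact matches, drive recognition.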