A robot learns how to tidy up after you
Sooner than you think, we may have robots to tidy up our homes.
Researchers in Cornell’s Personal Robotics Lab have trained a robot to survey a room, identify all the objects, figure out where they belong and put them away.
Their new algorithms for identifying and placing objects (the underlying methods a computer is programmed to follow) are described in the May online edition of the International Journal of Robotics Research, and some aspects of the work were presented at the International Conference on Robotics and Automation, May 14-18 in St. Paul, Minn.
Previous work has dealt with placing single objects on a flat surface, said Ashutosh Saxena, assistant professor of computer science. "Our major contribution is that we are now looking at a group of objects, and this is the first work that places objects in non-trivial places," he said.
The new algorithms allow the robot to consider the nature of an object in deciding what to do with it. "It learns not to put a shoe in the refrigerator," explained graduate student Yun Jiang. And while a shoe can be placed stably on any flat surface, it should go on the floor, not on a table.
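The distinction the paragraph draws, between where an object *can* rest stably and where it *should* go, can be sketched as a learned preference lookup. This is a minimal illustration, not the lab's actual model; the object names and location associations below are hypothetical placeholders.

```python
# Hypothetical learned associations between object types and appropriate
# locations, standing in for what the robot learns from training examples.
PREFERRED_LOCATION = {
    "shoe": "floor",
    "plate": "dish rack",
    "milk": "refrigerator",
}

def appropriate_location(obj: str) -> str:
    """Return the learned preferred location for an object.

    Falls back to a generic flat surface when no preference was learned.
    """
    return PREFERRED_LOCATION.get(obj, "table")

print(appropriate_location("shoe"))  # "floor", not just any stable surface
print(appropriate_location("mug"))   # no learned preference: "table"
```

A real system would learn these associations from labeled examples rather than hard-code them, but the decision it encodes, ruling out semantically wrong placements before considering stability, is the one described above.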
The researchers tested placing dishes, books, clothing and toys on tables and in bookshelves, dish racks, refrigerators and closets. The robot was up to 98 percent successful in identifying and placing objects it had seen before. It was able to place objects it had never seen before, but success rates fell to an average of 80 percent. Ambiguously shaped objects, such as clothing and shoes, were most often misidentified.
The robot begins by surveying the room with a Microsoft Kinect 3-D camera, originally made for video gaming but now being widely used by robotics researchers. Many images are stitched together to create an overall view of the room, which the robot’s computer divides into blocks based on discontinuities of color and shape. The robot has been shown several examples of each kind of object and learns what characteristics they have in common. For each block it computes the probability of a match with each object in its database and chooses the most likely match.
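The final step of that recognition pass, computing a match probability for each object class and keeping the most likely one, reduces to an argmax over the database. A minimal sketch, assuming the per-class probabilities for one segmented block have already been computed (the class names and scores here are made up):

```python
def classify_block(match_probs: dict) -> str:
    """Return the object label with the highest match probability
    for one segmented block of the room image."""
    return max(match_probs, key=match_probs.get)

# Hypothetical probabilities computed for a single block.
scores = {"shoe": 0.12, "bowl": 0.71, "book": 0.17}
print(classify_block(scores))  # "bowl"
```

The hard work in the real system lies in producing those probabilities from the Kinect data; this sketch only shows the selection rule applied afterward.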
For each object the robot then examines the target area to decide on an appropriate and stable placement. Again it divides a 3-D image of the target space into small chunks and computes a series of features of each chunk, taking into account the shape of the object it’s placing. The researchers train the robot for this task by feeding it graphic simulations in which placement sites are labeled as good and bad, and it builds a model of what good placement sites have in common. It chooses the chunk of space with the closest fit to that model.
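The placement search described above can be sketched the same way: divide the target space into chunks, compute a feature vector per chunk, and score each against a learned model of good placement sites. The linear scoring model, the feature names, and the weights below are illustrative assumptions, not the researchers' actual learned model.

```python
def score_chunk(features, weights):
    """Score one chunk of the target space against the good-placement model
    (here simplified to a weighted sum of chunk features)."""
    return sum(f * w for f, w in zip(features, weights))

def best_placement(chunks, weights):
    """Choose the chunk whose features best fit the model."""
    return max(chunks, key=lambda c: score_chunk(c["features"], weights))

# Hypothetical learned weights: reward flatness and support, penalize clutter.
weights = [0.8, 0.5, -0.3]
chunks = [
    {"pos": (0, 0), "features": [0.9, 0.8, 0.1]},  # flat, well supported
    {"pos": (1, 0), "features": [0.2, 0.3, 0.9]},  # sloped and cluttered
]
print(best_placement(chunks, weights)["pos"])  # (0, 0)
```

The training described in the paragraph, feeding the robot simulations with sites labeled good and bad, is what would produce the weights; the search itself is just this comparison repeated over every chunk.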
Finally the robot creates a graphic simulation of how to move the object to its final location and carries out those movements. These are practical applications of computer graphics far removed from gaming and animating movie monsters, Saxena noted.
A robot with a success rate less than 100 percent would still break an occasional dish. Performance could be improved, the researchers say, with cameras that provide higher-resolution images, and by preprogramming the robot with 3-D models of the objects it is going to handle, rather than leaving it to create its own model from what it sees. The robot sees only part of a real object, Saxena explained, so a bowl could look the same as a globe. Tactile feedback from the robot’s hand would also help it to know when the object is in a stable position and can be released.
In the future, Saxena says he’d like to add further "context," so the robot can respond to more subtle features of objects. For example, a computer mouse can be placed anywhere on a table, but ideally it should go beside the keyboard.