Mae demonstrates her understanding of structured-English commands during a demo in Rhodes Hall. In the background, from left, graduate students Vasu Raman, Jim Jing and Cameron Finucane stand with Hadas Kress-Gazit.
Move over, Jetsons. A humanoid robot named Mae is traipsing around Cornell's Autonomous Systems Lab, guided by plain-English instructions and sometimes even appearing to get frustrated.

Mae understands and executes English commands, thanks to algorithms and a software toolkit called Linear Temporal Logic Mission Planning (LTLMoP) being developed in the lab of Hadas Kress-Gazit, assistant professor of mechanical and aerospace engineering. According to Kress-Gazit, the future of robotics lies in robots' ability to easily understand everyday users and to act reliably in different situations.

"The big picture is that we want to have anybody tell the robot what to do," explained Kress-Gazit, who studies how to create provably correct, high-level behaviors for robots. "You don't want to have a programmer who's been doing the job forever to have to write the code for every single behavior, as is currently done in the field. You want to take what someone said and automatically generate the code for the robot to successfully accomplish its task."

The LTLMoP toolkit combines logic, language and control algorithms.
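To make the pipeline concrete: LTLMoP's core move is translating structured-English commands into linear temporal logic formulas, from which a correct-by-construction controller can then be synthesized. The minimal sketch below illustrates that translation step only. It is not LTLMoP's actual API; the pattern table, function name, and example commands are all hypothetical stand-ins for the real toolkit's much richer grammar.

```python
# Illustrative sketch only -- NOT LTLMoP's real interface. It shows the
# general idea of mapping structured-English command patterns to linear
# temporal logic (LTL) formulas, the step that precedes controller synthesis.
# ASCII LTL notation: "[]" means "always", "<>" means "eventually".

import re

# Hypothetical pattern table: structured-English template -> LTL template.
PATTERNS = [
    # "visit kitchen" -> repeatedly reach the region (infinitely often)
    (re.compile(r"^visit (\w+)$"), "[]<>({0})"),
    # "always avoid stairwell" -> never enter the region
    (re.compile(r"^always avoid (\w+)$"), "[](!{0})"),
    # "if you are sensing person then go to lobby" -> reactive response
    (re.compile(r"^if you are sensing (\w+) then go to (\w+)$"),
     "[]({0} -> <>({1}))"),
]

def english_to_ltl(sentence: str) -> str:
    """Translate one structured-English sentence into an LTL formula string."""
    sentence = sentence.strip().lower()
    for pattern, template in PATTERNS:
        match = pattern.match(sentence)
        if match:
            return template.format(*match.groups())
    raise ValueError(f"Unrecognized command: {sentence!r}")

if __name__ == "__main__":
    spec = [
        "visit kitchen",
        "always avoid stairwell",
        "if you are sensing person then go to lobby",
    ]
    for line in spec:
        print(f"{line!r:50} -> {english_to_ltl(line)}")
```

In the real toolkit, the resulting formulas feed a synthesis algorithm that either produces a controller guaranteed to satisfy the specification or reports that no such behavior exists, which is what makes the generated behaviors provably correct rather than merely scripted.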