Stanford’s Robotics Legacy

An ensemble of robots Stanford researchers have developed and studied over the years. (Image credit: Eric Nyquist)

For decades, Stanford University has been inventing the future of robotics. Back in the 1960s, that future began with an Earth-bound moon rover and one of the first artificially intelligent robots, the humbly christened Shakey. At that time, many people envisioned robots as the next generation of household helpers, loading the dishwasher and mixing martinis. From those early ambitions, though, most robots moved out of the home and to the factory floor, their abilities limited by available technology and their structures too heavy and dangerous to mingle with people.

But research into softer, gentler and smarter robots continued.

Thanks in large part to advances in computing power, robotics research these days is thriving. At Stanford alone, robots scale walls, flutter like birds, wind and swim through the depths of the earth and ocean, and hang out with astronauts in space. And, with all due respect to their ancestors, they’re a lot less shaky than they used to be.

Here we look at Stanford’s robotic legacy - the robots, the faculty who make them and the students who will bring about the future of robotics.

Stanford’s robots have evolved from relatively bulky, rolling creations to nimble bots often built for specific tasks. We dug into our archives to highlight 12 of these inventions that, each in its own way, changed what the future of robots looks like. Some advanced technologies that are now widespread in robotics and elsewhere; others opened new frontiers, like the ocean, space or the side of a building. You can meet these robots below, along with stories, images and even videos of them hard at work, dating back to the 1960s.

Today’s bots tend to be smarter, gentler, smaller, faster and more efficient, but their predecessors - clunky and slow though they might have been - helped make possible these more agile descendants.

The Stanford Cart

Up in the foothills above campus in the late ’70s, a CAUTION ROBOT VEHICLE sign alerted visitors to keep an eye out for the bicycle-wheeled Stanford Cart, which could sometimes be seen rolling along the road. The Stanford Cart was a long-term project that took many forms from around 1960 to 1980. It was originally designed to test what it would be like to control a lunar rover from Earth and was eventually reconfigured as an autonomous vehicle.

(Image credit: The Board of Trustees of the Leland Stanford Junior University)

More about the Stanford Cart

Shakey

Named for its wobbly structure, Shakey was the first mobile robot that could perceive its surroundings and reason about its actions. Work on Shakey began in 1966, at the Stanford Research Institute’s Artificial Intelligence Center in Menlo Park. Shakey led to advances in artificial intelligence, computer vision, natural language processing, object manipulation and pathfinding.

Stanford Arm

Known as the Stanford Arm, this robotic arm was developed in 1969 and is now on display inside the Gates Building on the Stanford campus. It was one of two arms mounted to a table, where researchers and students used it for research and teaching for over 20 years, often focusing on applications in the manufacturing industry. In 1974, a version of the Stanford Arm was able to assemble a Ford Model T water pump. It was the precursor of many manufacturing robots still in use today.

(Image credit: L.A. Cicero)

Mobi

In the main hall of the Gates Building at Stanford, a three-wheeled, human-sized robot stands sentry by the door. It’s known as Mobi, and researchers developed it in the ’80s to study how autonomous robots could move through unstructured, human-made environments. Mobi was a testbed for several kinds of sensors and navigated with stereo vision, ultrasound and bump sensors.

(Image credit: L.A. Cicero)

STAIR

Pushing back against the trend of artificially intelligent robots designed for specific tasks, the Stanford AI Robot - STAIR - was going to be multi-talented, navigating through home and office environments, cleaning up, assembling new furniture or giving visitors a tour, à la the Jetsons’ Rosie.

Although STAIR came along 40 years later, the robot was very much created in the tradition of Shakey.

(Image credit: L.A. Cicero)

More about STAIR

Autonomous helicopters

Aiming to improve the capabilities of autonomous helicopters, Stanford researchers decided to build on the skills of expert human operators. The researchers developed an algorithm - called an apprenticeship learning algorithm - that enabled the autonomous helicopters to learn to fly by observing human operators flying other helicopters. The helicopters were then able to perform spectacular acrobatic feats, and the project went so well that the researchers decided there was no further work left to do on that topic.
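The article describes the approach only in broad strokes, but the core idea of learning from demonstration can be sketched in a few lines of code. The snippet below fits a simple linear controller to recorded expert state and control pairs (behavioral cloning); the data, dimensions and linear policy are invented for illustration, and the actual apprenticeship-learning work was considerably more sophisticated.

```python
# A minimal sketch of learning a flight controller from expert
# demonstrations (behavioral cloning): fit a mapping from observed
# helicopter states to the controls the human pilot applied.
# The data, dimensions and linear policy here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for logged expert flights: state (position, velocity,
# attitude, ...) paired with the pilot's four control channels.
states = rng.normal(size=(5000, 12))
controls = rng.normal(size=(5000, 4))

# Fit a linear policy u = W^T [s, 1] by regularized least squares.
X = np.hstack([states, np.ones((len(states), 1))])
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ controls)

def imitate(state):
    """Return the control command the learned policy would issue."""
    return np.append(state, 1.0) @ W

print(imitate(states[0]))
```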

(Image credit: Ben Tse)

Stickybot

The lizard-like Stickybot could scale smooth surfaces thanks to adhesive pads on its feet that mimic the sticking powers of a gecko’s toes. A robot like this could serve many purposes, including applications in search and rescue or surveillance.

Since their invention, gecko adhesives have helped drones, people and robots in space grab hold of otherwise hard-to-grasp surfaces.

(Image credit: L.A. Cicero)

More about Stickybot

Hedgehog

A toaster-sized cube with knobby corners may someday help us explore asteroids, comets and small moons. Dubbed Hedgehog, this tiny explorer turns the challenges of operating in low-gravity environments into an advantage. In places where there’s not enough gravity for wheeled vehicles to gain traction, Hedgehog moves in hops and flips - absorbing shocks with its bulbous corners. In June 2015, Hedgehog was tested on a parabolic flight to simulate microgravity and performed well. Next, the researchers are figuring out how to make their robot navigate autonomously.

(Image credit: Ben Hockman)

OceanOne

Shaped like a mermaid, OceanOne is a humanoid robotic diver that can be controlled by humans from the safety of dry land. The OceanOne team has already sent their robot to the briny deep to retrieve artifacts from a shipwreck and explore a volcano.

OceanOne’s operators can precisely control the robot’s grasping through haptic feedback, which lets them feel physical attributes of objects the robot touches with its hands. OceanOne and robots like it may someday take over dangerous and treacherous tasks for humans, including ship repair and underwater mining.
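As a rough illustration of the idea behind haptic feedback, the sketch below shows one cycle of a simplified bilateral teleoperation loop, in which contact forces measured at the robot’s hand are scaled back to the operator while the operator’s motion is scaled into a command for the robot. The gains, units and limits are hypothetical, not OceanOne’s.

```python
# Simplified single cycle of a bilateral teleoperation loop: scale the
# force measured at the robot's hand back to the operator's controller,
# and scale the operator's motion into a command for the robot.
# Gains, units and limits are assumed values for illustration.
def haptic_step(measured_force_n, operator_displacement_m,
                force_scale=0.5, motion_scale=1.0, max_force_n=15.0):
    # Force reflected to the operator, clipped to keep the device safe.
    feedback_n = max(-max_force_n,
                     min(max_force_n, force_scale * measured_force_n))
    # Position command forwarded to the robot's hand.
    robot_command_m = motion_scale * operator_displacement_m
    return feedback_n, robot_command_m

# Example: the robot presses on an object with 8 N while the operator
# moves their hand 2 cm.
print(haptic_step(measured_force_n=8.0, operator_displacement_m=0.02))
```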

(Image credit: Frederic Osada and Teddy Seguin/DRASSM)

Flapping robots

When it comes to the art of flying, nature has many tricks and techniques we’d like to borrow. Examples of this include how to better hover, fly in turbulence and stay aloft on wings that change shape. Studying the wings of bats and birds, researchers at Stanford created flapping robots with wings that morphed passively, meaning no one controlled how they changed shape. To further their work on flapping flight, this lab has also built a bird wind tunnel, studied vision stabilization in swans and put safety goggles on a bird named Obi - so they could measure its flight in subtle detail using laser light.

Recently, the researchers have taken their work into the wider world, reconstructing sensitive force-measuring equipment in the jungle to see how wild hummingbirds and bats hover in Costa Rica.

(Image credit: Courtesy Lentink Lab)

Vinebot

It resembles step 1 of a balloon animal, but the Vinebot is an example of soft robotics. As the nickname implies, Vinebot’s mode of movement was inspired by vines that grow just from the end.

The advantage of this method of travel is that Vinebot can reach destinations without moving the bulk of its body. Because it never needs to pick itself up or drag its body, Vinebot can grow through glue and over sharp objects without getting stuck or punctured.

(Image credit: L.A. Cicero)

More about Vinebot

JackRabbot

Named for the jackrabbits often seen on campus, the round, waist-height social robot JackRabbot 2 roams around Stanford, learning how to be a responsible pedestrian. JackRabbot 2’s street smarts will come from an algorithm, which the researchers are training with pedestrian data from videos, computer simulations and manually controlled tests of the robot. These data cover many unspoken rules of human walking etiquette, such as how to move through a crowd and how to let people nearby know where you’re headed next.
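As a toy illustration of what training on pedestrian data can mean, the sketch below fits a simple predictor that guesses a walker’s next position from the last few positions in a track. Everything in it is synthetic and illustrative; the actual JackRabbot models are far richer and account for the people nearby, not just a single track.

```python
# Toy sketch of learning pedestrian motion from recorded tracks:
# predict the next step of a trajectory from its last few steps.
# All data here is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def synthetic_track(n=20):
    """A noisy, gently turning walking path (stand-in for video data)."""
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros(2)
    steps = []
    for _ in range(n):
        heading += rng.normal(scale=0.1)
        pos = pos + 0.8 * np.array([np.cos(heading), np.sin(heading)])
        steps.append(pos)
    return np.array(steps)

# Build (last four positions -> next position) training pairs.
X, y = [], []
for track in (synthetic_track() for _ in range(200)):
    for t in range(3, len(track) - 1):
        X.append(track[t - 3:t + 1].ravel())
        y.append(track[t + 1])
X, y = np.array(X), np.array(y)

# Linear least-squares predictor of the next position.
A = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_next(history):
    """history: the pedestrian's last four (x, y) positions, shape (4, 2)."""
    return np.append(np.asarray(history).ravel(), 1.0) @ W

print(predict_next(synthetic_track()[:4]))
```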

JackRabbot 2 can also show its intentions with sounds, arm movements or facial expressions. Researchers are hoping to soon test JackRabbot 2’s autonomous navigation by having it deliver small items on campus.

(Image credit: Amanda Law)

When they got their start, Stanford’s robot makers didn’t know how slowly the field would move. Many thought the problems in robotics would be quickly solved. Today, the masterminds behind these diving, floating, growing, flapping, climbing, searching, grasping, feeling technologies see an exciting future - and a lot of challenges - ahead.

We asked the inventors of the bird wind tunnel, Vinebot, OceanOne, Stickybot and STAIR how they got their start, where they thought robotics was heading at the time and what excites them today. Below, we share their stories, and even a few throwback photos of their early days in engineering.

David Lentink

"I have largely avoided following trends because so many of them are focused on short-term advances. Also, by defining our own area of research we can focus more on collaborating with other labs instead of competing.”

In Lentink’s  Q&A  he describes his lifelong interest in flying machines and why he avoids following research trends in his field.

Allison Okamura

"When I was a graduate student, I definitely thought that this problem of robots being able to manipulate with their hands would be solved. I thought that a robot would be able to pick up and manipulate an object, it would be able to write gracefully with a pen, and it would be able to juggle.”

In her  Q&A , Okamura discusses her first robotics project and the resurgence of creativity in robotics she’s witnessing - and influencing.

Oussama Khatib

"It took years for technologies to develop, and it is only now that we are certain that we have what it takes to deliver on those promises of decades ago - we are ready to let robots escape from their cages and move into our human environment.”

Khatib discusses a fateful bus ride that brought him to Stanford and the thesis project he’s never left behind in this  Q&A.

Mark Cutkosky

"It’s clear that robotics has grown enormously. There is huge interest from our students. Many new applications seem to be within reach that looked like science fiction a couple of decades ago.”

In his  Q&A , Cutkosky describes how his research turned from robots in manufacturing to robots that can climb walls.

Andrew Ng

"When I was in high school, I did an internship […] and I did lot of photocopying. I remember thinking that if only I could automate all of the photocopying I was doing, maybe I could spend my time doing something else. That was one source of motivation for me to figure out how to automate a lot of the more repetitive tasks.”

Ng describes how he went from robots that play chess to developing robotics technology that traveled to the International Space Station in this  Q&A.

The future of robotics is still being written. With courses on topics such as drone-based delivery, creativity in robotics and autonomous navigation, Stanford has many opportunities for burgeoning roboticists to build on our past innovations and learn from our robotics experts.

These are some of the students who will be developing the next generation of robots that will likely be transporting, helping and interacting with us in the decades to come.

The Stanford Cart in 1979, testing its ability to avoid objects. (Image credit: The Board of Trustees of the Leland Stanford Junior University)

When the Stanford Cart was first built, the control signals between the remote operator and the cart could imitate the 2.6-second delay that occurred in radio communications between the Earth and the Moon. Given the challenges of steering the cart with such a delay, the researchers added a predictive system that showed the operators where the cart would be when it started to act on the next command. Despite this, the experiments made clear that a lunar vehicle would need to move extremely slowly to be steered reliably.
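A predictive display of this kind can be sketched with simple dead reckoning: project the vehicle forward over the delay, applying the commands that are already in flight. In the illustrative snippet below, the 2.6-second figure comes from the article, while the cart’s speed and motion model are made up.

```python
# Sketch of a predictive display for delayed teleoperation: dead-reckon
# the cart forward over the signal delay, applying the steering commands
# that have been sent but not yet executed, and draw a marker there.
# The cart's speed and motion model are invented for illustration.
import math

DELAY_S = 2.6      # simulated one-way Earth-Moon signal delay
SPEED_MPS = 0.2    # assumed cart speed

def predict_pose(x, y, heading, pending_turn_rates, dt=0.1):
    """Estimate where the cart will be when the next command arrives."""
    for i in range(round(DELAY_S / dt)):
        # Apply the queued turn command for this time slice, else go straight.
        omega = pending_turn_rates[i] if i < len(pending_turn_rates) else 0.0
        heading += omega * dt
        x += SPEED_MPS * math.cos(heading) * dt
        y += SPEED_MPS * math.sin(heading) * dt
    return x, y, heading

# The operator's screen would draw the cart at this predicted pose.
print(predict_pose(0.0, 0.0, 0.0, pending_turn_rates=[0.3] * 10))
```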

About a decade after it was built, the cart was reborn as an autonomous vehicle. Brought outside, it rolled along at a consistent walking pace, following a white line. This tracking worked sometimes, but inconsistencies in lighting, visual interference from other objects or an abrupt curve could all throw the cart off its course.

In its final form, the cart was equipped with 3D vision capabilities. It was also reconfigured to pause after each meter of movement and take 10-15 minutes to reassess its surroundings and reevaluate its planned path. In 1979, this cautious version of the cart successfully made its way 20 meters through a chair-strewn room in five hours without human intervention.

In 1987, the Stanford Cart was included in the Robot Theater at the Digital Computer Museum in Boston, alongside Shakey and the Stanford Arm. It is now on display at the Computer History Museum in Mountain View, California.


Footage from 1966 of the Stanford Cart, including a test of its ability to autonomously follow a white line (starting at 1:53).

EXTRA: On his personal website, Lester Earnest, who was executive officer at SAIL while the cart was in development, recalls posting the custom "CAUTION ROBOT VEHICLE" sign. He said it became a surprisingly costly item because it was stolen so often.

On its wheeled base, Shakey had cat whisker bump sensors and a push bar. Its center was a box of electronics that included a camera control unit and on-board computer. On top, the nearly 5-foot robot had a TV camera, an infrared triangulating range finder and an antenna for two-way radio communication.

Shakey’s operators sent it to carry out tasks via teleprinted instructions. Juddering through its "playroom," the robot had to avoid running into walls and blocks, and navigate up ramps and through doorways. The robot had an incomplete map of its room and kept track of its position by counting wheel revolutions, which was fairly imprecise. When revolution-counting did not bring Shakey to its intended location or when the robot found itself somewhere unexpected, it used its TV camera and range finder to scan landmarks and objects - all of which were painted either red or black so Shakey’s camera could see them.
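Counting wheel revolutions is the essence of odometry, and the occasional camera fix is a crude form of landmark-based correction. The sketch below illustrates both steps for a generic differential-drive base; the wheel dimensions and correction scheme are assumptions, not Shakey’s actual parameters.

```python
# Sketch of position tracking by counting wheel revolutions (odometry)
# for a differential-drive base, plus a crude reset when a surveyed
# landmark is sighted. All dimensions here are assumed, not Shakey's.
import math

WHEEL_RADIUS = 0.08   # meters (assumed)
WHEEL_BASE = 0.40     # distance between the drive wheels (assumed)

def integrate_odometry(pose, left_revs, right_revs):
    """Update (x, y, theta) from incremental wheel revolutions."""
    x, y, theta = pose
    dl = 2 * math.pi * WHEEL_RADIUS * left_revs
    dr = 2 * math.pi * WHEEL_RADIUS * right_revs
    d = (dl + dr) / 2.0
    theta += (dr - dl) / WHEEL_BASE
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)

def correct_with_landmark(pose, landmark_xy, measured_range, measured_bearing):
    """Recompute the position from a range and bearing to a known landmark,
    keeping the current heading estimate (a stand-in for Shakey's camera
    and range-finder fix)."""
    lx, ly = landmark_xy
    _, _, theta = pose
    x = lx - measured_range * math.cos(theta + measured_bearing)
    y = ly - measured_range * math.sin(theta + measured_bearing)
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
pose = integrate_odometry(pose, left_revs=2.0, right_revs=2.1)
pose = correct_with_landmark(pose, landmark_xy=(1.0, 0.5),
                             measured_range=0.7, measured_bearing=0.4)
print(pose)
```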


This video, narrated by Nils Nilsson, now the Kumagai Professor in the School of Engineering, Emeritus, at Stanford, was produced in 1972. The video inspired the Association for the Advancement of Artificial Intelligence to name their AI video competition awards “Shakeys.”

Given a task, such as "push the block off the platform," Shakey could accomplish its goal without the step-by-step instructions required by other robots. It did this by building up a repertoire of simple actions - such as roll, turn and tilt or pan the camera - and stringing those together to perform more complex tasks. Shakey also remembered multi-step plans it had executed in the past and could recall or adjust them for future tasks.
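A toy version of that idea, a small repertoire of primitives plus a library of remembered plans, might look like the following. It is a drastic simplification of Shakey’s actual planning system and uses made-up primitives and tasks.

```python
# Toy illustration of composing complex behavior from a repertoire of
# primitive actions and caching successful plans for reuse. This is a
# drastic simplification of Shakey's actual planner (STRIPS), with
# made-up primitives and tasks.
PRIMITIVES = {
    "roll": lambda: print("rolling forward"),
    "turn": lambda: print("turning"),
    "tilt": lambda: print("tilting camera"),
    "pan":  lambda: print("panning camera"),
}

plan_library = {}  # remembered multi-step plans, keyed by task name

def execute(task, plan=None):
    """Run a cached plan for the task, or store and run a new one."""
    if plan is not None:
        plan_library[task] = plan
    steps = plan_library.get(task)
    if steps is None:
        raise ValueError(f"no known plan for task: {task}")
    for step in steps:
        PRIMITIVES[step]()

# Teach a plan once, then reuse it later without restating the steps.
execute("push block off platform", plan=["pan", "turn", "roll", "roll"])
execute("push block off platform")
```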

The Stanford Research Institute formally separated from Stanford University in 1970 and the Shakey project ended in 1972 when funders required more direct applications. In 1977, the Stanford Research Institute was renamed SRI International. Shakey can still be seen on display at the Computer History Museum in Mountain View, California.

EXTRA: Shakey was originally hardwired to receive and send information back to its operator. To avoid twisting these wires, the researchers programmed Shakey’s software to occasionally interrupt whatever else was going on so the robot could unwind. Even once the robot became wireless, it would still sometimes perform this now-functionless dance. In this video of an SRI International symposium on Shakey, computer scientist and Shakey developer Bertram Raphael likened it to Shakey recalling a “previous life.”


STAIR’s first major achievement came in 2006 when researchers led by Andrew Ng, who was an assistant professor of computer science at the time, developed an algorithm that taught STAIR to identify five objects - a coffee cup, pencil, brick, book and martini glass - and the best way to pick each one up. With this knowledge, the robot could make its own decisions about how to handle unknown objects. For example, it decided to hold a roll of duct tape in a fashion that combined how it grasped a cup handle and a book.
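As a toy illustration of generalizing grasps from known objects to new ones, the sketch below picks a grasp strategy by comparing simple shape features with the robot’s known objects. STAIR’s real system learned where to grasp from camera data; the features, objects and strategies here are invented.

```python
# Toy sketch of choosing a grasp strategy for a new object by comparing
# simple shape features with objects the robot already knows. The
# features, objects and strategies are invented for illustration.
import numpy as np

# (width_cm, height_cm, has_opening) feature vectors for known objects.
known = {
    "coffee cup":    (np.array([8, 10, 1]), "wrap fingers around handle"),
    "pencil":        (np.array([1, 18, 0]), "pinch between two fingertips"),
    "book":          (np.array([15, 23, 0]), "grip flat faces from the sides"),
    "brick":         (np.array([10, 6, 0]), "grip flat faces from the sides"),
    "martini glass": (np.array([10, 17, 1]), "hold by the stem"),
}

def choose_grasp(features):
    """Pick the grasp used for the most similar known object."""
    name, (_, strategy) = min(
        known.items(),
        key=lambda kv: np.linalg.norm(kv[1][0] - features),
    )
    return name, strategy

# A new object, e.g. a roll of duct tape, borrows its grasp from the
# closest match among the known objects.
print(choose_grasp(np.array([11, 11, 1])))
```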

Eighteen months into the project, the researchers also collaborated with the Personal Robotics Program, led by Kenneth Salisbury, a professor of computer science and of surgery (now emeritus), to produce a personal robot prototype. It looked like a large torso with two fabric-covered arms, each of which could hold about ten pounds but also provide a gentle touch. At this point, the robot was remotely controlled, but the researchers were working on a platform to help it, STAIR and other robots become capable of more independent operation.

Members of the STAIR project and the Personal Robotics Program pursued many tasks, including unloading a dishwasher, preparing simple meals and fetching an object from another room, prompted by voice command. Although those tasks were fun, the main mission of STAIR was to integrate aspects of artificial intelligence research that were often worked on in isolation, such as language processing, machine vision, machine learning and decision analysis.

The STAIR project ended in 2009 but its legacy lives on in the Robot Operating System developed by the team over the course of the project. ROS is an open-source framework for writing robot software used by robotics labs all over the world - and beyond. It ran on Robonaut 2 aboard the International Space Station in 2014.
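For a sense of what working with ROS looks like, here is a minimal ROS 1 node written in Python with rospy. It simply publishes a text message ten times a second; the node and topic names are arbitrary, and running it requires a ROS installation and a running roscore.

```python
#!/usr/bin/env python
# Minimal ROS 1 node (rospy): publishes a string message at 10 Hz.
# Node and topic names are arbitrary choices for this example.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("hello_robot")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from a ROS node"))
        rate.sleep()

if __name__ == "__main__":
    main()
```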


A gecko’s toes are made of a series of microscopic flaps - which are themselves constructed of even smaller hair-like structures that are bundles of even smaller nanostructures. When in very close contact with a surface, those flaps create a molecular attraction called van der Waals forces.

With these forces in place, a gecko can support its whole body with one toe. The gecko-inspired adhesive is a simplified version of what geckos have but works the same way. It therefore does not require any squeezing and leaves no residue because it is not gooey or tacky.

Making the adhesive grip is as simple as pushing the flaps in the right direction. The Stickybot team activated the robot’s grip by adding a tail, which pulled the flaps into their "on" position.
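A quick back-of-the-envelope calculation shows why such an adhesive is attractive for climbing robots: even modest working strengths translate into small pad areas. The numbers below are assumed round figures, not measurements from Stickybot.

```python
# Back-of-the-envelope check of how much adhesive area a climbing robot
# needs. The robot mass and the adhesive's usable shear strength are
# assumed round numbers for illustration, not Stickybot's actual specs.
ROBOT_MASS_KG = 0.4
G = 9.81
USABLE_SHEAR_KPA = 10.0   # assumed working strength of the adhesive

required_area_m2 = (ROBOT_MASS_KG * G) / (USABLE_SHEAR_KPA * 1000)
print(f"adhesive area needed: {required_area_m2 * 1e4:.1f} cm^2")
```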


The adhesive that brought a lab to new heights also picked up basketballs, wallets and burritos, and helped tiny drones tug objects 40 times their weight. It even made its way to the International Space Station and a parabolic airplane flight - which simulates zero-G conditions. Now, the researchers are hoping the adhesive can venture outside the space station (to eventually help clean up space debris) and into consumer goods.


Graduate students Joseph Greer, left, and Laura Blumenschein, right, work with Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara, on a prototype of the Vinebot. (Image credit: L.A. Cicero)

Thanks to its squishy, air-filled form, Vinebot can squeeze through tight spaces, wrap around obstacles and grow up into the air. The lab that developed Vinebot has several versions. Some navigate somewhat spontaneously - when these encounter an obstacle, they simply grow around it - and others have predetermined shapes or pulley systems that direct them.


The researchers envision many roles for Vinebot. It could help with search and rescue operations, extending into the sky as a makeshift radio tower or growing through and propping up rubble. Topped with a camera, it could help locate buried people - and artifacts. Vinebot could make its way to a fire, fill with water and then burst upon arrival at the flame. The lab is also considering medical applications. Unlike a tube pushed through the body, this type of soft robot could grow without dragging along delicate structures.


