‘Truly autonomous systems won’t be around any time soon’

The Mercator rover at the AMADEE-20 Mars simulation of the Austrian Space Forum in Israel; Gerald Steinbauer-Wagner from the Institute of Software Technology at Graz University of Technology. Image source: Florian Voggeneder - ÖWF/Thomas Meier - FF TU Graz

Gerald Steinbauer-Wagner has been researching robots for almost 25 years, and they are now able to solve complicated tasks. However, they are not autonomous, says the researcher, as they are still unable to truly understand their environment.

News+Stories: Your LinkedIn profile says that you are looking for the truly autonomous robot. Doesn’t it exist yet?

Gerald Steinbauer-Wagner: That depends on the task and the context. What a robot can do and how much of it is autonomous depends on the environment in which the robot operates. I have been working in this field for almost 25 years. We used to be happy when the robot moved down a corridor in the lab. Now robots are also moving around outside, some perhaps even in traffic. The next goal is to use them in unstructured environments, in ‘nature’, if you like. In other words, you have to gradually familiarise yourself with different environments when working with robots. And real autonomy only comes into play when you confront the robot with a task that it is not yet fully able to perform. And when it enters an environment that it doesn’t know everything about. Then it has to think: What don’t I know and what do I need to learn?

Does this mean that robots currently labelled as ‘autonomous’ already have all the necessary information about the environment and their tasks in advance so that they can work flawlessly?

Steinbauer-Wagner: Yes, there are hardly any surprises when it comes to the tasks and environments of such robots. Truly autonomous systems will not be around in this form any time soon. At most for very simple tasks. But we probably won’t need them at all. What we certainly need are semi-autonomous systems that are aware of their limits and can cooperate directly with humans. To achieve this, the robot needs self-assessment skills for situations that it cannot solve because it has not been adequately prepared for them. Such situations can then be resolved through communication and cooperation with humans.

How capable are ‘autonomous’ robots at the moment?

Steinbauer-Wagner: In a research project with the fire service, we saw that even opening a door can be a major challenge for robots, because there are hundreds of door variants, sizes and locking mechanisms. Fully autonomous systems to support the emergency services are currently not an option for two reasons: the emergency services do not accept them, and the robots are not yet technically advanced enough to be used safely. This is also evident from how long autonomous driving has been promised and how many mistakes are still being made in this area.

What do robots need to be able to navigate in unknown terrain?

Steinbauer-Wagner: You can’t assume that a robot can orientate itself in the same way as humans. To do this, it would have to understand the environment - i.e. it needs to understand what it can and cannot climb onto and how it can interact with elements in the environment. This doesn’t work without prior information. For example, when we send our robots on off-road missions, we use processed satellite data so that the robot already has a rough picture of the environment. It works in pretty much the same way with rovers that explore Mars - they are fed with orbiter data. And when it comes to autonomous driving in road traffic, the developers use high-precision maps, otherwise the whole thing won’t work.

What is currently the biggest obstacle on the path to robot autonomy? The sensors to perceive the environment, fast data processing or decision-making?

Steinbauer-Wagner: All of these. But the interpretation or understanding of what the robot perceives is particularly difficult. When a robot stands in front of a cabinet, it has to understand that it can be opened and that there can be all kinds of things inside. Robots develop an understanding of how the world works with all its rules, but only very slowly.

Decision-making is also a challenge. There are different levels - at the very bottom, the decisions are closely linked to the hardware: How does the robot walk, or how does it drive? Above that are the navigation decisions. And the top level is about the mission: how does the robot divide the mission into individual actions and how does it achieve an overall result? A lot of research is still needed before a robot can make such decisions.
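
To make this layering concrete, here is a minimal Python sketch of such a three-level decision stack. The class names and the mission decomposition are illustrative assumptions, not the institute’s actual software.

```python
# Illustrative three-level decision stack; all names are hypothetical,
# not the actual TU Graz software.

class MotionController:
    """Bottom level: decisions tied to the hardware (how to walk or drive)."""
    def execute(self, command):
        print(f"low-level control: {command}")

class Navigator:
    """Middle level: decides how to move through the environment."""
    def __init__(self, controller):
        self.controller = controller

    def go_to(self, waypoint):
        # A real system would plan a path on a terrain map and follow it.
        self.controller.execute(f"drive towards {waypoint}")

class MissionPlanner:
    """Top level: splits the mission into individual actions."""
    def __init__(self, navigator):
        self.navigator = navigator

    def run(self, mission):
        for waypoint in mission:   # e.g. a list of sites to visit
            self.navigator.go_to(waypoint)

MissionPlanner(Navigator(MotionController())).run(["site A", "site B"])
```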

What are your main areas of research at TU Graz?

Steinbauer-Wagner: One important area is the interaction between robots and humans, which involves factors such as transparency, explainability and trust. For trust, it is extremely important that people have an understanding of what is happening in the machine. Or that they at least believe they understand the machine. However, robots and their algorithms are now so complex that it is difficult to find out why a robot has done or not done something. This is why the diagnosis and explainability of errors are an important part of our work.

Basically, however, we always look at the entire cycle: from perception to decision-making to execution. This cycle must be organised as intelligently and reliably as possible.

The use of robots in the outdoor sector has grown the most in recent years. We always have one or two projects running in this area. One aspect is analysing whether the robot can navigate the terrain - in other words, what data the robot can use, whether global data from satellites or laser-scanning flights, or local data from its onboard sensors, to determine whether it can traverse certain areas of the terrain or not. This is not so easy in off-road terrain.
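
As an illustration of this idea, here is a minimal Python sketch that blends a coarse global prior (as might be derived from satellite or airborne laser data) with local sensor evidence. The grid, probabilities and fusion rule are invented for illustration, not taken from these projects.

```python
import numpy as np

# Hypothetical sketch: fuse a coarse prior traversability map (e.g. derived
# from satellite or airborne laser data) with local sensor evidence.
# Each cell holds the estimated probability that it is traversable.

prior = np.full((100, 100), 0.5)   # coarse global prior; unknown = 0.5
prior[40:60, :] = 0.2              # e.g. a strip flagged as risky from above

def update_cell(p_prior, p_local, weight=0.7):
    """Blend a local sensor estimate into the prior (simple weighted fusion)."""
    return weight * p_local + (1 - weight) * p_prior

# A local laser scan suggests cell (45, 10) is actually flat and drivable:
prior[45, 10] = update_cell(prior[45, 10], p_local=0.9)

def traversable(cell, threshold=0.6):
    return prior[cell] >= threshold

print(traversable((45, 10)), traversable((50, 50)))  # True False
```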

What is particularly difficult?

Steinbauer-Wagner: Vegetation is always a pain! Our robots mainly use laser scanners, which provide a geometric image of the world. Whether the robot is standing in front of dense tall grass or a wall makes little difference in the laser image. In some projects, our robots drive in the forest and have to determine whether the ground is hard or too soft to drive on. Different weather conditions are also challenging when navigating off-road and analysing the surface. We are currently working on such issues.

How do you and your team improve the robot’s perception, orientation and decision-making in such cases?

Steinbauer-Wagner: We are currently testing different types of sensors. We are experimenting with reflection values from laser scanners and with various camera systems such as infrared cameras, which are promising for recognising vegetation. Radar technology could also be interesting. These are the current approaches to improving vegetation analysis and weather independence.
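
To sketch how such cues might be combined, the following hypothetical Python snippet classifies a point using laser reflectance and an infrared-based vegetation index. The thresholds and feature names are invented for illustration, not measured values from these projects.

```python
# Hypothetical illustration of combining laser reflectance with an
# infrared-based vegetation index (NDVI); thresholds are invented.

def classify_point(laser_reflectance, ndvi):
    """
    laser_reflectance: normalised return intensity from the scanner (0..1).
    ndvi: vegetation index from infrared/visible imagery (-1..1); living
          plants reflect strongly in the near-infrared, so a high value
          suggests vegetation rather than a wall or rock.
    """
    if ndvi > 0.4:
        # Geometrically this may look like a wall, but spectrally it is
        # plant matter - possibly drivable tall grass, not a hard obstacle.
        return "vegetation"
    if laser_reflectance > 0.7:
        return "solid obstacle"
    return "unknown"

print(classify_point(laser_reflectance=0.8, ndvi=0.6))   # vegetation
print(classify_point(laser_reflectance=0.9, ndvi=0.05))  # solid obstacle
```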

Another approach we will be pursuing is to drive carefully into contact in uncertain situations and analyse the consequences. Based on satellite images or images from a drone flight, a route is planned with the expected time and energy expenditure. These assumptions are of course very rough and sometimes incorrect. When the robot moves along the route, strong vibrations may occur, or the robot may slip or get stuck. All of this is measured by internal sensors, and we can then adapt the terrain models that the robot uses to orientate itself. If we classify the terrain in this way, we can transfer the knowledge gained to other regions. So I only have to drive the robot into tall grass once to recognise that it is possible to continue driving, or that the robot will start to slip on a certain combination of slope angle and moisture. I can integrate such findings and classifications into the terrain models and thus increase the robot’s autonomy in future situations.
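
The following minimal Python sketch shows one way such measurements could feed back into a terrain model; the terrain classes, thresholds and cost formula are invented for illustration and are not the institute’s actual implementation.

```python
# Hypothetical sketch of adapting a terrain model from internal sensors:
# the robot traverses a terrain type once, measures the consequences and
# transfers the result to all terrain of the same class.

terrain_cost = {"tall_grass": None, "gravel": 1.0, "wet_slope": None}

def record_traverse(terrain_class, slip_ratio, vibration_rms):
    """Turn measured slip and vibration into a cost; None means impassable."""
    if slip_ratio > 0.8:               # wheels spinning, robot stuck
        terrain_cost[terrain_class] = None
        return
    # Higher slip and vibration mean higher time/energy cost (invented model).
    terrain_cost[terrain_class] = 1.0 + 2.0 * slip_ratio + 0.5 * vibration_rms

# One cautious drive into tall grass shows it is passable after all:
record_traverse("tall_grass", slip_ratio=0.2, vibration_rms=0.6)
# A certain slope/moisture combination makes the robot slip badly:
record_traverse("wet_slope", slip_ratio=0.9, vibration_rms=1.2)

print(terrain_cost)  # {'tall_grass': 1.7, 'gravel': 1.0, 'wet_slope': None}
```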

You can find more research news on Planet Research. Monthly updates from the world of science at Graz University of Technology are available via the research newsletter TU Graz research monthly.