World’s first ethical guidelines for automated driving

“Programmers should not decide who lives and who dies”

What will tomorrow’s self-driving cars be allowed to decide for themselves, and how? Tasked by the German government, an ethics commission has now drawn up guidelines to regulate these questions. Prof. Christoph Lütge, business ethicist at the Technical University of Munich (TUM) and a member of the commission, explains in which situations the technology should not differentiate between children and adults, and why no one should be forced to give up control of their car.

[Image: A woman in an automated car. When is the driver in control, and when is the technology? Christoph Lütge calls for clear rules on responsibility. (Image: chombosan / istockphoto.com)]

Germany’s Federal Transport Minister, Alexander Dobrindt, is keen to stress that the commission’s report is the world’s first set of ethical guidelines for automated driving. A group of fourteen philosophers, lawyers, theologians, engineers and consumer protection advocates spent ten months discussing self-driving cars and also trying them out for themselves. Christoph Lütge, professor of business ethics at TUM, is a member of this commission.

Professor Lütge, let’s imagine a situation where a collision with a person is inevitable. However, the car could hit either a child or an older person. What decision should the self-driving car make here?

Self-driving cars should not make decisions based on a person’s characteristics, whether age, physical condition or sex. Human dignity is inviolable. That is why vehicles cannot be programmed along the lines of: “If in doubt, hit the man with the walking frame”.

Even though most drivers would probably make that decision?

The decision is not being made by a human being with a moral framework and the capacity to make a choice. Instead, we are looking at how a system can be programmed to deal with future scenarios. Imagine this situation: A car is on a narrow path with a cliff face on the right and a sharp drop to the left. Suddenly, a child appears up ahead and the car cannot brake in time. Should the car drive into the child or off the road and into the abyss? Programmers cannot make the decision to sacrifice the driver. The only option is to brake as effectively as possible.

"Neutral organization could test technologies"

But shouldn’t the system be able to calculate the number of victims and base its decisions on that?

This was a topic of much debate in the commission, but we came to the conclusion that a reduction in the number of casualties can be justified.

Doesn’t this contradict the ruling made by the German Federal Constitutional Court? The Court ruled that an airplane hijacked by terrorists may not be shot down, even if it is heading toward a target where a significantly larger number of people would be killed.

There is an important ethical difference here: Nobody may decide to bring about the death of an individual. The plane in this scenario contains real people whom we can identify. In the case of automated driving, we are talking about general programming to reduce casualties, without knowing who the victims are or classifying them beforehand.

Apart from that, it’s not just a question of numbers. You also have to factor in the severity of the harm. If you are faced with an either/or situation in which a car can merely graze several people, then it shouldn’t choose to fatally injure one individual instead.

But what about the thousands of scenarios between these extremes? One manufacturer will choose one outcome while another opts for a different one.

I believe there should be a neutral body that maintains a catalog of scenarios with universally accepted standards. This organization could also test the technologies before manufacturers take them to market.

"Every journey should be documented in a black box"

Is it ethically acceptable at all to shift the responsibilities that we as humans bear over to technology?

This responsibility is not being shifted to technology per se, but to the manufacturers and operators of the technology. We want regulations that clearly set out when the driver is in control and when the technology is in control – and who is liable. Furthermore, we don’t want a situation where the system suddenly hands control back to the driver for whatever reason. And since responsibility can shift between the car and the driver, every journey should be documented in a black box. International standards have to be developed for these scenarios.

What if I don’t want to hand over responsibility?

In the commission, we were told by engineers that driving becomes less safe when humans intervene. However, humans have a basic right not to be obliged to submit to technology. In other words, it must be possible to deactivate automated control.

There are still many cases where the human response is better, anyway.

It is only ethically acceptable to allow automated driving if it will cause less damage than a human being behind the wheel. We assume that this will be possible in the near future – to such an extent that it will lead to a significant ethical improvement in driving. Our aim is to contribute to this development through these guidelines.

Christoph Lütge:

Prof. Christoph Lütge became the first holder of the Peter Löscher Chair of Business Ethics at TUM in 2010. Last year, he became a founding member of the TUM School of Governance, a department that focuses its research and teaching activities on the interplay between technology and politics. Lütge studied philosophy and business information systems. He now researches ethical behavior within the economic and social framework of globalization. The ethics of technology is one of his core focus areas. Lütge was recently appointed to the Executive Committee of the International Society of Business, Economics, and Ethics (ISBEE).

Contact:

Prof. Dr. Christoph Lütge
Technical University of Munich
Peter Löscher Chair of Business Ethics
Phone: +49 89 289 25130
luetge@tum.de
