• 10/7/2019

Research on ethical questions in artificial intelligence

TUM Institute for Ethics in Artificial Intelligence officially opened

In the presence of Minister of State Dorothee Bär, the Federal Government Commissioner for Digital Affairs, the Technical University of Munich (TUM) today officially opened the TUM Institute for Ethics in Artificial Intelligence. At the event, the new institute's first research projects at the interface of ethics and artificial intelligence (AI) were presented – in areas ranging from AI in autonomous vehicles to regulatory issues.

Minister of State Dorothee Bär, Federal Government Commissioner for Digital Affairs (center), TUM President Prof. Thomas F. Hofmann (left) and Prof. Christoph Lütge at the opening conference of the TUM Institute for Ethics in Artificial Intelligence. (Image: Andreas Heddergott / TUM)

TUM has been studying the complex interactions of science, technology and society since 2012 through the work of the Munich Center for Technology in Society (MCTS), which was established under the Excellence Initiative. As part of the MCTS, the TUM Institute for Ethics in Artificial Intelligence (IEAI) will focus on the ethical implications of artificial intelligence. The US company Facebook is supporting this TUM initiative with a donation of 6.5 million euros that is not subject to any conditions or expectations.

At today's opening symposium for the Institute for Ethics in Artificial Intelligence (IEAI) at TUM, Dorothee Bär, the Federal Government Commissioner for Digital Affairs, said: “To some extent, machine learning algorithms are already playing a role in choosing the news articles we read. But the possible applications extend far beyond that, for example into such areas as medical diagnostics. These far-reaching technological changes raise many ethical issues. It is a good thing that TUM is getting involved in addressing these issues.”

Creating trustworthy AI

With the IEAI, TUM aims to combine its traditional strengths in science and technology with the humanities and social sciences, creating a force to shape AI technologies that will earn trust and acceptance in society. “As a technical university, we can effectively contribute to social progress only if we align our technological innovations with the values, needs and expectations of people,” said TUM President Prof. Thomas F. Hofmann. “This guiding principle of human-centered engineering permeates TUM's future agenda for research, innovation and the education of our students.” To that end, the IEAI will bring together talented researchers in medicine, the natural sciences and engineering to collaborate in interdisciplinary teams with partners from the social sciences and ethics. With total funding of approximately 2.3 million euros, TUM is now kicking off the first research projects.

The aim of the project is to translate ethical theories into algorithms. These will be integrated into software to adapt the direction and speed of autonomous vehicles to real-time situations, for example to make ethically reasonable decisions in case a collision with a person cannot be avoided. The expanded software will be tested and evaluated in a simulator on the basis of new and familiar situations.


This project will explore whether it is possible and advisable to use machine learning-based approaches to help doctors with important decisions in everyday clinical practice, for example on whether or not to prescribe a certain drug.


Negative information such as hate speech and fake news sometimes spreads through social media like a firestorm. The AI algorithms used by the various platforms probably play an important role in this phenomenon. The aim of this project is to create mathematical models that provide a better understanding of the dynamics of this opinion-forming process, and to study where the responsibility for these “firestorms” lies and what monitoring mechanisms might be possible.

This project will study how automatic, AI-driven communications targeting the individuals who spread "fake news" might persuade them to rethink their behavior. This will include the analysis of ethical and psychological questions as well as such issues as data protection and possible violations of the private sphere of the targeted individuals.

This project will study proposed approaches to regulating and certifying AI-based software based on its compatibility with social and technical norms. The results will be used to formulate specific recommendations for society and policy makers. They will also serve as the basis for technological specifications intended to give society more control and foster greater trust.

The data generated by networked manufacturing operations (Industry 4.0) facilitate the real-time operation of production processes. For employees, however, this can create a sense of constant surveillance. This project will investigate aspects of AI applications that raise ethical concerns and attempt to develop algorithms that will optimize production processes for workers, with their strengths, weaknesses and needs, instead of subordinating humans to the technical needs of production processes.

Technical University of Munich

Corporate Communications Center
