Prof. Lütge, you have teamed up with researchers at institutions such as the University of Tokyo, New York University and the University of Cambridge to found the Global AI Ethics Consortium. Why?
Countless AI and Big Data research projects are springing up all over the world. These projects have the potential to influence political decisions and could shape the healthcare systems of the future. We see a danger that short-term decisions, made amid the urgency of the current crisis, will have a long-term impact on our world. In many cases, the ethical questions arising from the use of new technologies have not even been recognized – let alone answered.
Could you give us an example?
One issue relates to questions of privacy and data protection in software intended to track the spread of epidemics. Such software is already in use in various countries and is currently being developed for the EU as well.
What is the goal of the Global AI Ethics Consortium?
We need ethical standards as a basis for political decisions on artificial intelligence and the related software development. These standards must not stand in the way of innovation or the fight against epidemics, but they must prevent negative effects of AI right from the start.
What exactly will you be doing in the coming months?
The consortium will make its expertise available to other research teams and will also launch its own projects. At the Institute for Ethics in Artificial Intelligence at TUM, for example, we will collect proposals in May for new interdisciplinary research projects related to the COVID-19 pandemic and provide up to one year of funding for the best ones. Because sharing ideas is crucial to success, we will of course ensure that the project teams are networked with the other members of the consortium. In addition, we will create a repository for all research results on ethics in artificial intelligence in the context of the COVID-19 crisis and make it accessible.