NewIn: Marcello Ienca

Why we need Neuroethics

Advances in artificial intelligence (AI) and brain research are accelerating each other. The emerging technologies will profoundly change our lives. However, we are at a point where we can still shape these developments, says Marcello Ienca. In this episode of "NewIn," the Professor of Ethics of AI and Neuroscience talks about the potential and risks of current developments.


"We need to act now to bring neurotechnologies in line with human rights and ethical principles," says Marcello Ienca, Professor of Ethics of AI and Neuroscience at TUM.

Software such as ChatGPT gives computers human-like abilities. At the same time, there have been recent breakthroughs in neurotechnology – most prominently AI-based implants that allow people to speak or walk again. Is there a connection?

Absolutely. We are currently experiencing a scientific revolution. Advances in AI and neuroscience are spurring each other on. They form a virtuous circle – the opposite of a vicious circle.

That sounds very optimistic. Even if you look solely at AI, there are many dangers: Lack of transparency, discrimination, flawed decisions...

Of course, there are dangers. For example, it is conceivable that AI could derive sensitive information, such as a person's sexual orientation, from neurodata. This raises new questions about self-determination. Our brain is no longer a fortress sealed off from the digital world. We increasingly have access to the neurological basis of thought processes. As a society, we must consider what we want and where we draw red lines.

And that's why we need ethics?

Yes, but for me, ethics does not just mean avoiding dangers and risks. It’s also about doing good. We do this by developing the technologies that can help the hundreds of millions of people with neurological and psychiatric disorders. Especially if we incorporate ethical considerations from the outset through human-centered technology development.

Isn't it already too late for that?

For neurotechnology: no. With AI, we only acted reactively, responding to technologies that already existed. This time, we are acting proactively. For example, in 2019 the OECD, the Organisation for Economic Co-operation and Development, established guidelines for the responsible development of neurotechnologies. I contributed to these guidelines myself. Currently, the Council of Europe and UNESCO are also developing principles on this topic.

Are the guidelines also relevant for companies? Elon Musk's company Neuralink recently announced that it had placed a brain implant in a patient. In 2023, Apple secured a patent for measuring brain waves with its AirPods headphones. That sounds rather worrying to me.

In some cases, the private sector resembles the Wild West. Neuralink is an example of a company that seemingly has no interest in ethics. On the other hand, many other companies have set up ethics councils. Companies were also actively involved in developing the OECD guidelines. We must – and can – ensure that a majority of neurotech companies cultivate a culture of responsible innovation.

Does this mean that new laws are not necessary?

We can pass laws that regulate which products can be sold in Europe. However, legal compulsion is not the only solution. It is also in the interests of companies to prevent a scandal like Cambridge Analytica in the coming years. That would have a devastating impact on the entire field.

Apart from working on the guidelines, what are you currently researching yourself?

Many things. For example, as part of a collaborative international project, we are working with people with brain implants to incorporate their views (e.g. on ethical aspects) into the development of future implants. Another example: together with colleagues from the computer science department at TUM, we are developing transparent AI for neurotechnology and privacy-preserving neural data processing.

Further information and links

Marcello Ienca:

"I was born in 1988 and grew up in the 1990s – a time when computers were increasingly becoming part of everyday life," says Marcello Ienca. "Even as a child, I found intelligent systems fascinating: artificial intelligence as well as the human brain. That's why I initially studied philosophy, computer science, and psychology with a focus on cognitive science. I then combined the two fields in my master's and doctorate."

After studying in Rome, Berlin, New York, and Leuven, Marcello Ienca completed his doctorate at the University of Basel in 2018. Following further research activities at ETH Zurich and the University of Oxford, he founded the Intelligent Systems Ethics Group at EPFL. He was appointed Professor of Ethics of AI and Neuroscience at TUM in 2023.

All episodes of the NewIn video series

Technical University of Munich

Corporate Communications Center

Contacts for this article:

Prof. Dr. Marcello Ienca
Technical University of Munich (TUM)
Professorship of Ethics of AI and Neuroscience
Tel.: +49 89 4140 4041
marcello.ienca@tum.de
