• 8/10/2023
  • Reading time 4 min.

One Topic, One Loop: Aldo Faisal

The importance of a transdisciplinary understanding of AI

Four countries, four universities, four perspectives: In the global discourse series "One Topic, One Loop", four professors discuss a current topic in research and teaching. Aldo Faisal, professor of AI and neuroscience at Imperial College London, responds to the question from Enkelejda Kasneci: How can interdisciplinary approaches, and in particular insights from neuroscience, help to distinguish between the outputs and utterances produced by an AI and those produced by humans?

Aldo Faisal, professor of AI and neuroscience at Imperial College London. His closing question on the topic is answered by Jerry John Kponyo, Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology.

Neuroscientific approaches have played an important role in AI research from the very beginning. Leaving aside the detours of logic-based AI, ideas about realizing thinking machines and implementing them by means of neurons were put forward by Alan Turing and Frank Rosenblatt as early as the 1940s and 1950s.
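To make concrete what "implementation by means of neurons" meant in Rosenblatt's day, here is a minimal sketch of his perceptron in Python. It is purely illustrative: the AND-gate training data and the learning rate are arbitrary choices for this example, not anything drawn from the historical work.

```python
import numpy as np

# A single artificial "neuron" in the spirit of Rosenblatt (1958):
# it computes a weighted sum of its inputs and fires (outputs 1)
# when the sum crosses a threshold.

def step(z):
    return 1 if z >= 0 else 0

# Toy training data: the logical AND function (illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # synaptic weights
b = 0.0          # bias (a learned negative threshold)
lr = 0.1         # learning rate (illustrative value)

# Perceptron learning rule: nudge the weights toward each
# misclassified example until all examples are classified correctly.
for epoch in range(20):
    for xi, target in zip(X, y):
        error = target - step(w @ xi + b)
        w += lr * error * xi
        b += lr * error

print([step(w @ xi + b) for xi in X])  # -> [0, 0, 0, 1]
```

The rule adjusts the weights only when the neuron misclassifies an example, which is enough for it to converge on any linearly separable problem.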

We now have AI systems, such as Bard or ChatGPT, that have mastered the Turing test en passant. They show human traits precisely in their errors: for example, an AI "hallucinates" knowledge, as computer scientists currently call it, when ChatGPT freely invents plausible-sounding answers to questions. In cognitive neuroscience, we describe the equivalent phenomenon in humans as "confabulation". These AI systems are evolving rapidly and will continue to do so. So for quite some time we have been in a race between developers building AI systems that communicate in ever more human-like ways and those building ever better (AI) systems to recognize AI utterances.
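How might a system "recognize AI utterances"? One simple heuristic, sketched here as an assumption about one family of detectors rather than a description of any particular system: text sampled from a language model tends to look unusually predictable to a similar language model, so low perplexity can serve as a weak signal. The example below scores a text with GPT-2 via the Hugging Face transformers library; the threshold is a hypothetical placeholder, and real detectors are both more sophisticated and still far from reliable.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Weak-signal sketch: score how "predictable" a text is under GPT-2.
# Machine-generated text often has lower perplexity than human text,
# but this heuristic is easy to fool and not a dependable detector.

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids the model returns the mean
        # negative log-likelihood per token; exp() gives perplexity.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

THRESHOLD = 30.0  # hypothetical cut-off, not a calibrated value

sample = "The quick brown fox jumps over the lazy dog."
ppl = perplexity(sample)
verdict = "possibly machine-generated" if ppl < THRESHOLD else "no strong signal"
print(f"perplexity = {ppl:.1f} -> {verdict}")
```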

In neuroscience, we see features of our biological intelligence that cannot (yet) be experienced by a machine. There are personal sensory experiences that are not digital but analog, captured directly by our senses: human conversation, the concert, or the theater visit will become more important and, one hopes, more prominent. Another aspect considered essential to our human intelligence is embodiment: the fact that our intelligence develops in a body and can interact with the world only through that body. This embodiment imposes physical and temporal limits on our intelligence that shape the products we make.

Knowledge builds trust

With each development, the question of which activities should be automated and which should not becomes more pressing for society. In medicine, properly trained AI systems have proven to be better, faster, and more accurate than human experts in diagnostics. With our AI Clinician, we are working to bring this to digital therapeutics as well. We see the future of such systems as members of a team in which humans and AI complement each other, freeing up time and capacity.

Trust is important for all of this, and it is also a mandate for teaching: only those who understand how a technology works, and who know its limits and possibilities, can ultimately trust it. This means we need to introduce AI literacy education in schools (covering topics such as AI fairness and bias, for example). At the same time, AI content will need to be integrated into every university discipline. I feel fortunate to have been able to build up a master's program in AI over the years that offers advanced studies to students from all disciplines.

One of the big questions we address in these courses is that of responsible AI - and I'd like to turn this over to Jerry John Kponyo: How can an AI learn to be responsible, and which disciplines are particularly needed?

Global discourse series "One Topic, One Loop"

Four people from four different countries and four different universities discuss a current topic in research and teaching. The series begins with an initial question; the first person answers it and then poses a new question on the same topic to the next person. The series ends when the first person answers the final question and reflects on all the previous answers. The topic of the first season is Large Language Models and their impact on research and teaching.

Our authors are: Enkelejda Kasneci, Head of the Chair for Human-Centered Technologies for Learning at the TUM School of Social Sciences and Technology; Aldo Faisal, Professor of AI & Neuroscience at Imperial College London; Jerry John Kponyo, Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology; and Sune Lehmann Jørgensen, Professor at the Department of Applied Mathematics and Computer Science at the Technical University of Denmark.
