• 4/3/2023
  • Reading time 6 min.

Prof. Urs Gasser on support for an AI development moratorium

"A freeze in training artificial intelligence won't help"

The development of artificial intelligence (AI) is out of control, in the view of the roughly 3,000 business leaders and scientists who have signed an open letter calling for a temporary halt to the training of especially high-performance AI systems. Prof. Urs Gasser, an expert on the governance of digital technologies, examines the important questions the letter deflects attention from, explains why an "AI technical inspection agency" would make good sense, and looks at how far the EU has come compared to the USA in terms of regulation.

Prof. Urs Gasser calls for comprehensive risk management instead of a pause in development. (Image: Andreas Heddergott / TUM)

Professor Gasser, do you support the emergency measures called for in the letter?

The open letter attracts a great deal of attention that would be better directed to other issues in the AI debate. What rings true is that today probably nobody knows how to train extremely powerful AI systems in such a way that they will always be reliable, helpful, honest and harmless. However, a pause in AI training will not bring us closer to that goal. Among other reasons, it would be impossible to enforce such a moratorium globally or to put the regulations it calls for in place within a period of only six months. What is needed instead is an iterative advancement of the technologies, with appropriate safeguards developed and adjusted alongside them.

Which issues should be receiving that attention instead of a moratorium?

First, the open letter once again invokes the specter of Artificial General Intelligence. This distracts from a nuanced discussion of the risks and opportunities of the technologies currently entering the market. Second, the letter refers to successor models of GPT-4. This deflects from the fact that GPT-4's predecessor, ChatGPT, already presents significant challenges that we urgently need to address – for example the misinformation and biases that the application replicates and scales up. And third, the attention-grabbing demands made in the letter distract us from the instruments already available for governing the development and use of AI.

What would such regulation be oriented towards, and what instruments do we have?

Over the past few years we have witnessed a flourishing of ethical principles that can guide the development and use of AI. These norms have been supplemented by technical standards and best practices. The OECD Principles on Artificial Intelligence, for example, are linked to more than 400 implementation tools. The US National Institute of Standards and Technology (NIST), to take another example, has issued a 70-page guideline on how bias in AI systems can be detected and managed. In the area of safety in large AI models, we are seeing new methods such as Constitutional AI, in which an AI system "learns" principles of good conduct from humans and can then be used to monitor the outputs of another AI application. Substantial progress has been made on safety, transparency and privacy, and there are even specialized auditing organizations.

Now the essential question is whether such instruments are actually implemented, and if so how. To return to the example of ChatGPT: Will users' chat logs be fed back into the model for further training? Are plug-ins allowed that can record user interactions, contacts and other personal data? The interim ban on ChatGPT and the investigation opened against its developer by the Italian data protection authority are signs that much remains unclear here.

The open letter demands that no further development of AI systems should take place until one can be confident that the AI systems will have positive effects and their risks are manageable. At what point in development would it be possible to predict the impacts of an AI system so well that this kind of regulation would make sense?

The history of technology teaches us that it is difficult to predict the "good" or "bad" use of technologies, that technologies often entail both aspects, and that negative impacts are often unintentional. Instead of focusing on a particular point in time for such a forecast, we have to do two things: First, we have to ask ourselves which applications we as a society do not want, even if they are technically possible. We need clear red lines and prohibitions; here I'm thinking of autonomous weapons systems as an example. Second, we need comprehensive risk management spanning the full life cycle from the development all the way to the use of AI. The respective requirements need to increase as the magnitude of the potential risks an application poses to people and the environment grows. The European legislature is right to take this approach.

According to the proposal, independent experts should assess the risks of AI.

Independent audits are a very important instrument, especially for applications that can have a considerable impact on people. The idea is of course not new: we are familiar with audits, reviews and inspection procedures in a wide variety of areas of life, ranging from automobile inspections to general technical equipment testing and financial auditing. With certain AI methods and applications, however, the challenge is disproportionately greater, because some systems are dynamic and evolve as they are used. It is also important to acknowledge that experts alone are not in a position to comprehensively assess all societal implications. We also need innovative mechanisms that, for example, empower disadvantaged people and underrepresented groups to participate in discussions about the consequences of AI. That is no easy task, and one I wish were attracting more attention.

The authors also address policymakers, who would be responsible for anchoring such an "AI technical inspection agency" in the system.

We do indeed need clear legal rules for artificial intelligence. At the EU level, the AI Act, which is intended to ensure that AI technologies are safe and comply with fundamental rights, is currently being finalized. The bill provides for AI technologies to be classified according to their risk level, with possible consequences ranging from prohibition to transparency obligations, among other safeguards. For example, it is planned to ban comprehensive social scoring systems of the kind we are currently seeing in China. In the USA, by contrast, Congress is stuck in gridlock when it comes to effective AI legislation. It would be more helpful if the prominent letter writers put pressure on US federal legislators to take action instead of calling for a pause in technological development.

About Urs Gasser:

Prof. Dr. Urs Gasser has headed the Technical University of Munich (TUM) Chair of Public Policy, Governance and Innovative Technology since 2021. He is Dean of the TUM School of Social Sciences and Technology and Rector of the Munich School of Politics and Public Policy at TUM. Before joining TUM he was Executive Director of the Berkman Klein Center for Internet & Society at Harvard University and a professor at Harvard Law School.

Technical University of Munich

Corporate Communications Center

Contact for this article:

Prof. Dr. Urs Gasser
Technical University of Munich (TUM)
Chair of Public Policy, Governance and Innovative Technology
Phone: +49 89 907793 270
urs.gasser@tum.de
