Longterm Wiki

The UK AI Safety Summit Opened a New Chapter in AI Diplomacy

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Carnegie Endowment

Carnegie Endowment commentary analyzing the UK AI Safety Summit (Bletchley Park, Nov 2023) as a diplomatic milestone, covering the 28-nation joint commitment on pre-deployment testing and the establishment of AI Safety Institutes — relevant to AI governance and international coordination on frontier AI risks.

Metadata

Importance: 52/100 · opinion piece · commentary

Summary

This commentary by Mariano-Florentino Cuéllar argues that the UK AI Safety Summit achieved a significant diplomatic breakthrough by securing commitments from 28 governments and major AI companies on pre-deployment safety testing of frontier models. It highlights the creation of UK and US AI Safety Institutes and frames the summit as the beginning of a long process to build an international AI safety regime. The piece also notes the urgency driven by rapid capability growth and potential risks from frontier models expected between 2025 and 2030.

Key Points

  • 28 governments including China, the US, EU, and major developing nations signed a joint commitment on pre-deployment safety testing of advanced AI models.
  • The UK and US each announced new AI Safety Institutes as part of an emerging international AI safety architecture.
  • Computing power used in AI training has expanded ~55 million times over the past decade, with next-gen models potentially 10x GPT-4's compute.
  • Private sector experts warned that between 2025 and 2030, AI systems could exhibit rogue behaviors difficult for humans to control.
  • The author argues that diplomatic skill and institutional design (like aviation safety frameworks) are essential complements to technical solutions.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 15 KB
{
 "authors": [
 "Mariano-Florentino (Tino) Cuéllar"
 ],
 "type": "commentary",
 "centerAffiliationAll": "",
 "centers": [
 "Carnegie Endowment for International Peace"
 ],
 "collections": [
 "Artificial Intelligence"
 ],
 "englishNewsletterAll": "",
 "nonEnglishNewsletterAll": "",
 "primaryCenter": "Carnegie Endowment for International Peace",
 "programAffiliation": "",
 "programs": [],
 "projects": [],
 "regions": [
 "North America",
 "United States",
 "Western Europe",
 "United Kingdom"
 ],
 "topics": [
 "Democracy",
 "Global Governance",
 "Technology",
 "AI"
 ]
}

Commentary: The UK AI Safety Summit Opened a New Chapter in AI Diplomacy

 Driven to action by the rapid advancements in AI, summit delegates began to map the long road to balancing risk management with innovation in machine learning.

By Mariano-Florentino (Tino) Cuéllar · Published on Nov 9, 2023

No, the British didn't come close to solving every policy problem involving artificial intelligence (AI) during the UK AI Safety Summit last week. But as delegates from all over the world gathered outside London to discuss the policy implications of major advances in machine learning and AI, UK officials engineered a major diplomatic breakthrough, setting the world on a path to reduce the risks and secure greater benefits from this fast-evolving technology.

 Hosted by Prime Minister Rishi Sunak, the summit beat the odds on several fronts. UK leaders gathered senior government officials, executives of major AI companies, and civil society leaders in a first-of-its-kind meeting to lay the foundations for an international AI safety regime. The result was a joint commitment by twenty-eight governments and leading AI companies subjecting advanced AI models to a battery of safety tests before release, as well as the announcement of a new UK-based AI Safety Institute and a major push to support regular, scientist-led assessments of AI capabilities and safety risks.

 The discussion also began to map the long and winding road ahead. Neither technical breakthroughs nor summit agreements will be enough to achieve a sensible balance between risk management and innovation. Crafty diplomacy and pragmatic design of institutional arrangements (such as the international aviation safety process) are also necessary to take on global challenges. Getting either of these in sufficient quantity is a daunting prospect, particularly when both are in short supply and major crises in Ukraine and the Middle East are raging.

 Despite the hallway conversations about these geopolitical problems, summit delegates were driven to action by a shared recognition that the most advanced AI systems are improving at startling speeds. The amount of computing power used in training AI systems has expanded over the past decade by a factor of 55 million. The next generation of so-called frontier models, using perhaps ten times as much compute for training as OpenAI's GPT-4, could pose new risks for

... (truncated, 15 KB total)
Resource ID: 4f214658820111e9