Longterm Wiki

Pause Giant AI Experiments: An Open Letter (Wikipedia)

reference

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Landmark public advocacy moment in AI safety history; useful reference for understanding the 2023 AI governance debate and the gap between safety calls and industry practice.

Metadata

Importance: 62/100 · wiki page · reference

Summary

Wikipedia article covering the March 2023 Future of Life Institute open letter calling for a 6-month pause on training AI systems more powerful than GPT-4, signed by over 30,000 people including prominent researchers and executives. The letter cited risks including AI propaganda, job automation, human obsolescence, and loss of societal control, and called for increased safety research and government regulation. Despite widespread attention, no pause materialized and AI development accelerated.

Key Points

  • Published by the Future of Life Institute in March 2023, one week after GPT-4's release, calling for a 6-month pause on training AI systems more powerful than GPT-4.
  • Received 30,000+ signatures from figures including Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, and Yuval Noah Harari.
  • Cited risks such as AI-generated propaganda, extreme job automation, human obsolescence, and societal loss of control in a race-to-the-bottom dynamic.
  • Called for government regulation, independent audits, tracking of powerful AI systems, and robust public funding for AI safety research.
  • Despite generating renewed governmental urgency around AI governance, AI companies continued accelerating development with vast infrastructure investments.

Cited by 1 page

Page                             Type          Quality
Future of Life Institute (FLI)   Organization  46.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 12 KB
Pause Giant AI Experiments: An Open Letter - Wikipedia 
 From Wikipedia, the free encyclopedia 
 2023 letter calling for a pause on AI system training 
Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[1] It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.[1][2][3]

 
 Motivations

The publication occurred a week after the release of OpenAI's large language model GPT-4. It asserts that current large language models are "becoming human-competitive at general tasks", referencing a paper about early experiments with GPT-4, described as having "Sparks of AGI".[4] AGI is described as posing numerous important risks, especially in a context of race-to-the-bottom dynamics in which some AI labs may be incentivized to overlook safety to deploy products more quickly.[5]

 It asks to refocus AI research on making powerful AI systems "more ac

... (truncated, 12 KB total)
Resource ID: 4fc41c1e8720f41f | Stable ID: sid_rEZeyFFN92