Longterm Wiki

PauseAI - Movement to Pause Advanced AI Development

Web source: https://pauseai.info/

PauseAI represents a prominent activist wing of the AI safety movement; useful for understanding the 'pause' strategic perspective and current advocacy efforts, though distinct from technical alignment research approaches.

Metadata

Importance: 52/100 · homepage

Summary

PauseAI is an advocacy movement calling for an international pause on the development of advanced AI systems until adequate safety measures and governance frameworks are in place. The organization coordinates activists, provides educational resources, and lobbies policymakers to take urgent action on AI risk. It represents a direct-action approach to AI safety that prioritizes preventing catastrophic outcomes over accelerating beneficial AI.

Key Points

  • Advocates for an immediate international pause or moratorium on frontier AI development until safety is assured
  • Organizes grassroots activism, protests, and policy lobbying campaigns targeting governments and AI labs
  • Frames advanced AI development as an existential risk requiring urgent coordinated global action
  • Provides resources for volunteers to get involved in advocacy, outreach, and local organizing
  • Represents a 'pauser' strategic position in the AI safety community distinct from alignment research approaches

Cited by 5 pages

| Page | Type | Quality |
|---|---|---|
| Should We Pause AI Development? | Crux | 47.0 |
| Worldview-Intervention Mapping | Analysis | 62.0 |
| Pause AI | Organization | 59.0 |
| Pause / Moratorium | Concept | 72.0 |
| Pause Advocacy | Approach | 91.0 |

1 FactBase fact citing this source

| Entity | Property | Value | As Of |
|---|---|---|---|
| PauseAI | Website | https://pauseai.info/ | |

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 3 KB
We need to Pause AI

Site announcements (cached):

  • PauseAI's largest ever protest will be on Saturday, February 28th in London.
  • Help us protect state sovereignty on AI regulation.
  • Holiday Matching Campaign: help fund volunteer stipends for PauseAI advocates (the Little Helpers campaign).
  • Brussels, Feb 23: join us outside the European Parliament to call for a global treaty to pause frontier AI development.

Don't let AI companies gamble away our future

 We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in. Statement on Superintelligence

 110,000+ signatories including AI researchers, political, faith and industry leaders, artists and media celebrities

 
 Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Statement on AI Risk

 Signed by hundreds of experts, including the top AI labs and scientists

 
 If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further. Geoffrey Hinton

 Nobel Prize winner & "Godfather of AI"

 
 The development of full artificial intelligence could spell the end of the human race. Stephen Hawking

 Theoretical physicist and cosmologist

 
 It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage therefore, we should have to expect the machines to take control. Alan Turing

 Inventor of the modern computer

 
 If we pursue [our current approach], then we will eventually lose control over the machines. Stuart Russell

 Author of the leading textbook on AI

 
 Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start. Yoshua Bengio

 AI Turing Award winner

 
 "> See all quotes BlueSky Info

 Stichting PauseAI (kvk 92951031)
Resource ID: a8fda81d4a00ec7c | Stable ID: sid_QDZhJPMY7K