Longterm Wiki

Forethought — Navigating Explosive AI Progress

forethought.org

Forethought is a nonprofit researching how to prepare for an intelligence explosion, focusing on grand challenges like AI-enabled coups, weapons proliferation, and autocracy risks that could arise during rapid AI-driven technological progress.

Metadata

Importance: 72/100 · homepage

Summary

Forethought is a nonprofit research organization studying how humanity can prepare for an 'intelligence explosion' in which AI systems accelerate scientific and technological progress dramatically. Their research covers grand challenges including AI-enabled coups, new weapons of mass destruction, AI autocracies, and digital beings. They argue that AGI preparedness requires more than alignment — it requires proactive preparation for a disorienting range of rapid developments.

Key Points

  • Forethought researches how to navigate an 'intelligence explosion' where AI could compress a century of technological progress into a few years.
  • Key research includes 'Preparing for the Intelligence Explosion' by MacAskill & Moorhouse, identifying 'grand challenges' requiring urgent preparation.
  • A major focus is AI-enabled coups: small groups or individuals could use advanced AI to seize power even in established democracies.
  • AGI preparedness is framed as broader than alignment — it includes governance, power concentration, WMD proliferation, and quality-of-life opportunities.
  • Policy recommendations include auditing AI for secret loyalties, sharing frontier AI with multiple stakeholders, and establishing government AI use principles.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 8 KB
Forethought: How should we navigate explosive AI progress?

AI is already accelerating innovation, and may soon become as capable as human scientists.
If that happens, many new technologies could arise in quick succession: new miracle drugs and new bioweapons; automated companies and automated militaries; superhuman prediction and superhuman persuasion.
We are a nonprofit researching what we can do, now, to prepare.

Featured Research

 Preparing for the Intelligence Explosion

William MacAskill & Fin Moorhouse · March 2025

AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges.
 These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making. 
We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring. Read more

AI-Enabled Coups: How a Small Group Could Use AI to Seize Power

Tom Davidson, Lukas Finnveden & Rose Hadshar · April 2025

The development of AI that is more broadly capable than humans will create a new and serious threat: AI-enabled coups. An AI-enabled coup could be staged by a very small group, or just a single person, and could occur even in established democracies. Sufficiently advanced AI will introduce three novel dynamics that significantly increase coup risk:

  • Military and government leaders could fully replace human personnel with AI systems that are singularly loyal to them, eliminating the need to gain human supporters for a coup.
  • Leaders of AI projects could deliberately build AI systems that are secretly loyal to them, for example fully autonomous military robots that pass security tests but later execute a coup when deployed in military settings.
  • Senior officials within AI projects or the government could gain exclusive access to superhuman capabilities in weapons development, strategic planning, persuasion, and cyber offense, and use these to increase their power until they can stage a coup.

To address these risks,

... (truncated, 8 KB total)
Resource ID: f3dc86fbe8e93aa8 | Stable ID: sid_vrnz6940zz