
Should We Pause AI Development?

Key Crux: The AI Pause Debate

  • Question: Should we pause or slow development of advanced AI systems?
  • Catalyst: The 2023 FLI open letter, signed by 33,000+ people
  • Stakes: The trade-off between safety preparation and beneficial AI progress

In March 2023, the Future of Life Institute published an open letter calling for a 6-month pause on training AI systems more powerful than GPT-4. The letter garnered over 33,000 signatures, including Turing Award winner Yoshua Bengio and prominent figures like Elon Musk and Steve Wozniak. It ignited fierce debate: Is pausing AI development necessary for safety, or counterproductive and infeasible?

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Expert Support | Moderate (35-40%) | 2023 AI Impacts survey: ≈35% of 2,778 AI researchers favor slower development |
| Public Support | High (65-70%) | AIPI poll: 72% of Americans prefer slowing AI development |
| Feasibility | Very Low | No pause implemented despite 33,000+ signatories; major labs continued development |
| International Coordination | Very Low | No binding agreements; China has expressed interest but made no commitments |
| Alternative Adoption | Medium | RSPs adopted by Anthropic, OpenAI, Google DeepMind; EU AI Act proceeding |
| Historical Precedent | Mixed | Asilomar 1975 succeeded; nuclear/climate coordination partial |
| Current Status (2025) | Pause rejected; regulation fragmented | US Senate voted 99-1 to strip a proposed 10-year moratorium on state AI regulation; 1,000+ state AI bills in 2025 |

Pause advocates call for:

  • Moratorium on training runs beyond current frontier (GPT-4 level)
  • Time to develop safety standards and evaluation frameworks
  • International coordination on AI governance
  • Only resume when safety can be ensured

Duration proposals vary. Views range from accelerate to indefinite pause (7 perspectives):

  • Effective Accelerationists (e/acc)
  • Eliezer Yudkowsky
  • Max Tegmark (FLI)
  • Most AI labs (OpenAI, Google, Anthropic)
  • Stuart Russell
  • Yann LeCun (Meta)
  • Yoshua Bengio

Key Questions (4)
  • Is a multilateral pause achievable?
  • Will we get warning signs before catastrophe?
  • How much safety progress can happen during a pause?
  • How significant is the China concern?

Many propose middle grounds between full pause and unconstrained racing:

| Approach | Mechanism | Adoption Status | Effectiveness | Verification Difficulty |
| --- | --- | --- | --- | --- |
| Responsible Scaling Policies | If-then commitments: if dangerous capabilities detected, pause or add safeguards | Anthropic (ASL system), OpenAI (Preparedness Framework), Google DeepMind (Frontier Safety Framework) | Medium (depends on evaluation quality) | Medium (relies on internal assessments) |
| Compute Governance | Limit training compute through export controls or compute thresholds | US export controls (Oct 2022, expanded 2023-2024); EU AI Act thresholds | Medium (slows frontier development) | Low (chip sales are trackable) |
| Safety Tax | Require 10-20% of compute/budget on safety research | Proposed but not mandated | Low-Medium (difficult to verify meaningful safety work) | High ("safety" is vaguely defined) |
| Staged Deployment | Develop models but delay release for safety testing | Common practice at major labs | Medium (delays harm but allows capability development) | Low (deployment is observable) |
| International Registry | Register large training runs with international body | Seoul AI Summit commitments (2024) | Low (visibility without enforcement) | Medium (relies on self-reporting) |
| Threshold-Based Pause | Pause only when specific dangerous capabilities emerge | Proposed in RSPs; no regulatory mandate | Potentially high if thresholds are well-defined | High (requires robust capability evaluation) |

Responsible Scaling Policies (RSPs)

  • Continue development but with if-then commitments
  • If dangerous capabilities are detected, implement safeguards or pause (see the sketch below)
  • Anthropic’s approach uses AI Safety Levels (ASL-1 through ASL-4+)
  • As of May 2025, Anthropic activated ASL-3 for Claude Opus 4 due to CBRN concerns
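
As an illustration of the if-then structure (the evaluation names, scores, and thresholds below are hypothetical, not Anthropic's actual ASL criteria), a responsible scaling commitment can be sketched as a policy function that maps capability-evaluation results to a required safety level and pauses scaling when implemented safeguards fall short:

```python
from dataclasses import dataclass


@dataclass
class EvalResult:
    """Hypothetical capability-evaluation scores in [0, 1]."""
    cbrn_uplift: float              # uplift on a CBRN misuse benchmark
    autonomous_replication: float   # score on an autonomy benchmark


def required_safety_level(result: EvalResult) -> int:
    """Map evaluation scores to a required safety level (ASL-style tiers)."""
    if max(result.cbrn_uplift, result.autonomous_replication) >= 0.8:
        return 4
    if max(result.cbrn_uplift, result.autonomous_replication) >= 0.5:
        return 3
    return 2


def may_continue_scaling(result: EvalResult, safeguards_level: int) -> bool:
    """If-then commitment: continue only if safeguards meet the required
    level; otherwise pause until they are in place."""
    return safeguards_level >= required_safety_level(result)


if __name__ == "__main__":
    evals = EvalResult(cbrn_uplift=0.6, autonomous_replication=0.2)
    print(may_continue_scaling(evals, safeguards_level=2))  # False -> pause
```

As the table above notes, the hard part in practice is not this logic but the quality of the evaluations feeding it, and the fact that the assessments are internal to the labs.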

Compute Governance

  • Limit training compute through regulation or voluntary agreement
  • US export controls restrict exports of advanced AI chips to China and roughly 150 other countries
  • The EU AI Act presumes "systemic risk" for general-purpose models trained with more than 10^25 FLOP (see the worked example below)
  • Easier to verify than a complete pause: chip production is concentrated in a few fabs
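
As a rough worked example (the model sizes are invented, and the ≈6 × parameters × training tokens rule is only an order-of-magnitude estimate of training compute), here is how a run might be compared against the EU AI Act's 10^25 FLOP presumption:

```python
# Order-of-magnitude training-compute estimate: FLOP ≈ 6 * parameters * tokens,
# compared against the EU AI Act's 10^25 FLOP systemic-risk presumption.
EU_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25


def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6 * params * tokens


# Illustrative (hypothetical) model configurations.
examples = {
    "70B params on 15T tokens": training_flop(70e9, 15e12),    # ~6.3e24 FLOP
    "400B params on 15T tokens": training_flop(400e9, 15e12),  # ~3.6e25 FLOP
}

for name, flop in examples.items():
    status = "above" if flop >= EU_SYSTEMIC_RISK_THRESHOLD_FLOP else "below"
    print(f"{name}: {flop:.1e} FLOP ({status} the 1e25 threshold)")
```

The inputs to this estimate (chip counts, utilization, training time) are physical quantities, which is part of why compute thresholds are considered comparatively verifiable.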

Safety Tax

  • Require safety work proportional to capabilities
  • E.g., spend 20% of compute on safety research
  • Maintains progress while prioritizing safety
  • No mandatory implementation; relies on voluntary commitment

Staged Deployment

  • Develop models but delay deployment for safety testing
  • Allows research while preventing premature release

International Registry

  • Register large training runs with an international body (one possible record format is sketched below)
  • Creates visibility without stopping work
  • Foundation for future coordination
  • Seoul AI Summit (2024) established voluntary commitments for 16 AI companies
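
The Seoul commitments do not prescribe any reporting format; purely as a hypothetical illustration, a registry entry might record fields like these:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TrainingRunRecord:
    """Hypothetical fields for an international training-run registry entry."""
    organization: str
    start_date: date
    estimated_flop: float                # self-reported training compute
    hardware: str                        # e.g. accelerator type and count
    safety_evaluations: list[str] = field(default_factory=list)


registry: list[TrainingRunRecord] = [
    TrainingRunRecord(
        organization="Example Lab",
        start_date=date(2025, 3, 1),
        estimated_flop=2e25,
        hardware="10,000 H100-class accelerators",
        safety_evaluations=["CBRN misuse", "autonomous replication"],
    )
]
print(f"{len(registry)} run(s) registered")
```

Because entries would be self-reported, such a registry creates visibility rather than enforcement, matching the low-effectiveness, medium-verification assessment in the table above.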

Threshold-Based Pause

  • Continue development until specific capability thresholds are reached (e.g., autonomous replication)
  • Then pause until safeguards are developed
  • Clear criteria; activates only when needed

Why is coordination so hard? Analysis of AI governance challenges suggests coordination failure is the default outcome absent strong institutional mechanisms.

| Actor Category | Examples | Estimated AI Investment (2024) | Pause Incentive |
| --- | --- | --- | --- |
| US Frontier Labs | OpenAI, Anthropic, Google DeepMind, Meta | $50-100B+ combined | Very Low (first-mover advantage) |
| Chinese Labs | Baidu, ByteDance, Alibaba, Tencent | $15-30B estimated | Very Low (strategic competition) |
| European Labs | Mistral, Aleph Alpha | $2-5B | Low-Medium (regulatory pressure) |
| Open Source | Meta (Llama), HuggingFace, community | Distributed | None (decentralized development) |
| Governments | US, China, EU, UK | Regulatory role | Mixed (security vs. innovation) |

Verification challenges:

  • Training runs are not publicly disclosed; only ~10-20 organizations can train frontier models
  • Compute usage is hard to monitor without chip-level tracking
  • Open source development involves 100,000+ contributors globally
  • PauseAI protests in 13 countries (May 2024) had minimal policy impact

Incentive misalignment:

  • The first to AGI gains an enormous advantage, with estimated value capture of $1-10T+
  • Defecting from a pause is very tempting: a 6-12 month lead could be decisive
  • Short-term competitive pressures tend to outweigh long-term risk considerations
  • National security concerns: US-China AI competition frames pause as “unilateral disarmament”

Precedents suggest pessimism:

| Precedent | Outcome | Lessons for AI |
| --- | --- | --- |
| Asilomar 1975 | Voluntary pause worked (≈1 year) | Smaller field (≈140 scientists); clearer risks; easier verification |
| Nuclear Non-Proliferation | Partial success (9 nuclear states) | Slower timelines (decades); clear existential threat; fewer actors |
| Climate (Paris Agreement) | Minimal binding success | Diffuse actors; long timelines; enforcement failed |
| Biological Weapons Convention | Near-universal (187 states) but weak | No verification mechanism; concerns about compliance persist |

But there are some grounds for hope:

  • All parties may share concern about existential risk: 70% of AI researchers want safety prioritized more highly
  • Industry may support regulation to avoid liability and to level the playing field
  • Compute is traceable: TSMC and Samsung produce 90%+ of advanced chips, and ASML is the sole supplier of EUV lithography machines
  • China has expressed interest in international coordination: “only with joint efforts of the international community can we ensure AI technology’s safe and reliable development”

What Would Need to Be True for a Pause to Work?


For a pause to be both feasible and beneficial:

| Condition | Current Status | Feasibility Assessment |
| --- | --- | --- |
| Multilateral buy-in | No formal US-China-EU agreement | Very Low (geopolitical competition; no active negotiations) |
| Verification | Chip tracking possible but not implemented | Medium (TSMC/ASML choke points exist; software tracking hard) |
| Enforcement | No international AI enforcement body | Very Low (would require new institutions) |
| Clear timeline | FLI proposed 6 months; Yudkowsky proposes indefinite | Low (no consensus on when safety is "solved") |
| Safety progress | 70% of researchers want more safety prioritization | Medium (unclear if a pause enables progress) |
| Allowances | Not specified in most proposals | Medium (the "narrow AI" vs. "frontier" line is fuzzy) |
| Political will | 72% of the US public supports slowing AI | Medium (public support but industry opposition) |

Current reality: Few of these conditions are met. As FLI noted on the letter’s one-year anniversary, AI companies have instead directed “vast investments in infrastructure to train ever-more giant AI systems.”

The pause debate has evolved significantly since the 2023 letter:

| Date | Development | Impact on Pause Debate |
| --- | --- | --- |
| Nov 2023 | Bletchley Declaration signed by 28 countries | Acknowledged risks but included no pause provisions |
| May 2024 | Seoul AI Summit: 16 companies sign voluntary commitments | RSPs preferred over pause; thresholds remain vague |
| Feb 2025 | International AI Safety Report led by Yoshua Bengio | 100 experts; calls for governance but not a pause |
| Jul 2025 | US Senate votes 99-1 to strip a proposed 10-year moratorium on state AI regulation | No federal preemption; 1,000+ state AI bills proceed instead |
| Aug 2025 | EU AI Act general-purpose AI obligations take effect | Regulation over pause; no "grace period" |

PauseAI, founded in May 2023 by Dutch software entrepreneur Joep Meindertsma, has organized protests across 13+ countries. Their goals include:

  • Temporary pause on training the most powerful general AI systems
  • An international AI safety agency similar to the IAEA
  • Democratic control over AI development

Despite ongoing activism, no country has implemented binding pause legislation.

Comparison of Technology Governance Precedents

| Case | Duration | Success | Key Success Factors | Applicability to AI |
| --- | --- | --- | --- | --- |
| Asilomar 1975 | ≈1 year moratorium | High | Small field (≈140 scientists); scientists initiated; clear biological hazards | Low (AI has millions of practitioners; unclear hazard) |
| Nuclear Test Ban | Ongoing since 1963 | Medium | Seismic verification; mutual existential threat; few actors (5-9 nuclear states) | Low (more AI actors; no mutual destruction threat) |
| Montreal Protocol | 1987-present | Very High | Clear ozone hole evidence; available CFC substitutes; verifiable production | Low (no AI substitute; benefits are diffuse) |
| Germline Editing | 2015-present | Medium | Low economic stakes; clear ethical violation (He Jiankui prosecuted) | Low (AI has massive economic stakes) |
| Biological Weapons Convention | 1972-present | Low | 187 states parties but no verification mechanism | Medium (similar verification challenges) |

Asilomar Conference on Recombinant DNA (1975):

  • Scientists voluntarily paused research on genetic engineering for approximately one year
  • ~140 biologists, lawyers, and physicians developed safety guidelines at Pacific Grove, California
  • Moratorium was “universally observed” in academic and industrial research centers
  • Led to NIH Recombinant DNA Advisory Committee and safety protocols still in use today
  • Key difference: Scientists controlled the technology; AI development involves thousands of companies and millions of developers

Nuclear Test Ban Treaties:

  • Partial Test Ban Treaty (1963): banned atmospheric testing—verified by detection networks
  • Comprehensive Test Ban Treaty (1996): signed by 187 states but not ratified by US, China, or others
  • Verification via seismology is feasible; 9 states now possess nuclear weapons
  • Key difference: Decades-long timeline allowed governance to develop; AI timelines may be 5-15 years

Ozone Layer (Montreal Protocol):

  • Successfully phased out CFCs globally—ozone hole now recovering
  • Required finding chemical substitutes (HFCs) and industry buy-in
  • Key difference: Clear, measurable environmental indicator; AI risks are speculative and contested

Moratorium on Human Germline Editing:

  • Mostly holding after He Jiankui’s 2018 violation (3-year prison sentence in China)
  • Low economic stakes compared to AI; clear ethical consensus across cultures
  • Key difference: AI development has estimated $1-10T+ in value at stake

The Case for “Slowdown” Rather Than “Pause”


Many find middle ground more palatable. Yoshua Bengio, Turing Award winner and lead author of the International AI Safety Report, has advocated for “red lines” that AI systems should never cross rather than a blanket pause:

  • Autonomous replication or improvement
  • Dominant self-preservation and power seeking
  • Assisting in weapon development
  • Cyberattacks and deception

Slowdown means:

  • Deliberate pacing rather than maximizing speed
  • Investment in safety alongside capabilities
  • Coordination with other labs
  • Voluntary agreements where possible

More achievable because:

  • Doesn’t require stopping completely
  • Maintains progress on benefits
  • Reduces but doesn’t eliminate competition
  • Easier political sell

Examples of slowdown mechanisms include the alternatives discussed above: responsible scaling policies, compute governance, staged deployment, and international registries.

Where prominent figures stand:

| Expert | Affiliation | Position | Key Quote |
| --- | --- | --- | --- |
| Eliezer Yudkowsky | MIRI | Indefinite shutdown | "Shut it all down" (TIME, 2023) |
| Yoshua Bengio | Mila, Turing laureate | International governance + red lines | "We succeeded in regulating nuclear weapons… we can reach a similar agreement for AI" |
| Max Tegmark | MIT, FLI | 6-month pause | Organized FLI letter; continues advocacy |
| Dario Amodei | Anthropic CEO | RSPs, not pause | Supports conditional pauses if capabilities exceed safeguards |
| Sam Altman | OpenAI CEO | Opposed to pause | Advocates international governance but continued development |
| Yann LeCun | Meta AI | Strongly opposed | Public opposition to pause as "counterproductive" |

Most disagreement reduces to different assessments of:

| Question | Pause Supporters | Pause Opponents |
| --- | --- | --- |
| Current risk level | ASL-3/high-risk thresholds being crossed | Risks are speculative; benefits concrete |
| Coordination feasibility | Asilomar precedent shows it's possible | China won't agree; enforcement impossible |
| Safety progress during pause | Time enables governance development | Safety research requires frontier systems |
| Competitive dynamics | Misaligned AI is worse than losing the race | Ceding advantage to China unacceptable |
| Alternative effectiveness | RSPs are "safety-washing"; insufficient | RSPs provide proportional protection |