Pause / Moratorium


Pause and moratorium proposals represent the most direct governance intervention for AI safety: deliberately slowing or halting frontier AI development to allow safety research, governance frameworks, and societal preparation to catch up with rapidly advancing capabilities. These proposals range from targeted pauses on specific capability thresholds to comprehensive moratoria on all advanced AI development, with proponents arguing that the current pace of development may be outstripping humanity’s ability to ensure safe deployment.

The most prominent call for a pause came in March 2023, when the Future of Life Institute (FLI) published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. Released just one week after GPT-4’s launch, the letter garnered over 30,000 signatures, including prominent AI researchers such as Yoshua Bengio and Stuart Russell, as well as technology leaders like Elon Musk and Steve Wozniak. The letter cited risks including AI-generated propaganda, extreme automation of jobs, and a society-wide loss of control. However, no major AI laboratory implemented a voluntary pause, and the letter’s six-month timeline passed without meaningful slowdown in frontier development. As MIT Technology Review noted six months later, AI companies instead directed “vast investments in infrastructure to train ever-more giant AI systems.”

The fundamental logic behind pause proposals is straightforward: if AI development is proceeding faster than our ability to make it safe, slowing development provides time for safety work. As Bengio et al. wrote in Science in May 2024, “downside artificial intelligence risks must be managed effectively and urgently if posited AI benefits are to be realized safely.” However, implementation faces severe challenges including competitive dynamics between nations and companies, enforcement difficulties, and concerns that pauses might push development underground or to jurisdictions with fewer safety constraints. These proposals remain controversial even within the AI safety community, with some arguing they are essential for survival and others viewing them as impractical or counterproductive.

| Dimension | Assessment | Rationale | Confidence |
|---|---|---|---|
| Safety Uplift | High (if implemented) | Would buy time for safety research | High |
| Capability Uplift | Negative | Explicitly slows capability development | High |
| Net World Safety | Unclear | Could help if coordinated; could backfire if unilateral | Medium |
| Lab Incentive | Negative | Labs strongly opposed; competitive dynamics | High |
| Research Investment | $1-5M/yr | Advocacy organizations; FLI, PauseAI | Medium |
| Current Adoption | None | Advocacy only; no major labs paused | High |
Arguments for a pause:

| Argument | Description | Strength |
|---|---|---|
| Safety-Capability Gap | Safety research not keeping pace with capabilities | Strong if gap is real |
| Irreversibility | Some AI risks may be impossible to reverse once realized | Strong for existential risks |
| Precautionary Principle | Burden of proof should be on developers to show safety | Philosophically contested |
| Coordination Signal | Demonstrates seriousness; creates space for governance | Moderate |
| Research Time | Enables catch-up on interpretability, alignment | Strong |
Arguments against a pause:

| Argument | Description | Strength |
|---|---|---|
| Enforcement | Unenforceable without international agreement | Strong |
| Displacement | Development moves to less cautious actors | Moderate-Strong |
| Lost Benefits | Delays positive AI applications | Moderate |
| Talent Dispersion | Safety researchers may leave paused organizations | Moderate |
| False Security | Pause without progress creates complacency | Moderate |
| Definition Problems | Hard to define what to pause | Strong |
The FLI open letter at a glance:

| Aspect | Detail |
|---|---|
| Scope | Training systems more powerful than GPT-4 |
| Duration | Six months (renewable) |
| Signatories | 30,000+ including Yoshua Bengio, Elon Musk, Stuart Russell, Steve Wozniak, Yuval Noah Harari |
| Labs’ Response | No major lab paused; development continued |
| Outcome | Raised awareness; generated renewed urgency within governments; no implementation |

Notable critiques: AI researcher Andrew Ng argued that “there is no realistic way to implement a moratorium” without government intervention, which would be “anti-competitive” and “awful innovation policy.” Reid Hoffman criticized the letter as “virtue signaling” that would hurt the cause by alienating the AI developer community needed to achieve safety goals.

PauseAI at a glance:

| Aspect | Detail |
|---|---|
| Founded | May 2023 in Utrecht, Netherlands by software entrepreneur Joep Meindertsma |
| Goal | International moratorium on frontier AI development until safety is ensured |
| Structure | Network of local organizations; US chapter led by Holly Elmore, UK by Joseph Miller (Oxford PhD) |
| Approach | Grassroots activism, protests at AI labs (OpenAI Feb 2024, Anthropic Nov 2024), policy advocacy |
| Policy Asks | Global pause enforced through international treaty; democratic control over AI development |
| Key Actions | International protests in May 2024 timed to the Seoul AI Safety Summit; protests held in San Francisco, New York, Berlin, Rome, Ottawa, London |
Pause proposals vary in scope and enforcement mechanism (a minimal sketch of an eval-based trigger follows the table):

| Proposal | Scope | Mechanism |
|---|---|---|
| Compute Caps | Limit training compute | Hardware governance |
| Capability Gates | Pause at defined capability thresholds | Eval-based triggers |
| Conditional Pause | Pause if safety benchmarks not met | RSP-like framework |
| Research Moratoria | Pause specific capability research | Targeted restrictions |
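Capability gates and conditional pauses both reduce to the same check: run a battery of dangerous-capability evaluations and halt further scaling when a threshold is crossed. The sketch below illustrates that logic only; the benchmark names, thresholds, and review process are hypothetical and not drawn from any lab’s actual responsible scaling policy.

```python
from dataclasses import dataclass

@dataclass
class CapabilityGate:
    """One eval-based trigger: if the model's score on this benchmark
    reaches the threshold, further scaling pauses pending review."""
    benchmark: str     # hypothetical evaluation name
    threshold: float   # score at or above which the gate trips
    rationale: str

# Illustrative gates only; real thresholds would come from a published
# scaling policy or regulation, not from this sketch.
GATES = [
    CapabilityGate("autonomous_replication_eval", 0.20, "self-proliferation risk"),
    CapabilityGate("cyber_offense_eval", 0.50, "offensive cyber capability"),
    CapabilityGate("bio_uplift_eval", 0.30, "biological weapons uplift"),
]

def tripped_gates(eval_scores: dict) -> list:
    """Return the gates whose thresholds the latest eval scores meet or exceed."""
    return [g for g in GATES if eval_scores.get(g.benchmark, 0.0) >= g.threshold]

if __name__ == "__main__":
    scores = {
        "autonomous_replication_eval": 0.05,
        "cyber_offense_eval": 0.62,
        "bio_uplift_eval": 0.10,
    }
    hits = tripped_gates(scores)
    if hits:
        print("PAUSE scaling; gates tripped:", [g.benchmark for g in hits])
    else:
        print("Continue: no capability gates tripped.")
```

The check itself is trivial; as the surrounding tables indicate, the hard problems are agreeing on which evaluations and thresholds count, and making the resulting pause binding on competitors.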
| Challenge | Description | Severity | Potential Solution |
|---|---|---|---|
| International Competition | US-China dynamics; neither wants to pause first | Critical | Treaty with verification |
| Corporate Competition | First-mover advantages; defection incentives | High | Regulatory mandate |
| Verification | How to confirm compliance | High | Compute monitoring |
| Definition | What counts as “frontier” AI | High | Clear technical thresholds |
Candidate enforcement mechanisms (a rough compute-accounting sketch follows the table):

| Mechanism | Feasibility | Effectiveness | Notes |
|---|---|---|---|
| Voluntary Compliance | Low | Very Low | No incentive to comply |
| National Regulation | Medium | Medium | Jurisdictional limits |
| International Treaty | Low-Medium | High if achieved | Requires major power agreement |
| Compute Restrictions | Medium | Medium-High | Physical infrastructure trackable |
| Social Pressure | Medium | Low | Insufficient against strong incentives |
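Compute restrictions rank relatively well on enforceability because training compute can be estimated from quantities that are hard to hide: chip shipments, data-center capacity, and disclosed model scale. The sketch below uses the common approximation that training compute is roughly 6 FLOP per parameter per training token, compared against a 10^26-operation reporting threshold similar to the one in the 2023 US Executive Order on AI; the example model figures are invented.

```python
REPORTING_THRESHOLD_FLOP = 1e26  # reporting trigger comparable to the 2023 US Executive Order

def training_flop(n_params: float, n_tokens: float) -> float:
    """Rough standard estimate: ~6 FLOP per parameter per training token."""
    return 6.0 * n_params * n_tokens

def requires_reporting(n_params: float, n_tokens: float,
                       threshold: float = REPORTING_THRESHOLD_FLOP) -> bool:
    """Would a training run of this size cross the compute reporting threshold?"""
    return training_flop(n_params, n_tokens) >= threshold

if __name__ == "__main__":
    # Invented example: a 1-trillion-parameter model trained on 20 trillion tokens.
    flop = training_flop(1e12, 20e12)
    print(f"Estimated training compute: {flop:.1e} FLOP")   # ~1.2e+26
    print("Crosses reporting threshold:", requires_reporting(1e12, 20e12))
```

The same arithmetic underlies compute caps: a cap is effectively a pause on any training run whose estimated FLOP exceeds an agreed ceiling, which is easier to verify than claims about what a model can or cannot do.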
| Consequence | Likelihood | Severity | Mitigation |
|---|---|---|---|
| Development Displacement | High | High | International coordination |
| Underground Development | Medium | Very High | Compute monitoring |
| Safety Researcher Exodus | Medium | Medium | Continued safety funding |
| Competitive Disadvantage | High | Variable | Coordinated action |
| Delayed Benefits | High | Medium | Risk-benefit analysis |
| Domain | Intervention | Outcome | Lessons |
|---|---|---|---|
| Nuclear Weapons | Various moratoria and treaties | Partial success; proliferation continued | Verification essential |
| Human Cloning | Research moratoria | Generally effective | Narrow scope helps |
| Gain-of-Function | Research pause (2014-2017) | Temporary; research resumed | Pressure to resume |
| Recombinant DNA | Asilomar conference (1975) | Self-regulation worked initially | Community buy-in crucial |
| CFCs | Montreal Protocol | Highly successful | Clear harm identification |
  • Narrow scope is more enforceable than broad moratoria
  • Verification mechanisms are essential for compliance
  • International coordination requires identifying mutual interests
  • Community buy-in from researchers enables voluntary compliance
  • Clear triggering conditions help define when restrictions apply
| Dimension | Assessment | Rationale |
|---|---|---|
| International Scalability | Unknown | Depends on coordination |
| Enforcement Scalability | Partial | Compute monitoring possible |
| SI Readiness | Yes (if works) | Would prevent reaching SI until prepared |
| Deception Robustness | N/A | External policy; doesn’t address model behavior |
| Condition | Importance | Current Status |
|---|---|---|
| International Agreement | Critical | Very limited |
| Clear Triggers | High | Undefined |
| Verification Methods | High | Underdeveloped |
| Alternative Pathway | Medium | Safety research ongoing |
| Industry Buy-In | Medium-High | Very low |
| Alternative | Relationship to Pause | Tradeoffs |
|---|---|---|
| Differential Progress | Accelerate safety, not slow capabilities | Competitive with capabilities |
| Responsible Scaling Policies | Conditional pauses at thresholds | Voluntary; lab-controlled |
| Compute Governance | Indirect slowdown through resource control | More enforceable |
| International Coordination | Framework for coordinated pause | Slower to achieve |
| Dimension | Rating | Notes |
|---|---|---|
| Tractability | Low | Severe coordination and enforcement challenges; no major lab has voluntarily paused |
| Effectiveness | Very High (if implemented) | Would directly address timeline concerns by buying time for safety research |
| Neglectedness | Medium | Active advocacy (FLI, PauseAI); major gap in implementation and enforcement mechanisms |
| Current Maturity | Early Advocacy | FLI letter catalyzed debate but no binding commitments achieved |
| Time Horizon | Immediate-Long Term | Could theoretically be implemented quickly but requires international coordination |
| Key Proponents | FLI, PauseAI, Yoshua Bengio | Grassroots movements and prominent AI researchers |
| Key Opponents | Major AI Labs, Andrew Ng | Competitive dynamics and concerns about practicality |

If implemented effectively, a pause or moratorium would address the following risks:

| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing Dynamics | Eliminates competitive pressure | Very High |
| Safety-Capability Gap | Time for safety research | Very High |
| Governance Lag | Time for policy development | High |
| Societal Preparation | Time for adaptation | High |
| Misalignment Potential | Prevents deployment of unaligned systems | Very High (during pause) |
However, critical limitations undermine this potential in practice:

  • Enforcement Infeasibility: No mechanism to enforce global compliance
  • Competitive Dynamics: Unilateral pause disadvantages safety-conscious actors
  • Displacement Risk: Development may move to less cautious jurisdictions
  • Definition Challenges: Unclear what should be paused
  • Political Unreality: Insufficient political will for meaningful implementation
  • Temporary Nature: Pauses must eventually end; doesn’t solve the underlying problem

While a full pause has not been achieved, international efforts toward AI governance have accelerated since the 2023 open letter:

| Initiative | Date | Outcome | Limitations |
|---|---|---|---|
| UK AI Safety Summit | Nov 2023 | Bletchley Declaration; AI Safety Institute network launched | Non-binding; no enforcement |
| International AI Safety Report | 2024-2025 | 100 AI experts contributed; comprehensive risk synthesis | Advisory only |
| Seoul AI Safety Summit | May 2024 | 16 companies signed voluntary safety commitments | No binding pause agreement |
| UN AI Governance | 2024-2025 | International Scientific Panel and Global Dialogue established | Early stage coordination |
| Source | Type | Key Contribution |
|---|---|---|
| FLI Open Letter | Open Letter | Original pause proposal with 30,000+ signatories |
| MIT Tech Review Analysis | Journalism | Six-month retrospective on the letter’s impact |
| Bengio et al. in Science | Academic Paper | “Managing extreme AI risks amid rapid progress” (May 2024) |
| International AI Safety Report | Government Report | 30-nation synthesis of AI safety evidence |
| PauseAI | Advocacy Org | Grassroots organizing and protest coordination |
| Organization | Role | Position |
|---|---|---|
| Future of Life Institute | Advocacy, funding | Strong pause advocate; published open letter |
| PauseAI | Grassroots activism | International moratorium advocacy |
| GovAI | Research | Policy analysis and internationalization frameworks |
| Major AI Labs | Development | Opposed to pause; signed voluntary commitments only |
| Resource | Description |
|---|---|
| Yoshua Bengio’s Blog | “Reasoning through arguments against taking AI safety seriously” |
| EA Forum AI Pause Debate | Community discussion of pause arguments |
| TIME Interview with Bengio | “We’re Not Ready for AI’s Risks” |
| Carnegie Endowment Analysis | “The AI Governance Arms Race” |

Pause/moratorium proposals affect the AI Transition Model through timeline modification (a toy illustration follows the table below):

| Factor | Parameter | Impact |
|---|---|---|
| AI Capabilities Trajectory | Development speed | Would directly slow capability advancement |
| Safety-Capability Gap | Gap width | Buys time for safety research to close the gap |
| Racing Dynamics | Competitive pressure | Eliminates racing if universally implemented |
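As a purely illustrative toy model (all growth rates and the pause timing are invented, with no empirical basis), the timeline logic can be sketched: capability advances each year, safety research advances more slowly, and a pause freezes capability growth while safety work continues, narrowing the gap for as long as that assumption holds.

```python
from typing import List, Optional

def gap_over_time(years: int, pause_start: Optional[int], pause_len: int) -> List[float]:
    """Toy model of the capability-minus-safety gap.
    Capability grows 1.0/yr (0 during a pause); safety grows 0.6/yr throughout."""
    capability, safety, gap = 0.0, 0.0, []
    for year in range(years):
        in_pause = pause_start is not None and pause_start <= year < pause_start + pause_len
        capability += 0.0 if in_pause else 1.0
        safety += 0.6
        gap.append(round(capability - safety, 2))
    return gap

if __name__ == "__main__":
    print("No pause:              ", gap_over_time(10, None, 0))
    print("Two-year pause, year 3:", gap_over_time(10, 3, 2))
```

The sketch makes one structural point: a pause narrows the gap only if safety progress actually continues during it, which is why the “False Security” and “Safety Researcher Exodus” concerns above matter as much as enforcement.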

A successfully implemented pause would fundamentally alter AI development timelines, providing potentially crucial time for safety research and governance development. However, partial or unilateral implementation may worsen outcomes by shifting development to less safety-conscious actors.