Pause AI
Quick Assessment
| Dimension | Assessment |
|---|---|
| Founded | May 2023 in Utrecht, Netherlands |
| Founder | Joep Meindertsma |
| Current Leadership | Maxime Fournes (CEO), Joep Meindertsma (Board Chair) |
| Core Proposal | International pause on frontier AI training until safety proven |
| Structure | Global grassroots network with national chapters |
| Public Support | ≈70% of Americans support pausing AI development1 |
| Policy Wins | None documented to date |
| Major Activities | Protests, lobbying, public education |
Overview
Pause AI is a volunteer-led advocacy movement that emerged in May 2023 calling for an indefinite international pause on the development of frontier artificial intelligence systems until safety can be guaranteed and democratic control established.2 Founded by software entrepreneur Joep Meindertsma in Utrecht, Netherlands, the organization argues that AI alignment research is lagging dangerously behind capability development, creating existential risks including potential human extinction.3
The movement’s core proposal is straightforward: stop building the most powerful AI systems until we understand how to keep them safe.4 Pause AI advocates for international cooperation to ensure no company or country develops unsafe AI, suggesting implementation mechanisms such as regulating the semiconductor chips necessary for training powerful models and establishing computing power thresholds above which training runs would be prohibited.4
Despite growing from a single-person protest to a global network with national chapters across multiple continents, Pause AI has not yet achieved documented policy successes in the form of enacted pauses, binding treaties, or liability laws.5 The movement operates primarily through public demonstrations, policymaker outreach, and community building, positioning itself as a grassroots counterweight to rapid AI development by major technology companies.
History and Key Milestones
Pause AI was founded in May 2023 when Joep Meindertsma, a software engineer and CEO of a software firm, put his job on hold to launch the movement.3 Meindertsma’s concerns about existential risks from artificial general intelligence (AGI) were first sparked by reading philosopher Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies.3 After years of following AI safety research, he concluded that alignment work was falling dangerously behind capability advances and organized direct action accordingly.
The movement’s first public action was a protest outside Microsoft’s Brussels lobbying office during an AI event featuring Sam Altman in May 2023.3 This initial demonstration established Pause AI’s strategy of high-visibility protests targeting major AI companies and policy venues.
Key milestones in the organization’s growth include:
- November 2023: Protested at the inaugural AI Safety Summit at Bletchley Park, where the movement criticized the resulting Bletchley Declaration as insufficient and called for binding international treaties comparable to the Montreal Protocol.3
- February 2024: Organized a gathering outside OpenAI’s headquarters in San Francisco, with protesters carrying signs reading “When in doubt, pause” and “Quit your job at OpenAI.”3
- May 2024: Coordinated simultaneous protests across thirteen countries, including the US, UK, Brazil, Germany, Australia, and Norway, ahead of the AI Seoul Summit.2
- February 2025: Expanded protest coordination to include Kenya and the Democratic Republic of Congo for demonstrations surrounding the French AI Summit.2
- June 2025: Led what they describe as their largest protest outside Google DeepMind’s London office and hosted the first PauseCon training event for activists.2
Throughout this period, Pause AI evolved from a single-country initiative to a global network with national organizations including Pause AI US, each running independent campaigns while coordinating internationally.2
Leadership and Organization
Joep Meindertsma established the organization after becoming convinced that humanity faces potential extinction “in a short frame of time” if AI development continues without adequate safety measures.3 With a background in databases and programming and 5-8 years of engagement with AI safety literature, Meindertsma left his role as CEO of a software firm to dedicate himself to the pause movement.3
Maxime Fournes was appointed CEO of Pause AI Global after serving as Director of Pause AI France.6 Fournes, a former machine learning engineer, joined the movement in November 2023 following a sabbatical prompted by concerns about AI risks after the release of GPT-3.5 in 2022.6 Under his leadership of the French chapter, the organization gained significant media visibility, with protests featured in approximately 30 French news publications, and Fournes became a prominent voice through television appearances and podcasts.6
The organization operates through national chapters led by country-specific coordinators, including Holly Elmore (United States), Joseph Miller (United Kingdom), Benjamin Schmidt (Germany), Nicolas Lacombe (Canada), Aman Agarwal (India), and Mark Brown (Australia).7
Core Proposals and Implementation
Pause AI’s central demand is a verifiable, publicly announced global pause on training frontier AI systems—those at or beyond current state-of-the-art capabilities.8 If international coordination proves impossible, the movement advocates for unilateral government moratoriums in individual countries as a fallback position.8
The pause proposal includes several supporting measures:
- Pre-deployment evaluations: Mandatory testing for dangerous capabilities before any powerful AI system can be released.8
- Copyright protections: Banning the training of AI systems on copyrighted material without explicit permission.8
- Liability frameworks: Holding model creators legally liable for crimes and harms enabled by their AI systems.8
For implementation, Pause AI proposes leveraging the concentrated nature of AI chip supply chains to enable oversight, arguing that individual jurisdictions—particularly the United States or California—could act first to establish precedent.8 The movement emphasizes that approximately 64-69% of Americans support pausing AI development in polling, though these polls are undated in available sources.8
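To illustrate how a compute-based threshold of the kind Pause AI describes might be operationalized, the sketch below estimates a training run’s total compute using the common rule of thumb of roughly 6 FLOPs per parameter per training token and compares it to a hypothetical trigger value. The threshold, function names, and example figures here are assumptions for illustration only, not numbers proposed by Pause AI.

```python
# Minimal sketch, assuming a compute-threshold rule of the kind described above.
# The ~6 FLOPs per parameter per token estimate is a standard rough heuristic for
# dense transformer training; the threshold and example numbers are illustrative.

THRESHOLD_FLOP = 1e26  # hypothetical trigger, in the spirit of recent compute-based rules


def estimated_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6.0 * num_parameters * num_tokens


def exceeds_threshold(num_parameters: float, num_tokens: float,
                      threshold: float = THRESHOLD_FLOP) -> bool:
    """Would this planned training run fall above the hypothetical compute threshold?"""
    return estimated_training_flop(num_parameters, num_tokens) >= threshold


if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 15 trillion tokens.
    flop = estimated_training_flop(70e9, 15e12)
    print(f"Estimated training compute: {flop:.2e} FLOP")  # ~6.3e24 FLOP
    print("Above threshold:", exceeds_threshold(70e9, 15e12))
```

The arithmetic itself is simple; the enforcement questions Pause AI raises (chip tracking, verification of declared parameter and token counts) concern whether such inputs can be trusted at all.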
The organization positions its proposals as a precautionary response to existential risk, arguing that the potential for catastrophic outcomes justifies halting development even in the absence of certainty about when dangerous capabilities might emerge. Pause AI explicitly rejects the framing that such measures would constitute “over-regulation,” comparing the situation to early warnings about climate change or tobacco harms that were initially dismissed.9
Relationship to AI Safety and Alignment Research
Pause AI’s advocacy is directly grounded in concerns emerging from the fields of AI safety and AI alignment research. The movement views the rapid scaling of AI capabilities without solved alignment as fundamentally reckless, increasing the probability of existential catastrophe.10
AI alignment—the subfield focused on ensuring AI systems’ goals and behaviors match human values and intentions—faces formidable technical challenges including outer alignment (correctly specifying goals) and inner alignment (ensuring systems robustly adopt those goals).11 High-capability systems may develop emergent power-seeking or deceptive behaviors, and current alignment techniques like reinforcement learning from human feedback (RLHF) and red-teaming are widely viewed as insufficient for superintelligent systems.1112
Prominent AI safety researchers have expressed varying levels of support for pause proposals. Eliezer Yudkowsky, a prominent figure in the LessWrong community, has advocated for even stronger measures than Pause AI, arguing that “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down.”13 Other researchers in the alignment community take more nuanced positions, with some supporting temporary pauses to enable safety breakthroughs while warning against indefinite halts that could be co-opted for economic rather than safety reasons.14
Organizations like OpenAI, Anthropic, and DeepMind maintain dedicated alignment teams working on scalable oversight, interpretability, and preference learning, but continue capability development in parallel with safety research—a practice Pause AI views as dangerously optimistic about the pace of alignment progress.12
Community Reception and Debate
Within the effective altruism and rationalist communities where AI safety concerns are prominent, Pause AI’s proposals have received a mixed reception. Multiple discussions on LessWrong and the EA Forum over the past 2-3 years reflect both support and skepticism.13
Strong advocates like Yudkowsky argue that pausing is insufficient and call for a complete shutdown of frontier AI development.13 However, other community members have raised concerns about indefinite pauses, arguing they could be “substantially worse than a brief pause” and potentially net-negative if hijacked for economic or political reasons unrelated to safety.14 These critics point to Germany’s 2011 nuclear moratorium, which was extended for reasons beyond the original safety rationale, as a cautionary tale about how pause mechanisms might be misused.14
Some in the AI safety community prioritize capability evaluation and control research over broad development halts, arguing that continued work on interpretability, oversight, and shutdown resistance is essential—and that pausing might slow safety research more than capability development if the pause is not carefully designed.15
The movement points to endorsements from thousands of AI researchers and industry leaders, including Yoshua Bengio, Stuart Russell, and Elon Musk, who signed the March 2023 open letter calling for a six-month pause on systems more powerful than GPT-4, a goal aligned with Pause AI’s mission.2 The letter did not, however, result in voluntary industry action.16
Criticisms and Challenges
Section titled “Criticisms and Challenges”Implementation Feasibility
Pause AI faces the fundamental challenge that no documented policy successes have resulted from its advocacy to date.5 Despite years of protests and public engagement, no government has implemented binding pauses, and the related March 2023 open letter did not produce voluntary industry action.16
Critics argue that uncoordinated pauses could fail catastrophically, putting the countries or companies that pause first at a competitive disadvantage while allowing others to continue development. This creates a coordination problem requiring either international treaties (which are difficult to negotiate and enforce) or acceptance of competitive disadvantages by pause-implementing jurisdictions.8
Industry and Policy Opposition
By mid-2025, U.S. policy had shifted decisively away from “Pause AI” rhetoric toward a pro-innovation “Build” stance emphasizing open-source development and competition with China.9 Industry voices frequently characterize pause proposals as innovation-limiting over-regulation, likening them to what they regard as misguided early fears about the internet.9
The Trump administration’s December 2025 executive order explicitly promoted U.S. AI leadership by limiting state regulations and creating an AI Litigation Task Force, framing pause advocacy as economically harmful.17 This represents a significant headwind for the movement’s policy goals in the United States.
Technical Objections
Some AI safety researchers argue that pause proposals rest on questionable assumptions about the relationship between compute thresholds and dangerous capabilities. The January 2025 release of DeepSeek’s R1 model—which achieved competitive performance at significantly lower training costs than Western competitors—demonstrated that capability advances may not be reliably predictable from compute alone.18 This complicates proposals to regulate based on computing power thresholds.
Additionally, critics note that static policies like development pauses may be insufficient for the dynamic nature of AI risks, pointing to California’s 2026 legislation requiring runtime behavioral safeguards for AI systems as a more adaptive approach.19
Scope Concerns
Some observers argue that Pause AI’s focus on frontier systems may miss important risks from more accessible AI technologies. Concerns about deepfakes, autonomous weapons, surveillance, and labor displacement require governance frameworks beyond development pauses.20 In this view, categorical pauses may distract from building nuanced regulatory capacity.
Impact and Effectiveness
Pause AI’s primary measurable impact has been organizational growth: the movement expanded from a single founder to a global network with national chapters across multiple continents within two years.2 The organization has successfully coordinated increasingly large protests, with its June 2025 London demonstration representing its largest to date.2
However, the movement has achieved no documented policy victories in terms of enacted pauses, binding international agreements, or liability frameworks as of early 2026.5 While Pause AI cites public opinion polling showing that approximately 70% of Americans support pausing AI development, this public support has not translated into legislative or regulatory action.4
The movement’s role may be primarily in shifting the Overton window—making discussions of strong AI regulation more politically acceptable—rather than achieving immediate policy implementation. Some policy observers note that movements like Pause AI can create political space for more moderate regulations even when their core demands are not met.
The organization also serves as a community hub for individuals concerned about AI existential risk who want to take direct action beyond research or technical work. This mobilization function may prove significant if a discrete triggering event (such as a high-profile AI accident or capability breakthrough) creates a policy window for stronger interventions.
Financial Sustainability
Available sources provide no information about Pause AI’s funding, budget, or donors. The organization is described as volunteer-driven and grassroots, suggesting it operates on a limited budget relying primarily on unpaid activists.28
This funding opacity makes it difficult to assess the organization’s financial sustainability or potential for scaling operations. The lack of documented major philanthropic backing may reflect donor uncertainty about the movement’s theory of change or effectiveness, or may simply reflect the organization’s early stage and grassroots nature.
For comparison, Open Philanthropy announced a “soft pause” on most longtermist funding (including AI risk) in November 2022, which was lifted approximately one month later after establishing a higher funding bar.21 This suggests that major funders in the space may be taking a selective approach rather than broadly supporting pause-focused advocacy.
Key Uncertainties
Key Questions
- Can pause proposals gain sufficient international coordination to avoid competitive dynamics that undermine implementation?
- What specific triggering events or capability demonstrations might shift policy opinion toward supporting development pauses?
- How do pause proposals interact with AI safety research—would a pause accelerate alignment breakthroughs or slow the field overall?
- Can pause mechanisms be designed to avoid mission creep or co-option for economic protectionism rather than safety?
- What governance capacity and verification systems would be required to enforce an effective pause?
- How should pause thresholds be set given uncertainty about the relationship between compute, capabilities, and risk?
The fundamental uncertainty underlying Pause AI’s mission is whether artificial intelligence development poses existential risks on timelines short enough to justify preventive action despite competitive and economic costs. This requires judgments about both AI capabilities timelines and the difficulty of alignment problems—both highly uncertain domains where expert opinion varies widely.
Even among those who accept high existential risk from advanced AI, significant uncertainty remains about whether pauses are the optimal intervention compared to alternatives like capability evaluation regimes, safety research acceleration, or targeted regulations on specific dangerous applications.
Sources
Footnotes
- The International PauseAI Protest: Activism under uncertainty - EA Forum
- Meet our new CEO, Maxime Fournes - Pause AI Substack
- Toward a Global Regime for Compute Governance - arXiv
- How Big Tech Lobbying Stopped US AI Regulation in 2025
- Open Problems and Fundamental Limitations of RLHF - Alignment Forum
- Pausing AI Developments Isn’t Enough. We Need to Shut it All Down - LessWrong
- The possibility of an indefinite AI pause - EA Forum
- AI Pause Open Letter Stokes Fear and Controversy - IEEE Spectrum
- Misrepresentations of California’s AI safety bill - Brookings
- Open Philanthropy no longer pausing longtermist funding - EA Forum