
ControlAI

Summary: ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers and securing support from 100+ UK parliamentarians) while promoting direct institutional approaches to preventing the development of AI superintelligence through binding regulation. The organization represents a significant shift toward democratic governance approaches in AI safety, though it faces skepticism about the feasibility of global coordination on AI development restrictions.
Type: AI safety advocacy organization (501(c)(4) nonprofit)
Founded: 2023 (emerged from Conjecture)
Location: London, England
Primary focus: Preventing artificial superintelligence (ASI) development through policy advocacy and lawmaker engagement
Key achievement: Briefed 150+ lawmakers; secured support from 100+ UK parliamentarians for binding AI regulation
Approach: Policy briefs, public campaigns, grassroots outreach, media engagement
Funding: Raising £1M (expected in 1-2 months from late 2025/early 2026); no major funders disclosed

Sources: controlai.com (official website); en.wikipedia.org (Wikipedia)

ControlAI is a UK-based organization focused on AI safety and policy advocacy, with the mission to prevent the development of artificial superintelligence (ASI) and ensure humanity retains control over advanced AI systems.1 The organization operates primarily through campaigns, policy proposals, and public engagement rather than technical research, emphasizing the need for democratic control over transformative AI development.

Founded in 2023 as an offshoot of Conjecture, ControlAI has positioned itself as one of the most professionalized AI activist groups, producing high-quality media campaigns and policy briefs targeted at lawmakers and the general public.2 The organization’s core tagline, “Fighting to keep humanity in control,” refers to control over deepfakes, AI scaling, foundation models, and advanced AI more broadly.3

ControlAI’s primary theory of change centers on the “Direct Institutional Plan” (DIP), launched in March 2025, which promotes safe-by-design AI engineering, metrology of intelligence, and human-controlled transformative AI.4 The organization warns that no current methods exist to contain systems more intelligent than all humanity combined, echoing warnings from AI scientists, world leaders, and CEOs about potential human extinction risks.5

ControlAI was founded in 2023 by Andrea Miotti, emerging as an offshoot of Conjecture, an AI startup led by Connor Leahy.6 The organization was established in the lead-up to the AI Safety Summit at Bletchley Park, UK, where it made a notable splash by hiring a blimp to fly over the summit as part of its advertising campaigns.7

Andrea Miotti, who holds a PhD in machine learning robustness and previously worked at Palantir and BCG, founded the organization after leading communications and policy efforts at Conjecture.8 The organization operates as a nonprofit “private company limited by guarantee” in the UK and as a 501(c)(4) nonprofit in the US.9

From its inception through 2024, ControlAI ran several major campaigns:

  • October 2023: Prevented international endorsement of scaling policies at the AI Safety Summit10
  • November-December 2023: Opposed exemptions for foundation models in the EU AI Act11
  • December 2023 - June 2024: Ran a major campaign against deepfakes12

The organization has evolved from a think-tank model toward grassroots outreach and direct engagement with policymakers, shifting its focus to preventing ASI development through direct advocacy.13

In March 2025, ControlAI launched “The Direct Institutional Plan” as its comprehensive strategy for achieving binding regulation on advanced AI systems.14 The UK pilot campaign, running from November 2024 through May 2025, demonstrated significant success: the organization briefed 84 cross-party UK parliamentarians (of whom roughly 4 in 10 were MPs, 3 in 10 Lords, and 2 in 10 members of devolved legislatures), with over 20 publicly supporting its campaign for binding regulation.15

By December 2025, a broad coalition of lawmakers had called for binding regulation on powerful AI systems, a major milestone for the organization’s advocacy efforts.16 As of early 2026, the organization continues scaling the DIP to the UK executive branch and expanding to the US and other countries.17

  • Andrea Miotti: Founder, Executive Director, and CEO
  • Connor Leahy: Advisor
  • Gabriel Alfour: Advisor

Andrea Miotti serves as the organization’s public face, featured in media outlets and podcasts discussing AI extinction risks.18 The organization reportedly has 9 employees as of 2024.19

ControlAI’s flagship initiative is the Direct Institutional Plan, a three-phase policy framework (Safety, Stability, Flourishing) that uses computing power as a proxy for AI capabilities.20 The plan advocates for:

  • Bans on superintelligence development: Prohibition of systems more intelligent than all humanity combined
  • Dangerous capability restrictions: Preventing automated AI research, advanced hacking capabilities, and recursive self-improvement
  • Pre-deployment demonstrations: Requiring developers to prove system safety before release
  • AI development licensing: Establishing regulatory frameworks for advanced AI development
  • Mandatory kill switches: Requiring emergency shutdown capabilities for advanced systems
  • Compute cluster monitoring: Tracking large-scale AI training infrastructure21
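
The DIP itself is a policy document and specifies no code, but the compute-as-proxy idea above can be illustrated with a short sketch. The example below is a minimal illustration under stated assumptions: it uses the common heuristic of roughly 6 FLOP per parameter per training token to estimate training compute, and compares that estimate against an example regulatory threshold (the 10^25 FLOP figure from the EU AI Act is used purely for illustration; it is not a threshold taken from the DIP).

```python
# Minimal sketch: compute as a proxy for AI capability in a licensing/monitoring
# regime. The 6 * params * tokens heuristic and the example threshold are
# illustrative assumptions, not figures taken from ControlAI's DIP.

EXAMPLE_THRESHOLD_FLOP = 1e25  # e.g. the EU AI Act's systemic-risk trigger


def training_flop(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6 * parameters * tokens


def requires_review(parameters: float, tokens: float,
                    threshold: float = EXAMPLE_THRESHOLD_FLOP) -> bool:
    """Flag a planned training run whose estimated compute crosses the threshold."""
    return training_flop(parameters, tokens) >= threshold


if __name__ == "__main__":
    # Two hypothetical training runs.
    for name, params, tokens in [("small run", 7e9, 2e12),
                                 ("frontier run", 400e9, 15e12)]:
        flop = training_flop(params, tokens)
        print(f"{name}: ~{flop:.1e} FLOP, "
              f"requires review: {requires_review(params, tokens)}")
```

Under these assumptions the 7B-parameter run (~8.4e22 FLOP) stays well below the threshold, while the 400B-parameter run (~3.6e25 FLOP) crosses it; a bright-line trigger of this kind is what the DIP's compute-cluster monitoring and licensing proposals rely on.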

The DIP is designed as a collaborative framework open to citizens and organizations worldwide, emphasizing independent participation rather than exclusive partnerships.22 ControlAI has developed country-specific policy briefs and offers advice to influential individuals and organizations via a dedicated partners page.23

The organization’s “A Narrow Path” policy paper was systematically evaluated in a policy sprint red-teamed by Apart Research in July 2025.24 The sprint evaluated six policies, with code released for reproducibility, and demonstrated scalable monitoring of capability acquisition using phase-transition analysis and agent-dynamics simulations across eight sectors (from enterprise to critical infrastructure).25
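
Apart Research’s released code is not reproduced here; the sketch below is only a toy illustration, with assumed parameters, of what monitoring capability acquisition via phase transitions could look like: each sector’s AI capability adoption is modeled as a logistic curve, and a monitor flags the time step at which adoption jumps fastest.

```python
# Toy illustration only (not Apart Research's released code): model capability
# adoption per sector as a logistic curve and flag the step with the sharpest
# jump, treating that jump as the sector's "phase transition".
import math

# Hypothetical (growth rate, adoption midpoint) per sector.
SECTORS = {
    "enterprise": (0.9, 10),
    "healthcare": (0.6, 18),
    "critical_infrastructure": (0.4, 30),
}


def adoption(rate: float, midpoint: float, t: int) -> float:
    """Logistic adoption level in [0, 1] at time step t."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))


def phase_transition_step(rate: float, midpoint: float, steps: int = 50) -> int:
    """Return the step at which adoption increases fastest."""
    deltas = [adoption(rate, midpoint, t + 1) - adoption(rate, midpoint, t)
              for t in range(steps)]
    return max(range(steps), key=lambda t: deltas[t])


if __name__ == "__main__":
    for sector, (rate, midpoint) in SECTORS.items():
        step = phase_transition_step(rate, midpoint)
        print(f"{sector}: sharpest adoption jump at step {step}")
```

In a real monitoring regime the adoption curves would come from measured deployment or capability data rather than assumed parameters; the point is simply that a sharp, detectable jump distinguishes a phase transition from gradual diffusion.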

ControlAI has created tools enabling citizens to contact lawmakers, executives, civil service, media, and civil society in their jurisdictions to advocate for superintelligence risk policies.26 These tools have facilitated over 150,000 messages sent to lawmakers.27

ControlAI’s most significant achievement has been its success in engaging policymakers on AI extinction risks:

  • Briefed 150+ lawmakers on AI extinction risk
  • Secured support from 100+ UK parliamentarians for their campaign
  • Achieved public endorsement from over 20 cross-party UK parliamentarians (more than 1 in 3 briefed)
  • Drafted an AI bill and presented it to the UK Prime Minister’s office28

The organization’s cold-email campaign to British MPs and Lords engaged 70 parliamentarians, 31 of whom publicly opposed ASI development, a conversion rate that defied initial predictions of resistance to strong extinction-risk messaging.29

ControlAI has commissioned multiple YouGov polls demonstrating strong UK public support for AI safety measures:

January 2025 YouGov Poll (UK):

  • 73% favor halting rapid superintelligence development
  • 74% support empowering the Artificial Intelligence Safety Institute (AISI) as regulator
  • 87% support safety regime for AI development
  • 76% favor monitoring large compute clusters
  • 82% support mandatory AISI testing and company accountability30

Additional Public Engagement:

  • 79% support for a UK AI regulator
  • 87% want developers to prove safety before release
  • 150+ million views on AI risk content
  • 150,000+ messages sent to lawmakers via their tools31

ControlAI has achieved significant media coverage, with mentions in:

  • The Spectator (January 29, 2025) on DeepSeek stakes
  • Newsweek (January 31, 2025) on AI extinction race
  • Financial Times (September 12, 2024) on OpenAI bioweapon risks
  • The Guardian (December 8, 2025) on parliamentarians calling for regulation
  • City A.M. (December 8, 2025) on MPs pushing for stricter AI rules
  • The Guardian (January 28, 2025) on former OpenAI researcher warnings
  • New York Times (March 14, 2024) on powerful AI preparedness
  • The Times (December 6, 2024) on scheming ChatGPT32

ControlAI has launched several targeted projects:

  • “Artificial Guarantees” (January 2025): Documenting inconsistencies by AI companies, highlighting shifting statements on risks and broken promises33
  • “What leaders say about AI” (September 2024): Compilation of warnings from AI leaders and researchers34
  • Rational Animations collaboration: Video “What if AI just keeps getting smarter?” garnered 1.4 million views, warning of superintelligent, self-improving AI leading to extinction via indifference35

Critique of Open Philanthropy’s Approach

ControlAI has positioned itself in opposition to Open Philanthropy’s approach to AI safety, arguing that the funder’s strategy is “undemocratic” and centralizes control in a small group of “trusted” actors.36 The organization’s “Direct Institutional Plan” dedicates over 500 words to criticizing Open Philanthropy (now Coefficient Giving) as the main funder in AI safety, highlighting:

  • Over $80 million provided to establish the Center for Security and Emerging Technology (CSET), which placed fellows in the US Department of Commerce and White House
  • Funding for the Horizon Institute supporting placements in US congressional offices and executive agencies
  • Grants to OpenAI in exchange for a board seat for Holden Karnofsky
  • Acting as “sole arbiter” of trustworthiness in AGI control strategy37

ControlAI argues that Open Philanthropy’s approach of building influence through strategic placements and supporting “responsible actors” building superintelligence (a view associated with figures like Holden Karnofsky and Will MacAskill) is fundamentally flawed compared to their civic engagement model emphasizing democratic processes.38

The organization has been vocal in criticizing frontier AI companies for what it characterizes as systematically undermining alignment research and regulation to race toward AGI. ControlAI argues that companies are driven by “utopian beliefs” in AGI ushering in an ideal world rather than prioritizing safety.39

Specific criticisms include:

  • Insufficient investment in alignment (only $200 million and a handful of researchers working on problems requiring decades and trillions)
  • Companies collaborating to unlock resources like chips and power while ignoring governance
  • Shifting baseline tactics and broken promises documented in their “Artificial Guarantees” project
  • Racing to ASI despite warnings, downplaying risks even while acknowledging issues like bioweapon misuse40

Within the effective altruism and AI safety communities, ControlAI’s approach has received mixed reception:

Positive Views:

  • Described as the “most x-risk-focused” 501(c)(4) organization
  • Praised for concrete campaigns with tangible results (31 public commitments from MPs/Lords)
  • Collaboration with Rational Animations characterized as “really great”41

Criticisms:

  • Donors and community members express skepticism that global “pause AI” regulations are feasible due to coordination challenges
  • Concerns that detection without enforcement is insufficient—companies could ignore reports
  • Debates over impact: videos are effective for raising awareness but less successful at converting views into actions such as emails or calls
  • Tension with EA leadership favoring cautious superintelligence development over outright bans42

Some critics have characterized ControlAI as a group that “dramatically warns of AI’s purported extinction risk,” potentially sensationalizing risks.43 However, CEO Andrea Miotti has responded that critics often nitpick experimental setups but should focus on trends in AI behaviors like self-preservation, resistance to shutdown, and strategic deception.44

ControlAI operates primarily in the AI policy and advocacy space rather than technical alignment research. The organization’s approach is grounded in the assessment that alignment is fundamentally intractable with current resources:

  • Solving alignment would require decades of research and trillions in investment to address issues like identifying human values, reconciling contradictions, and predicting side-effects
  • Currently only $200 million is invested, mostly in patching issues rather than solving core problems
  • Progress is resource-limited rather than insight-limited, leading to opaque, rapidly advancing systems where experts fail to predict new skills or internal workings45

The organization emphasizes that AI control is not just a technical problem but requires institutional rules and democratic governance.46 This positions ControlAI distinctly from technical alignment organizations like Anthropic, OpenAI’s alignment teams, or Redwood Research, which focus on developing technical solutions for controlling AI systems.

ControlAI’s warnings align with broader concerns in the AI safety community about fundamental challenges in controlling superintelligent systems:

  • Self-modifying code and learning in ways that cannot be anticipated may make the control problem inherently insoluble
  • Increasing AI capability reduces controllability; self-improving AI may resist goal changes and pursue instrumental goals like resource acquisition
  • Verification is extremely difficult due to AI’s software nature, enabling hiding of modifications47

Effective Altruism and Rationalist Communities

Discussions on the EA Forum and LessWrong portray ControlAI as strongly x-risk-focused but reveal debates about the feasibility of their approach:

Support:

  • Evolution from think tank to concrete grassroots campaigns praised
  • High-quality content production acknowledged
  • Donor support for their regulation efforts despite skepticism about global enforcement48

Skepticism:

  • Questions about whether moderate regulations or alignment research will succeed
  • Concerns that weaker systems can’t oversee stronger ones, with no known methods for superintelligent oversight
  • Broader EA critiques that subhuman systems are inadequate for superintelligent oversight, requiring superhuman capability for proper alignment
  • Unresolved issues around stability under reflection and steering stronger systems49

The organization’s positioning against prominent EA figures’ views on “responsible actors” building superintelligence has created some tension with EA leadership.50

Several important questions remain about ControlAI’s approach and impact:

  1. Scalability of Success: Can the organization replicate its UK success in other countries, particularly the US where regulatory dynamics differ significantly?

  2. Enforcement Mechanisms: How would proposed bans on superintelligence development be enforced internationally, given coordination challenges and incentive structures?

  3. Technical Feasibility of Proposals: Are the organization’s proposed capability thresholds and monitoring systems technically viable, and can they keep pace with rapid AI progress?

  4. Relationship to Technical Safety Work: How does ControlAI’s advocacy-first approach complement or conflict with technical alignment research efforts?

  5. Long-term Funding Sustainability: With no major disclosed funders and only £1M expected in fundraising, can the organization sustain operations at the scale needed for global impact?

  6. Impact on AI Development: Will the organization’s campaigns lead to meaningful policy changes, or primarily serve to raise awareness without shifting development trajectories?

  7. Alternative Approaches: Is preventing superintelligence development the optimal strategy, or should resources focus on alignment research, differential development, or other interventions?

  1. ControlAI Overview

  2. A Brief Guide to Anti-AI Activist Groups

  3. ControlAI Homepage

  4. London Futurists - A Narrow Path to a Good Future with AI

  5. ControlAI Homepage

  6. ControlAI About Page

  7. Transformer News - Anti-AI Activist Groups

  8. London Futurists Interview

  9. ControlAI About Page

  10. ControlAI Past Campaigns

  11. ControlAI Past Campaigns

  12. ControlAI Past Campaigns

  13. EA Forum - Overview of AI Safety Outreach Grassroots Orgs

  14. ControlAI DIP

  15. ControlAI Engagement Learnings

  16. ControlAI News - Lawmaker Coalition

  17. ControlAI DIP

  18. London Futurists Interview

  19. RocketReach - Control AI Profile

  20. London Futurists Interview

  21. ControlAI DIP

  22. ControlAI DIP

  23. ControlAI Partners Page

  24. Apart Research - Red Teaming A Narrow Path

  25. Apart Research - Red Teaming A Narrow Path

  26. ControlAI DIP

  27. ControlAI Homepage

  28. ControlAI Homepage

  29. EA Forum - Overview of AI Safety Outreach Grassroots Orgs

  30. ControlAI Polls

  31. ControlAI Homepage

  32. ControlAI Media Coverage

  33. ControlAI Projects

  34. ControlAI Projects

  35. EA Forum - RA x ControlAI Video

  36. ControlAI - Designing the DIP

  37. ControlAI - Designing the DIP

  38. ControlAI - Designing the DIP

  39. ControlAI Risks Page

  40. ControlAI Engagement Learnings

  41. EA Forum - Overview of AI Safety Outreach Grassroots Orgs

  42. ControlAI - Designing the DIP

  43. Futurism - AI Models Survival Drive

  44. Futurism - AI Models Survival Drive

  45. ControlAI Risks Page

  46. ControlAI News - Avoiding Extinction with Andrea Miotti

  47. Claire Berlinski Substack - Is the AI Control Problem Insoluble?

  48. LessWrong - Where I Am Donating in 2025

  49. EA Forum - Some Quick Thoughts on AI Is Easy to Control

  50. ControlAI - Designing the DIP