ControlAI
Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | AI safety advocacy organization (501(c)(4) nonprofit) |
| Founded | 2023 (emerged from Conjecture) |
| Location | London, England |
| Primary Focus | Preventing artificial superintelligence (ASI) development through policy advocacy and lawmaker engagement |
| Key Achievement | Briefed 150+ lawmakers; secured support from 100+ UK parliamentarians for binding AI regulation |
| Approach | Policy briefs, public campaigns, grassroots outreach, media engagement |
| Funding | Raising £1M (expected within 1-2 months, as of late 2025/early 2026); no major funders disclosed |
Key Links
| Source | Link |
|---|---|
| Official Website | controlai.com |
| Wikipedia | en.wikipedia.org |
Overview
ControlAI is a UK-based organization focused on AI safety and policy advocacy, with the mission to prevent the development of artificial superintelligence (ASI) and ensure humanity retains control over advanced AI systems.1 The organization operates primarily through campaigns, policy proposals, and public engagement rather than technical research, emphasizing the need for democratic control over transformative AI development.
Founded in 2023 as an offshoot of Conjecture, ControlAI has positioned itself as one of the most professionalized AI activist groups, producing high-quality media campaigns and policy briefs targeted at lawmakers and the general public.2 The organization’s core tagline, “Fighting to keep humanity in control,” reflects its focus on control over deepfakes, AI scaling, foundation models, and AI overall.3
ControlAI’s primary theory of change centers on the “Direct Institutional Plan” (DIP), launched in March 2025, which promotes safe-by-design AI engineering, metrology of intelligence, and human-controlled transformative AI.4 The organization warns that no current methods exist to contain systems more intelligent than all humanity combined, echoing warnings from AI scientists, world leaders, and CEOs about potential human extinction risks.5
History
Founding and Early Development
ControlAI was founded in 2023 by Andrea Miotti, emerging as an offshoot of Conjecture, an AI startup led by Connor Leahy.6 The organization was established in the lead-up to the AI Safety Summit at Bletchley Park, UK, where it made a notable splash by hiring a blimp to fly over the summit as part of its advertising campaigns.7
Andrea Miotti, who holds a PhD in machine learning robustness and previously worked at Palantir and BCG, founded the organization after leading communications and policy efforts at Conjecture.8 The organization operates as a nonprofit “private company limited by guarantee” in the UK and as a 501(c)(4) nonprofit in the US.9
Evolution of Strategy
From its inception through 2024, ControlAI ran several major campaigns:
- October 2023: Prevented international endorsement of scaling policies at the AI Safety Summit10
- November-December 2023: Opposed exemptions for foundation models in the EU AI Act11
- December 2023 - June 2024: Ran a major campaign against deepfakes12
The organization has evolved from a think tank model toward grassroots outreach and direct engagement with policymakers, shifting its focus to preventing ASI development through direct institutional work.13
Recent Developments (2025-2026)
In March 2025, ControlAI launched “The Direct Institutional Plan” as their comprehensive strategy for achieving binding regulation on advanced AI systems.14 The UK pilot campaign, running from November 2024 through May 2025, demonstrated significant success: the organization briefed 84 cross-party UK parliamentarians (4 in 10 MPs, 3 in 10 Lords, and 2 in 10 from devolved legislatures), with over 20 publicly supporting their campaign for binding regulation.15
By December 2025, a broad coalition of lawmakers had called for binding regulation on powerful AI systems, representing a major milestone for the organization’s advocacy efforts.16 As of early 2026, the organization continues scaling the DIP to the UK executive branch and expanding to the US and other countries.17
Leadership and Team
Andrea Miotti serves as the organization’s public face, appearing in media outlets and on podcasts to discuss AI extinction risks.18 The organization reportedly had 9 employees as of 2024.19
Policy Approach and Campaigns
The Direct Institutional Plan (DIP)
ControlAI’s flagship initiative is the Direct Institutional Plan, a three-phase policy framework (Safety, Stability, Flourishing) that uses computing power as a proxy for AI capabilities.20 The plan advocates for the following (see the illustrative sketch after this list):
- Bans on superintelligence development: Prohibition of systems more intelligent than all humanity combined
- Dangerous capability restrictions: Preventing automated AI research, advanced hacking capabilities, and recursive self-improvement
- Pre-deployment demonstrations: Requiring developers to prove system safety before release
- AI development licensing: Establishing regulatory frameworks for advanced AI development
- Mandatory kill switches: Requiring emergency shutdown capabilities for advanced systems
- Compute cluster monitoring: Tracking large-scale AI training infrastructure21
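To make the compute-proxy idea concrete, here is a minimal sketch of how a compute-based oversight trigger could work. It is illustrative only: the 10^25 FLOP cutoff (borrowed from the EU AI Act’s systemic-risk threshold for general-purpose models), the field names, and the automated-AI-research flag are assumptions for this sketch, not ControlAI’s actual criteria.

```python
# Illustrative toy only: threshold value, field names, and flags are
# assumptions for this sketch, not ControlAI's actual proposal.

from dataclasses import dataclass

# Hypothetical licensing threshold; the EU AI Act uses 10^25 training FLOP
# as its presumption-of-systemic-risk trigger for general-purpose models.
LICENSING_THRESHOLD_FLOP = 1e25


@dataclass
class TrainingRun:
    developer: str
    estimated_flop: float        # estimated total training compute
    automates_ai_research: bool  # run aimed at automating AI R&D (a DIP red line)


def requires_oversight(run: TrainingRun) -> bool:
    """Flag runs that a compute-based licensing regime would capture."""
    return run.estimated_flop >= LICENSING_THRESHOLD_FLOP or run.automates_ai_research


if __name__ == "__main__":
    runs = [
        TrainingRun("SmallLab", estimated_flop=3e22, automates_ai_research=False),
        TrainingRun("FrontierLab", estimated_flop=2e25, automates_ai_research=False),
        TrainingRun("AutoResearchLab", estimated_flop=8e23, automates_ai_research=True),
    ]
    for run in runs:
        verdict = "requires licence/monitoring" if requires_oversight(run) else "below threshold"
        print(f"{run.developer}: {verdict}")
```

In the DIP’s framing, such thresholds are paired with monitoring of large compute clusters, so that runs above the cutoff are visible to regulators in the first place.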
The DIP is designed as a collaborative framework open to citizens and organizations worldwide, emphasizing independent participation rather than exclusive partnerships.22 ControlAI has developed country-specific policy briefs and offers advice to influential individuals and organizations via a dedicated partners page.23
“A Narrow Path” Policy Framework
The organization’s “A Narrow Path” policy paper underwent systematic evaluation through a policy sprint red-teamed by Apart Research in July 2025.24 The sprint evaluated six policies, with code released for reproducibility, demonstrating scalable monitoring of capability acquisition via phase transitions and agent-dynamics simulations across eight sectors (from enterprise to critical infrastructure).25
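As a rough illustration of what “monitoring capability acquisition via phase transitions” could look like, the toy detector below flags abrupt jumps in a benchmark score series across model checkpoints. The data, the jump threshold, and the detection rule are invented for this sketch and are not Apart Research’s or ControlAI’s actual methodology.

```python
# Toy illustration only: a simple jump detector over a capability-metric
# time series, standing in for "monitoring capability acquisition via
# phase transitions". The data and the 0.15 jump threshold are invented.

from typing import List, Tuple

JUMP_THRESHOLD = 0.15  # assumed minimum score jump that counts as a "phase transition"


def detect_phase_transitions(scores: List[float]) -> List[Tuple[int, float]]:
    """Return (index, jump size) for consecutive evaluations whose score
    increases by more than JUMP_THRESHOLD, i.e. candidate capability jumps."""
    return [
        (i, scores[i] - scores[i - 1])
        for i in range(1, len(scores))
        if scores[i] - scores[i - 1] > JUMP_THRESHOLD
    ]


if __name__ == "__main__":
    # Hypothetical benchmark scores across successive model checkpoints.
    benchmark_scores = [0.12, 0.14, 0.15, 0.17, 0.41, 0.44, 0.46]
    for index, jump in detect_phase_transitions(benchmark_scores):
        print(f"Checkpoint {index}: score jumped by {jump:.2f}, flag for review")
```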
Advocacy Tools and Resources
ControlAI has created tools enabling citizens to contact lawmakers, executives, the civil service, media, and civil society in their jurisdictions to advocate for policies on superintelligence risk.26 These tools have facilitated over 150,000 messages sent to lawmakers.27
Impact and Achievements
Section titled “Impact and Achievements”Lawmaker Engagement
ControlAI’s most significant achievement has been its success in engaging policymakers on AI extinction risks:
- Briefed 150+ lawmakers on AI extinction risk
- Secured support from 100+ UK parliamentarians for their campaign
- Achieved public endorsement from over 20 cross-party UK parliamentarians (more than 1 in 3 briefed)
- Drafted and presented an AI bill to the UK Prime Minister’s office28
The organization’s cold-email campaign to British MPs and Lords engaged 70 parliamentarians, 31 of whom publicly opposed ASI development, a conversion rate that defied initial predictions of resistance to strong extinction-risk messaging.29
Public Opinion and Reach
ControlAI has commissioned multiple YouGov polls demonstrating strong UK public support for AI safety measures:
January 2025 YouGov Poll (UK):
- 73% favor halting rapid superintelligence development
- 74% support empowering the Artificial Intelligence Safety Institute (AISI) as regulator
- 87% support a safety regime for AI development
- 76% favor monitoring large compute clusters
- 82% support mandatory AISI testing and company accountability30
Additional Public Engagement:
- 79% support for a UK AI regulator
- 87% want developers to prove safety before release
- 150+ million views on AI risk content
- 150,000+ messages sent to lawmakers via their tools31
Media Presence
ControlAI has achieved significant media coverage, with mentions in:
- The Spectator (January 29, 2025) on DeepSeek stakes
- Newsweek (January 31, 2025) on AI extinction race
- Financial Times (September 12, 2024) on OpenAI bioweapon risks
- The Guardian (December 8, 2025) on parliamentarians calling for regulation
- City A.M. (December 8, 2025) on MPs pushing for stricter AI rules
- The Guardian (January 28, 2025) on former OpenAI researcher warnings
- New York Times (March 14, 2024) on powerful AI preparedness
- The Times (December 6, 2024) on scheming ChatGPT32
Special Projects
ControlAI has launched several targeted projects:
- “Artificial Guarantees” (January 2025): Documenting inconsistencies by AI companies, highlighting shifting statements on risks and broken promises33
- “What leaders say about AI” (September 2024): Compilation of warnings from AI leaders and researchers34
- Rational Animations collaboration: Video “What if AI just keeps getting smarter?” garnered 1.4 million views, warning of superintelligent, self-improving AI leading to extinction via indifference35
Criticisms and Controversies
Critique of Open Philanthropy’s Approach
ControlAI has positioned itself in opposition to Open Philanthropy’s approach to AI safety, arguing that the funder’s strategy is “undemocratic” and centralizes control in a small group of “trusted” actors.36 The organization’s “Direct Institutional Plan” dedicates over 500 words to criticizing Open Philanthropy (now Coefficient Giving) as the main funder in AI safety, highlighting:
- Over $80 million provided to establish the Center for Security and Emerging Technology (CSET), which placed fellows in the US Department of Commerce and White House
- Funding for the Horizon Institute supporting placements in US congressional offices and executive agencies
- Grants to OpenAI in exchange for a board seat for Holden Karnofsky
- Acting as “sole arbiter” of trustworthiness in AGI control strategy37
ControlAI argues that Open Philanthropy’s approach of building influence through strategic placements and supporting “responsible actors” building superintelligence (a view associated with figures like Holden Karnofsky and Will MacAskill) is fundamentally flawed compared to their civic engagement model emphasizing democratic processes.38
Industry Relationship Critiques
The organization has been vocal in criticizing frontier AI companies for what it characterizes as systematically undermining alignment research and regulation to race toward AGI. ControlAI argues that companies are driven by “utopian beliefs” in AGI ushering in an ideal world rather than prioritizing safety.39
Specific criticisms include:
- Insufficient investment in alignment (only $200 million and a handful of researchers working on problems requiring decades of research and trillions of dollars)
- Companies collaborating to unlock resources like chips and power while ignoring governance
- Shifting baseline tactics and broken promises documented in their “Artificial Guarantees” project
- Racing to ASI despite warnings, downplaying risks even while acknowledging issues like bioweapon misuse40
Skepticism About Plan Feasibility
Within the effective altruism and AI safety communities, ControlAI’s approach has received mixed reception:
Positive Views:
- Described as the “most x-risk-focused” 501(c)(4) organization
- Praised for concrete campaigns with tangible results (31 public commitments from MPs/Lords)
- Collaboration with Rational Animations characterized as “really great”41
Criticisms:
- Donors and community members express skepticism that global “pause AI” regulations are feasible due to coordination challenges
- Concerns that detection without enforcement is insufficient—companies could ignore reports
- Debates over impact: videos are effective for raising awareness but less successful at converting views into actions such as emails or calls
- Tension with EA leadership favoring cautious superintelligence development over outright bans42
Sensationalism Concerns
Some critics have characterized ControlAI as a group that “dramatically warns of AI’s purported extinction risk,” potentially sensationalizing risks.43 However, CEO Andrea Miotti has responded that critics often nitpick experimental setups but should focus on trends in AI behaviors like self-preservation, resistance to shutdown, and strategic deception.44
Relationship to AI Safety and Alignment
ControlAI operates primarily in the AI policy and advocacy space rather than in technical alignment research. The organization’s approach is grounded in the assessment that alignment is fundamentally intractable with current resources:
- Solving alignment would require decades of research and trillions in investment to address issues like identifying human values, reconciling contradictions, and predicting side-effects
- Currently only $200 million is invested, mostly in patching issues rather than solving core problems
- AI capability progress is resource-limited rather than insight-limited, producing opaque, rapidly advancing systems whose new skills and internal workings experts fail to predict45
The organization emphasizes that AI control is not just a technical problem but requires institutional rules and democratic governance.46 This positions ControlAI distinctly from technical alignment organizations like Anthropic, OpenAI’s alignment teams, or Redwood Research, which focus on developing technical solutions for controlling AI systems.
ControlAI’s warnings align with broader concerns in the AI safety community about fundamental challenges in controlling superintelligent systems:
- Self-modifying code and learning from patterns that cannot be anticipated may make control inherently insoluble
- Increasing AI capability reduces controllability; self-improving AI may resist goal changes and pursue instrumental goals like resource acquisition
- Verification is extremely difficult because AI systems are software, which makes modifications easy to hide47
Community Reception
Effective Altruism and Rationalist Communities
Discussions on the EA Forum and LessWrong portray ControlAI as strongly x-risk-focused but reveal debates about the feasibility of their approach:
Support:
- Evolution from think tank to concrete grassroots campaigns praised
- High-quality content production acknowledged
- Donor support for their regulation efforts despite skepticism about global enforcement48
Skepticism:
- Questions about whether moderate regulations or alignment research will succeed
- Concerns that weaker systems can’t oversee stronger ones, with no known methods for superintelligent oversight
- Broader EA critiques that subhuman systems are inadequate for superintelligent oversight, requiring superhuman capability for proper alignment
- Unresolved issues around stability under reflection and steering stronger systems49
The organization’s positioning against prominent EA figures’ views on “responsible actors” building superintelligence has created some tension with EA leadership.50
Key Uncertainties
Several important questions remain about ControlAI’s approach and impact:
- Scalability of Success: Can the organization replicate its UK success in other countries, particularly the US, where regulatory dynamics differ significantly?
- Enforcement Mechanisms: How would proposed bans on superintelligence development be enforced internationally, given coordination challenges and incentive structures?
- Technical Feasibility of Proposals: Are the organization’s proposed capability thresholds and monitoring systems technically viable, and can they keep pace with rapid AI progress?
- Relationship to Technical Safety Work: How does ControlAI’s advocacy-first approach complement or conflict with technical alignment research efforts?
- Long-term Funding Sustainability: With no major disclosed funders and only £1M expected in fundraising, can the organization sustain operations at the scale needed for global impact?
- Impact on AI Development: Will the organization’s campaigns lead to meaningful policy changes, or primarily serve to raise awareness without shifting development trajectories?
- Alternative Approaches: Is preventing superintelligence development the optimal strategy, or should resources focus on alignment research, differential development, or other interventions?