ControlAI
ControlAI is a UK-based advocacy organization that has achieved notable policy engagement, briefing 150+ lawmakers and securing support from 100+ UK parliamentarians, while promoting direct institutional approaches to preventing the development of artificial superintelligence through binding regulation. The organization represents a significant shift toward democratic governance approaches in AI safety, though it faces skepticism about the feasibility of global coordination on AI development restrictions.
Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | AI safety advocacy organization (501(c)(4) nonprofit) |
| Founded | 2023 (emerged from Conjecture) |
| Location | London, England |
| Primary Focus | Preventing artificial superintelligence (ASI) development through policy advocacy and lawmaker engagement |
| Key Achievement | Briefed 150+ lawmakers; secured support from 100+ UK parliamentarians for binding AI regulation |
| Approach | Policy briefs, public campaigns, grassroots outreach, media engagement |
| Funding | Raising £1M (expected late 2025/early 2026); no major funders disclosed |
Key Links
| Source | Link |
|---|---|
| Official Website | controlai.com |
| Wikipedia | en.wikipedia.org |
Overview
ControlAI is a UK-based organization focused on AI safety and policy advocacy, with the mission to prevent the development of artificial superintelligence (ASI) and ensure humanity retains control over advanced AI systems.1 The organization operates primarily through campaigns, policy proposals, and public engagement rather than technical research, emphasizing the need for democratic control over transformative AI development.
Founded in 2023 as an offshoot of Conjecture, ControlAI has positioned itself as one of the most professionalized AI activist groups, producing high-quality media campaigns and policy briefs targeted at lawmakers and the general public.2 The organization's core tagline is "Fighting to keep humanity in control," and its campaigns have targeted deepfakes, AI scaling policies, and foundation models.3
ControlAI's primary theory of change centers on the "Direct Institutional Plan" (DIP), launched in March 2025, a step-by-step strategy for securing binding regulation on the most powerful AI systems.4 The organization warns that no method currently exists to contain or control systems more intelligent than all of humanity combined, echoing warnings from AI scientists, world leaders, and AI company CEOs about potential human extinction risks.5
History
Founding and Early Development
ControlAI was founded in 2023 by Andrea Miotti, emerging as an offshoot of Conjecture, an AI startup led by Connor Leahy.6 The organization was established in the lead-up to the AI Safety Summit at Bletchley Park, UK, where it made a notable splash by hiring a blimp to fly over the summit as part of its advertising campaigns.7
Gabriel Alfour helped Miotti establish the organization.8 ControlAI operates as a nonprofit "private company limited by guarantee" in the UK and as a 501(c)(4) nonprofit in the US.9
Evolution of Strategy
From its inception through 2024, ControlAI ran several major campaigns:
- October 2023: Successfully campaigned against the AI Safety Summit formally endorsing Responsible Scaling Policies10
- November-December 2023: Opposed exemptions for foundation models in the EU AI Act11
- December 2023 - June 2024: Ran a major campaign against deepfakes12
The organization has evolved from a think tank model to focus on grassroots outreach and direct engagement with policymakers in its effort to prevent ASI development.13
Recent Developments (2025-2026)
In March 2025, ControlAI launched "The Direct Institutional Plan" as its comprehensive strategy for achieving binding regulation on advanced AI systems.14 The UK pilot campaign, running from November 2024 through May 2025, demonstrated significant traction: the organization briefed 84 cross-party UK parliamentarians (roughly 4 in 10 MPs, 3 in 10 Lords, and 2 in 10 from the devolved legislatures), and about one in three of those briefed, over 20 parliamentarians in under three months, publicly supported its campaign for binding regulation.15
In December 2025, a cross-party group of UK parliamentarians publicly called for binding regulation on powerful AI systems, drawing coverage in The Guardian and City A.M.16 ControlAI has said it plans to scale the DIP to the UK executive branch and expand to the US and other countries.17
Leadership and Team
Andrea Miotti, the founder and CEO, serves as the organization's public face, featured in media outlets and podcasts discussing AI extinction risks.18 The organization reportedly has 9 employees.19
Policy Approach and Campaigns
The Direct Institutional Plan (DIP)
ControlAI's flagship initiative is the Direct Institutional Plan, which builds on the three-phase framework (Safety, Stability, Flourishing) of the organization's "A Narrow Path" policy paper and uses computing power as a proxy for AI capabilities.20 The plan advocates for (see the illustrative sketch after this list):
- Bans on superintelligence development: Prohibition of systems more intelligent than all humanity combined
- Dangerous capability restrictions: Preventing automated AI research, advanced hacking capabilities, and recursive self-improvement
- Pre-deployment demonstrations: Requiring developers to prove system safety before release
- AI development licensing: Establishing regulatory frameworks for advanced AI development
- Mandatory kill switches: Requiring emergency shutdown capabilities for advanced systems
- Compute cluster monitoring: Tracking large-scale AI training infrastructure21
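The compute-as-proxy idea behind the licensing and monitoring proposals can be illustrated with a toy calculation. The sketch below is not ControlAI's methodology: it assumes the widely used estimate of training compute as roughly 6 × parameters × training tokens, borrows the EU AI Act's 10^25 FLOP systemic-risk trigger purely as an example threshold, and uses hypothetical run sizes.

```python
# Toy illustration of a compute-based trigger: training compute serves as a
# rough, measurable proxy for capability. All run sizes here are hypothetical.

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute via the common 6 * N * D heuristic."""
    return 6 * params * tokens

# Example threshold: the EU AI Act's systemic-risk trigger, shown for scale.
FLOP_THRESHOLD = 1e25

runs = {
    "mid-size run": training_flop(params=7e9, tokens=2e12),    # ~8.4e22 FLOP
    "frontier run": training_flop(params=1e12, tokens=15e12),  # ~9.0e25 FLOP
}

for name, flop in runs.items():
    status = "would trigger oversight" if flop >= FLOP_THRESHOLD else "below threshold"
    print(f"{name}: {flop:.1e} FLOP -> {status}")
```

A raw FLOP count says nothing about what a system can actually do, which is one reason the plan pairs compute-based triggers with pre-deployment safety demonstrations rather than relying on thresholds alone.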
The DIP is designed as a collaborative framework open to citizens and organizations worldwide, emphasizing independent participation rather than exclusive partnerships.22 ControlAI has developed country-specific policy briefs and offers advice on high-leverage approaches to influential individuals and organizations.23
"A Narrow Path" Policy Framework
The organization's "A Narrow Path" policy paper underwent systematic evaluation through a policy sprint red-teamed by Apart Research in July 2025.24 The sprint evaluated six policies with code released for reproducibility, demonstrating scalable monitoring of capability acquisition via phase transitions and agent dynamics simulations across eight sectors (from enterprise to critical infrastructure).25
Advocacy Tools and Resources
ControlAI has created tools enabling citizens to contact lawmakers, executives, civil service, media, and civil society in their jurisdictions to advocate for superintelligence risk policies.26 The organization reports that these tools have facilitated over 150,000 messages to lawmakers.27
Impact and Achievements
Lawmaker Engagement
ControlAI's most significant achievement has been its success in engaging policymakers on AI extinction risks:
- Briefed 150+ lawmakers on AI extinction risk
- Secured support from 100+ UK parliamentarians for their campaign
- Achieved public endorsement from over 20 cross-party UK parliamentarians (more than 1 in 3 of those briefed)
- Drafted and presented an AI bill to the UK Prime Minister's office28
The organization's cold-email campaign to British MPs and Lords engaged 70 representatives, 31 of whom agreed to publicly oppose ASI development, a high conversion rate that defied initial expectations of resistance to strong extinction-risk messaging.29
Public Opinion and Reach
ControlAI has commissioned multiple YouGov polls demonstrating strong UK public support for AI safety measures:
January 2025 YouGov Poll (UK):
- 73% favor halting rapid superintelligence development
- 74% support empowering the Artificial Intelligence Safety Institute (AISI) as regulator
- 87% support safety regime for AI development
- 76% favor monitoring large compute clusters
- 82% support mandatory AISI testing and company accountability30
Additional Public Engagement:
- 79% support for a UK AI regulator
- 87% want developers to prove safety before release
- 150+ million views on AI risk content
- 150,000+ messages sent to lawmakers via their tools31
Media Presence
ControlAI has achieved significant media coverage, with mentions in:
- The Spectator (January 29, 2025) on DeepSeek stakes
- Newsweek (January 31, 2025) on AI extinction race
- Financial Times (September 12, 2024) on OpenAI bioweapon risks
- The Guardian (December 8, 2025) on parliamentarians calling for regulation
- City A.M. (December 8, 2025) on MPs pushing for stricter AI rules
- The Guardian (January 28, 2025) on former OpenAI researcher warnings
- New York Times (March 14, 2024) on powerful AI preparedness
- The Times (December 6, 2024) on scheming ChatGPT32
Special Projects
ControlAI has launched several targeted projects:
- "Artificial Guarantees" (January 2025): Documenting inconsistencies by AI companies, highlighting shifting statements on risks and broken promises33
- "What leaders say about AI" (September 2024): Compilation of warnings from AI leaders and researchers34
- Rational Animations collaboration: The video "What if AI just keeps getting smarter?" (reportedly 1.4 million views) argues that, unless AI development is slowed or better controlled, current trends point toward superintelligent, recursively self-improving systems that could cause human extinction not out of malice but through indifference35
Criticisms and Controversies
Critique of Coefficient Giving's Approach
ControlAI has positioned itself in opposition to the approach of Open Philanthropy (since renamed Coefficient Giving), arguing that the funder's strategy is "undemocratic" and centralizes control in a small group of "trusted" actors.36 The organization's "Direct Institutional Plan" dedicates over 500 words to criticizing Open Philanthropy as the main funder in AI safety, highlighting:
- Over $80 million provided to establish the Center for Security and Emerging Technology (CSET), which placed fellows in the US Department of Commerce and White House
- Funding for the Horizon Institute supporting placements in US congressional offices and executive agencies
- Grants to OpenAI in exchange for a board seat for Holden Karnofsky
- Acting as "sole arbiter" of trustworthiness in AGI control strategy37
ControlAI argues that this approach of building influence through strategic placements and supporting "responsible actors" building superintelligence (a view associated with figures like Holden Karnofsky and Will MacAskill) is fundamentally flawed compared to its own civic engagement model, which emphasizes democratic processes.38
Industry Relationship Critiques
The organization has been vocal in criticizing frontier AI companies for what it characterizes as systematically undermining alignment research and regulation to race toward AGI. ControlAI argues that companies are driven by "utopian beliefs" in AGI ushering in an ideal world rather than prioritizing safety.39
Specific criticisms include:
- Insufficient investment in alignment: only about $200 million and a handful of researchers, for problems ControlAI argues would require decades of research and trillions in investment
- Companies collaborating to unlock resources like chips and power while ignoring governance
- Shifting baseline tactics and broken promises documented in their "Artificial Guarantees" project
- Racing to ASI despite warnings, downplaying risks even while acknowledging issues like bioweapon misuse40
Skepticism About Plan Feasibility
Within the effective altruism and AI safety communities, ControlAI's approach has received mixed reception:
Positive Views:
- Described as the "most x-risk-focused" 501(c)(4) organization
- Praised for concrete campaigns with tangible results (31 public commitments from MPs/Lords)
- Collaboration with Rational Animations characterized as "really great"41
Criticisms:
- Donors and community members express skepticism that global "pause AI" regulations are feasible due to coordination challenges
- Concerns that detection without enforcement is insufficient—companies could ignore reports
- Debates over impact: videos are effective for awareness but less successful at converting views into actions such as emails or calls to lawmakers
- Tension with EA leadership favoring cautious superintelligence development over outright bans42
Sensationalism Concerns
Some critics have characterized ControlAI as a group that "dramatically warns of AI's purported extinction risk," potentially sensationalizing risks.43 However, CEO Andrea Miotti has responded that critics often nitpick experimental setups but should focus on trends in AI behaviors like self-preservation, resistance to shutdown, and strategic deception.44
Relationship to AI Safety and Alignment
ControlAI operates primarily in the AI policy and advocacy space rather than technical alignment research. The organization's approach is grounded in the assessment that alignment is fundamentally intractable with current resources:
- Solving alignment would require decades of research and trillions in investment to address issues like identifying human values, reconciling contradictions, and predicting side-effects
- Currently only $200 million is invested, mostly in patching issues rather than solving core problems
- Progress is resource-limited rather than insight-limited, leading to opaque, rapidly advancing systems where experts fail to predict new skills or internal workings45
The organization emphasizes that AI control is not just a technical problem but requires institutional rules and democratic governance.46 This positions ControlAI distinctly from technical alignment organizations like Anthropic, OpenAI's alignment teams, or Redwood Research, which focus on developing technical solutions for controlling AI systems.
ControlAI's warnings align with broader concerns in the AI safety community about fundamental challenges in controlling superintelligent systems:
- Self-modifying code and learning from unanticipatable patterns make control potentially inherently insoluble
- Increasing AI capability reduces controllability; self-improving AI may resist goal changes and pursue instrumental goals like resource acquisition
- Verification is extremely difficult due to AI's software nature, enabling hiding of modifications47
Community Reception
Effective Altruism and Rationalist Communities
Discussions on the EA Forum and LessWrong portray ControlAI as strongly x-risk-focused but reveal debates about the feasibility of their approach:
Support:
- Evolution from think tank to concrete grassroots campaigns praised
- High-quality content production acknowledged
- Donor support for their regulation efforts despite skepticism about global enforcement48
Skepticism:
- Questions about whether moderate regulations or alignment research will succeed
- Concerns that weaker systems cannot reliably oversee stronger ones, with no known methods for superintelligent oversight; some EA critics argue that proper alignment would require superhuman capability
- Unresolved issues around stability under reflection and steering stronger systems49
The organization's positioning against prominent EA figures' views on "responsible actors" building superintelligence has created some tension with EA leadership.50
Key Uncertainties
Several important questions remain about ControlAI's approach and impact:
- Scalability of Success: Can the organization replicate its UK success in other countries, particularly the US where regulatory dynamics differ significantly?
- Enforcement Mechanisms: How would proposed bans on superintelligence development be enforced internationally, given coordination challenges and incentive structures?
- Technical Feasibility of Proposals: Are the organization's proposed capability thresholds and monitoring systems technically viable, and can they keep pace with rapid AI progress?
- Relationship to Technical Safety Work: How does ControlAI's advocacy-first approach complement or conflict with technical alignment research efforts?
- Long-term Funding Sustainability: With no major disclosed funders and only £1M expected in fundraising, can the organization sustain operations at the scale needed for global impact?
- Impact on AI Development: Will the organization's campaigns lead to meaningful policy changes, or primarily serve to raise awareness without shifting development trajectories?
- Alternative Approaches: Is preventing superintelligence development the optimal strategy, or should resources focus on alignment research, differential development, or other interventions?
Sources
Footnotes
- Claim reference cr-41ee (data unavailable) ↩
- Claim reference cr-fa86 (data unavailable) ↩
- Claim reference cr-ab9a (data unavailable) ↩
- Claim reference cr-dbe2 (data unavailable) ↩
- Claim reference cr-0a18 (data unavailable) ↩
- Claim reference cr-f239 (data unavailable) ↩
- Claim reference cr-e36a (data unavailable) ↩
- Claim reference cr-b936 (data unavailable) ↩
- Claim reference cr-5fbc (data unavailable) ↩
- Claim reference cr-5767 (data unavailable) ↩
- Claim reference cr-fa1a (data unavailable) ↩
- Claim reference cr-e582 (data unavailable) ↩
- Claim reference cr-6a6a (data unavailable) ↩
- Claim reference cr-b22b (data unavailable) ↩
- Claim reference cr-0533 (data unavailable) ↩
- Claim reference cr-1534 (data unavailable) ↩
- Claim reference cr-838c (data unavailable) ↩
- Claim reference cr-07d4 (data unavailable) ↩
- Claim reference cr-5a30 (data unavailable) ↩
- Claim reference cr-758a (data unavailable) ↩
- Claim reference cr-1f92 (data unavailable) ↩
- Claim reference cr-62ca (data unavailable) ↩
- Claim reference cr-1819 (data unavailable) ↩
- Claim reference cr-4b14 (data unavailable) ↩
- Claim reference cr-740f (data unavailable) ↩
- Claim reference cr-40f4 (data unavailable) ↩
- Claim reference cr-4e9b (data unavailable) ↩
- Claim reference cr-e801 (data unavailable) ↩
- Claim reference cr-5d4e (data unavailable) ↩
- Claim reference cr-958e (data unavailable) ↩
- Claim reference cr-95bb (data unavailable) ↩
- Claim reference cr-47cd (data unavailable) ↩
- ControlAI - Designing the DIP ↩
- Claim reference cr-cdcc (data unavailable) ↩
- Claim reference cr-2036 (data unavailable) ↩
- Claim reference cr-a6a4 (data unavailable) ↩
- Claim reference cr-912e (data unavailable) ↩
- Claim reference cr-4caa (data unavailable) ↩
- ControlAI - Designing the DIP ↩
- Claim reference cr-4f70 (data unavailable) ↩
- Claim reference cr-d7ae (data unavailable) ↩
- Claim reference cr-fce4 (data unavailable) ↩
- Claim reference cr-3168 (data unavailable) ↩
- Claim reference cr-b401 (data unavailable) ↩
- Claim reference cr-38b5 (data unavailable) ↩
- Claim reference cr-aea8 (data unavailable) ↩
- ControlAI - Designing the DIP ↩