Longterm Wiki

ControlAI About Page


Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Control AI

Control AI is an AI safety advocacy organization; this about page provides context on their mission and team, useful for understanding the broader ecosystem of organizations working on AI governance and safety.

Metadata

Importance: 35/100 · homepage

Summary

Control AI is an organization focused on AI safety and governance, working to ensure that advanced AI systems are developed safely and remain under meaningful human control. The about page outlines the organization's mission, team, and approach to addressing risks from advanced AI.

Key Points

  • Control AI focuses on ensuring advanced AI development is safe and that humans maintain meaningful oversight and control over AI systems
  • The organization likely engages in policy advocacy, research, or public education around AI risk and governance
  • Represents part of the broader civil society ecosystem working on AI safety alongside technical research organizations
  • Mission is oriented toward preventing catastrophic or existential risks from advanced AI systems

Cited by 1 page

Page       Type          Quality
ControlAI  Organization  63.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
About Us | ControlAI 
 About Us

 ControlAI is a non-profit organisation that works to prevent the extinction risk posed by superintelligence and to secure a great future for humanity.

 We have briefed over 150 cross-party UK parliamentarians and the Prime Minister's office since November 2024, with our superintelligence campaign gaining support from more than 100 UK lawmakers. Our work has been covered in media outlets including the Guardian and Time. ControlAI is also active in the United States, Canada, and Germany, where we have briefed dozens of lawmakers, given expert testimony in hearings, and more.

 In the UK, we are a not-for-profit company limited by guarantee. In the US, we are a 501(c)(4) social welfare organization.

 Learn About Our Work

 Our focus

 Nobel Prize winners, top AI experts and even the CEOs of major AI companies have warned that superintelligence poses an extinction risk for humanity. Yet, most decision-makers and most of the public are still in the dark about these risks.

 ControlAI exists to change this. We help hundreds of thousands of people understand where AI is going, and take civic action to secure their future. We do this across all democratic institutions, from lawmakers to the media, civil society, and the public. We operate like a startup: clear objectives, measurable goals, and real-world results over long-term research.

 Learn more about our current work here.

 Our Leadership

 Andrea Miotti 

 Founder & CEO

 Andrea Miotti is the founder and CEO of ControlAI, a non-profit dedicated to mitigating the risks from powerful AI systems. ControlAI calls for prohibiting the development of superintelligent AI, as AI experts assess it poses an extinction risk. The organization’s UK parliamentary campaign is supported by over 100 lawmakers. Andrea’s expert commentary and op-eds have appeared in outlets including TIME, The Guardian, Nature, BBC, Sky News, and more.

 Team

 Mathias Bonde 

 Head of Advocacy

 Sophie Toura 

 Operations & Outreach Manager

 Leticia García Martínez 

 UK Parliamentary Engagement Lead

 Max Salmon 

 Campaigns Strategist

 Max Hernandez-Zapata 

 Policy Advisor, US

 Max Winga 

 Policy Analyst

 Grace Gonzales 

 Media Engagement Lead

 Adam Shimi 

 Policy Researcher

 Tolga Bilge 

 Policy Researcher

 Benjamin Balde 

 Consulting Program Officer (Germany)

 Mayank Adlakha 

 Policy Advisor, UK

 Our Advisors

 Connor Leahy 

 Advisor to ControlAI, CEO at Conjecture, EleutherAI Founder

 Connor Leahy is the CEO of AI start-up Conjecture, working on solving the problems of AI control. He is a leading voice on AI risk mitigation, was one of the participants of the first ever AI Safety Summit, and is a frequent commentator on CNN, BBC and more. Connor previously headed and founded EleutherAI, an online organization for AI research that pioneered open source LLMs for research, 

... (truncated, 4 KB total)
Resource ID: 88a244a30b7e1aef | Stable ID: sid_QPQUVwTr4b