Longterm Wiki

ControlAI Past Campaigns

web

Credibility Rating

3/5 - Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Control AI

ControlAI is an advocacy organization working on AI risk reduction; this page tracks their past campaigns and may be useful for researchers studying AI safety policy movements or civil society engagement with AI governance.

Metadata

Importance: 30/100 · homepage · reference

Summary

This page documents the historical advocacy campaigns conducted by ControlAI, an organization focused on reducing risks from advanced AI systems. It provides a record of their policy and public awareness initiatives aimed at influencing AI governance and safety measures.

Key Points

  • Catalogs past advocacy and policy campaigns organized by ControlAI on AI safety issues
  • Demonstrates organized civil society efforts to influence AI governance at institutional levels
  • Provides a track record of AI safety activism and public engagement strategies
  • Useful reference for understanding how AI safety concerns have been translated into advocacy action

Cited by 1 page

Page      | Type         | Quality
ControlAI | Organization | 63.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 2 KB
Past Campaigns | ControlAI
 Our Campaigns

 Our current campaign focuses on superintelligence.

There is a simple truth: humanity's extinction is possible. Recent history has also shown us another truth: we can create artificial intelligence (AI) that can rival humanity.

Under our control, such advanced AI presents one of the greatest opportunities for our collective advancement. With the right approach, this technology could be more revolutionary than the creation of the internet and have greater economic impact than the Industrial Revolution. With the wrong approach, it could be more disruptive and dangerous to life on Earth than anything before it, and at its worst could risk our extinction.

Our latest project is "The Direct Institutional Plan"; you can read more about it here.

 View Current Campaign

 Past Campaigns

 MAR 2025 - PRESENT

 The Direct Institutional Plan

 AI companies are racing to build Artificial Superintelligence (ASI) - systems more intelligent than all of humanity combined. If ASI is created in the next few years, humanity risks losing control over its future. Top AI scientists, world leaders, and even AI company CEOs themselves warn it could lead to human extinction.

 Read More

DEC 2023 - JUN 2024

 Campaign against deepfakes

 Deepfakes are a growing threat to society, and governments must act.

 Read More

 NOV - DEC 2023

 Campaign against exemptions for Foundation Models in the EU AI Act

In December 2023, the European Parliament settled on an EU AI Act that placed special regulations on foundation models trained with computational resources beyond a certain threshold.

 Read More

 OCT 2023

 Campaign to prevent an international endorsement of further scaling

 At the AI Safety Summit, we successfully campaigned against the Summit formally giving its approval to Responsible Scaling Policies.

 Read More

 Get Updates

Sign up for our newsletter to stay updated on our work, learn how you can get involved, and receive a weekly roundup of the latest AI news.
"> Socials 

 Blog 

 Past Work 

 Deepfakes Report 

 Artificial Guarantees

 About 

 Careers 

 Contact 

 Privacy Policy
Resource ID: 88974417a76881e1 | Stable ID: sid_IYm9zARxVD