The Center for AI Policy Has Shut Down
A postmortem on the Center for AI Policy (CAIP), an AI safety advocacy organization that shut down in 2025, analyzing the internal failures and systemic funding biases that disadvantage advocacy over research in the AI safety ecosystem.
Metadata
Importance: 58/100 · blog post · analysis
Summary
This post serves as a postmortem for CAIP, an AI safety advocacy organization that worked to raise policymakers' awareness of catastrophic AI risks and promote legislative solutions. The author argues CAIP's closure reflects both internal hiring missteps and systemic funding biases that privilege research over advocacy. The post calls for more advocacy funding, a CAIP successor organization, and talent development in AI safety advocacy.
Key Points
- CAIP shut down due to a mix of internal issues (hiring missteps) and systemic funding biases that favor research over advocacy in the AI safety ecosystem.
- The AI safety governance space invests approximately 3x more in research than advocacy by FTE count, creating an imbalance that limits policy impact.
- Congress is argued to be the highest-leverage target for AI governance because it can override corporate interests and produce durable policy outcomes.
- Individual donations to AI safety advocacy are highlighted as particularly high-leverage given the limited funding pool compared to research.
- The author calls for a "CAIP 2.0" willing to speak frankly about catastrophic AI risks, and for an organization focused on developing advocacy talent.
Cached Content Preview
HTTP 200 · Fetched Apr 28, 2026 · 51 KB
The Center for AI Policy Has Shut Down
T_W · 17 Sep 2025 11:04 UTC · 95 points · 2 comments · 14 min read · LW link
Postmortems & Retrospectives · AI · AI Governance
And the need for more AIS advocacy work
Executive Summary
The Center for AI Policy (CAIP) is no more. CAIP was an advocacy organization that worked to raise policymakers’ awareness of the catastrophic risks from AI and to promote ambitious legislative solutions. Such advocacy is necessary because good governance ideas don’t spread on their own, and to meaningfully reduce AI risk, they must reach the U.S. federal government.
Why did CAIP shut down? The reasons are mixed. Some were internal, such as hiring missteps. Others reflect the broader ecosystem: funders setting the bar for advocacy projects unreasonably high, and structural biases in the funding space that privilege research over advocacy. While CAIP's mistakes played a role, a full account also needs to reckon with these systemic factors.
I focus on CAIP because I think it filled a particular niche and was impactful, but there are many other advocacy orgs doing great work (see A5), and the core argument is that we need more of that work. Looking forward, impactful advocacy projects will likely continue to compete for a far smaller pool of funds than research efforts. That makes individual support a particularly high-leverage opportunity, and for those concerned with AI risk, I'd seriously consider donating to AI safety (AIS) advocacy. The space would also greatly benefit from a CAIP 2.0 (an AIS advocacy organization willing to speak frankly about catastrophic risks), as well as an organization focused on developing advocacy talent.
Some brief notes:
If you're not as interested in the CAIP bit, feel free to jump to the "Funders Have Set the Bar too High" section and read from there.
Our executive director Jason has already written extensively about much of this in his sequence, which I aim to partially summarize here while also making my own case for the need for advocacy. My opinions are shared in a personal capacity.
My deepest gratitude to all of those who spent time reviewing and chatting through the implications of this piece; it's truly much better for it.
Why Advocacy?
Before describing CAIP’s work, I want to briefly lay out the basic case for AIS advocacy (see A1 [1] for how I’m defining “advocacy”). This is partly for readers unfamiliar with the space, and partly to ground disagreements in a clearer argument for why advocacy matters.
Why AI?
The continued development of AI could pose serious threats to humanity, potentially even existential risks. In response, there are two broad strategies: technical solutions, which aim to make AI models themselves safer, and governance solutions, which aim to shape the behavior of the companies developing those models. Doing work on both seems important.
Why Congress?
Governance efforts can focus on many different actors: state or federal legislatures,
... (truncated, 51 KB total)
Resource ID: 8916cd05612a1b5e | Stable ID: sid_dQuTtreDBw