Longterm Wiki

The Center for AI Policy Has Shut Down


Author

T_W

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

Relevant to those tracking the AI safety policy ecosystem; CAIP was one of few organizations explicitly advocating for strong AI safety legislation in the US Congress before its closure.

Metadata

Importance: 45/100
Type: news

Summary

An announcement on the EA Forum reporting that the Center for AI Policy (CAIP), a US-focused AI governance and policy advocacy organization, has ceased operations. The post details the reasons for the shutdown and reflects on the organization's work and impact during its existence.

Key Points

  • The Center for AI Policy (CAIP), a nonprofit focused on US federal AI policy advocacy, has shut down its operations.
  • CAIP was notable for advocating strong AI safety-oriented legislation and engaging directly with Congress on AI governance issues.
  • The shutdown represents a loss of dedicated institutional capacity for AI safety policy advocacy in Washington, D.C.
  • The post is shared on the EA Forum, reflecting the organization's ties to the effective altruism and AI safety communities.
  • Organizational closures in AI governance highlight the challenges of sustaining policy-focused nonprofits in a rapidly shifting funding and political landscape.

Cached Content Preview

HTTP 200 | Fetched Apr 10, 2026 | 58 KB
# The Center for AI Policy Has Shut Down
By T_W
Published: 2025-09-16
### And the need for more AIS advocacy work

Executive Summary
-----------------

[The Center for AI Policy (CAIP)](https://www.centeraipolicy.org/) is no more. CAIP was an advocacy organization that worked to raise policymakers’ awareness of the catastrophic risks from AI and to promote ambitious legislative solutions. Such advocacy is necessary because good governance ideas don’t spread on their own, and to meaningfully reduce AI risk, they must reach the U.S. federal government.

Why did CAIP shut down? The reasons are mixed. Some were internal, such as hiring missteps. But others reflect the broader ecosystem: funders setting the bar for advocacy projects unreasonably high, and structural biases in the funding space that privilege research over advocacy. While CAIP’s mistakes played a role, a full account also needs to reckon with these systemic factors.

I focus on CAIP because I think it filled a particular niche and was impactful, but there are many other advocacy orgs doing great work (see A5), and the core argument is that we need more of that work. Looking forward, impactful advocacy projects will likely continue to compete for a far more limited pool of funds than research efforts. That makes individual support a particularly high-leverage opportunity, and those concerned with AI risk should seriously consider donating to AI safety (AIS) advocacy. The space would also greatly benefit from a CAIP 2.0 (an AIS advocacy organization willing to speak frankly about catastrophic risks), as well as an organization focused on developing advocacy talent.

**Some brief notes:**

*   For those not as interested in the CAIP bit, feel free to jump to the “Funders Have Set the Bar too High” section and read from there.
*   Our executive director Jason has already written extensively about much of this in [his sequence](https://forum.effectivealtruism.org/s/xns7nbQxgTHXew7ZY), which I aim to partially summarize here as I also make my own case for the need for advocacy. My opinions are shared in a personal capacity.
*   My deepest gratitude to all of those who spent time reviewing and chatting through the implications of this piece; it’s truly much better for it.

Why Advocacy?
-------------

Before describing CAIP’s work, I want to briefly lay out the basic case for AIS advocacy (see A1[^gpi1yll8ra] for how I’m defining “advocacy”). This is partly for readers unfamiliar with the space, and partly to ground disagreements in a clearer argument for why advocacy matters.

**Why AI?**

The continued development of AI could pose serious threats to humanity, potentially even existential risks. In response, there are two broad strategies: technical solutions, which aim to make AI models themselves safer, and governance solutions, which aim to shape the behavior of the companies developing those models. Doing work on both seems important.

**Why Congress?**

Governance efforts can focus 

... (truncated, 58 KB total)
Resource ID: d5f988bde6291d69 | Stable ID: sid_JRNhqAw8OO