Longterm Wiki

The Center for AI Policy Has Shut Down

Source type: web

Author: T_W

Credibility Rating: Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

Announcement of the closure of the Center for AI Policy, a U.S.-based nonprofit that advocated for strong AI safety legislation; relevant for tracking the institutional landscape of AI governance efforts.

Forum Post Details

Karma: 95
Comments: 2
Forum: lesswrong
Forum Tags: Postmortems & Retrospectives, AI Governance, AI

Metadata

Importance: 45/100 (news)

Summary

This LessWrong post announces the closure of the Center for AI Policy (CAIP), a Washington D.C.-based organization focused on AI governance and safety policy advocacy. The post discusses the reasons behind the shutdown, both internal missteps and a funding environment that favors research over advocacy, and its implications for the AI safety policy landscape.

Key Points

  • The Center for AI Policy (CAIP), a prominent AI safety-focused policy organization, has ceased operations.
  • CAIP was known for advocating strong AI safety legislation and lobbying in Washington D.C.
  • The shutdown represents a loss of dedicated AI safety policy advocacy capacity in the U.S. political sphere.
  • The closure may reflect broader challenges facing AI governance organizations in securing funding or political traction.
  • This development has implications for the AI safety community's presence and influence in federal policymaking.

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 6, 2026 · 59 KB
# The Center for AI Policy Has Shut Down
By T_W
Published: 2025-09-17
And the need for more AIS advocacy work
---------------------------------------

Executive Summary
-----------------

[The Center for AI Policy (CAIP)](https://www.centeraipolicy.org/) is no more. CAIP was an advocacy organization that worked to raise policymakers’ awareness of the catastrophic risks from AI and to promote ambitious legislative solutions. Such advocacy is necessary because good governance ideas don’t spread on their own, and to meaningfully reduce AI risk, they must reach the U.S. federal government.

Why did CAIP shut down? The reasons are mixed. Some were internal, such as hiring missteps. But others reflect the broader ecosystem: funders setting the bar for advocacy projects at an unreasonably high level, and structural biases in the funding space that privilege research over advocacy. While CAIP’s mistakes played a role, a full account also needs to reckon with these systemic factors.

I focus on CAIP, as I think it filled a particular niche and was impactful, but there are many other advocacy orgs doing great work (see A5), and the core argument is that we need more of that work. Looking forward, impactful advocacy projects will likely continue to compete for a far more limited pool of funds than research efforts. That makes individual support a particularly high-leverage opportunity, and for those concerned with AI risk, I’d seriously consider donating to AI safety (AIS) advocacy. The space would also greatly benefit from a CAIP 2.0 (an AIS advocacy organization willing to speak frankly about catastrophic risks) as well as an organization focused on developing advocacy talent.

**Some brief notes:**

*   For those not as interested in the CAIP bit, feel free to jump to the “Funders Have Set the Bar too High” section and read from there.
*   Our executive director Jason has already written extensively about much of this in [his sequence](https://forum.effectivealtruism.org/s/xns7nbQxgTHXew7ZY), which I aim to partially summarize here as I also make my own case for the need for advocacy. My opinions are shared in a personal capacity.
*   My deepest gratitude to all of those who spent time reviewing and chatting through the implications of this piece; it’s truly much better for it.

Why Advocacy?
-------------

Before describing CAIP’s work, I want to briefly lay out the basic case for AIS advocacy (see A1[^bk10zbv2e8g] for how I’m defining “advocacy”). This is partly for readers unfamiliar with the space, and partly to ground disagreements in a clearer argument for why advocacy matters.

**Why AI?**

The continued development of AI could pose serious threats to humanity, potentially even existential risks. In response, there are two broad strategies: technical solutions, which aim to make AI models themselves safer, and governance solutions, which aim to shape the behavior of the companies developing those models. Doing work on both seems important.

**Why Congr

... (truncated, 59 KB total)
Resource ID: 4802fca2e07398db | Stable ID: sid_yq5wQX3f7i