Longterm Wiki

AI Safety Policy Lobbying

Direct engagement with legislators and executive branch officials to shape AI regulation and safety policy. The field spans a spectrum from EA-aligned catastrophic risk prevention to broader AI accountability advocacy.

Key organizations operate in two clusters. The AI safety / longtermist cluster includes Americans for Responsible Innovation (bipartisan congressional lobbying, EA-funded), AI Policy Institute (public opinion polling plus advocacy), Center for AI Policy (shut down May 2025; $484K total lobbying spend), CAIS Action Fund (translating technical safety research into policy), and IAPS (fellowship pipeline). The broader AI accountability cluster includes Access Now (led a 110+ organization EU AI Act coalition), Encode Justice (youth-led advocacy to Congress and OSTP), and FLI (Pro-Human AI Declaration backed by a 150+ organization coalition; EU AI Act advocacy).

Lobbying effectiveness varies by approach: direct congressional engagement (ARI, CAIP) is high-touch but expensive per contact; coalition lobbying (Access Now, FLI) scales better internationally; public opinion campaigning (AIPI) builds political cover for legislators. The collapse of CAIP due to funding shortfalls suggests the field is underfunded relative to industry lobbying.

Verified lobbying data (Senate LDA filings, 2025): the six pro-safety organizations with federal filings spent roughly $3.4M combined, less than Meta spends in a single quarter ($26.3M total in 2025). Of the 774 organizations that lobbied on AI in 2025, 82% represented corporate interests. ARI is the largest pro-safety AI lobbying spender at $2.1M/year.

Related Wiki Pages

Approaches

AI-Human Hybrid Systems

Organizations

Institute for AI Policy and Strategy
AI Policy Institute
Access Now
Leading the Future super PAC
US AI Safety Institute

Sources

Tags

lobbying, policy-advocacy, ai-safety, intervention-type