Cross-partisan nonprofit founded in early 2017 to prevent American democracy from declining into a more authoritarian form of government. Led by Ian Bassin (Co-founder and Executive Director). 100+ staff (lawyers, strategists, and other professionals). Created the AI for Democracy Action Lab (AI-DAL), which brings together tech innovators, legal/policy experts, software engineers, and nonprofit leaders to defend against AI threats to democracy and to develop pro-democracy AI applications. Has filed 100+ lawsuits challenging authoritarian actions. In July 2025, sued to block USDA's demand for personal data of millions of Americans, framing it as part of a federal "panopticon" of AI-enabled surveillance. Published "The Authoritarian Playbook for 2025," detailing how authoritarians use technology. Primary intervention types: litigation, legislative advocacy, policy research, technology tools.
Models

- Electoral Impact Assessment Model (Analysis, Quality: 65/100): Estimates AI's marginal electoral impact across three vectors: disinformation influence, infrastructure attacks, and voter suppression. Analysis finds 0.2-5% probability of flipping ind...
- AI Surveillance and Regime Durability Model (Analysis, Quality: 64/100): Using historical regime collapse data (military regimes: 9 years; single-party: 30 years) and evidence from 80+ countries adopting surveillance technology, this model estimates AI-enabled authorita...
- Authoritarian Tools Diffusion Model (Analysis, Quality: 62/100): Analyzes how AI surveillance technologies diffuse to authoritarian regimes through commercial sales, development assistance, joint ventures, reverse engineering, and illicit acquisition....
- Surveillance Chilling Effects Model (Analysis, Quality: 54/100): Quantifies how AI surveillance reduces freedom of expression through self-censorship mechanisms, estimating a 50-70% reduction in dissent within months and 80-95% within 1-2 years in comprehensive su...
Risks
- AI Surveillance and US Democratic Erosion (Risk, Quality: 55/100): Analysis of how data centralization, oversight dismantlement, and AI capability acquisition by the US government create near-term threats to democratic processes. Documents the Anthropic-Pentagon s...
- AI-Driven Trust Decline (Risk, Quality: 55/100): US government trust declined from 73% (1958) to 17% (2025), with AI deepfakes projected to reach 8M by 2025, accelerating erosion through the "liar's dividend" effect, where synthetic content possibi...
- AI-Enabled Authoritarian Takeover (Risk, Quality: 61/100): Comprehensive analysis documenting how 72% of the global population (5.7 billion) now lives under autocracy, with AI surveillance deployed in 80+ countries, showing 15 consecutive years of declining int...
- AI Disinformation (Risk, Quality: 54/100): Post-2024 analysis shows AI disinformation had limited immediate electoral impact (cheap fakes were used 7x more than AI content) but creates concerning long-term epistemic erosion, with 82% higher beli...
- Epistemic Collapse (Risk, Quality: 49/100): Epistemic collapse describes the complete erosion of society's ability to establish factual consensus when AI-generated synthetic content overwhelms verification capacity. Current AI detectors achi...
Organizations
- Revolving Door Project (Organization): Government accountability project (of the Goodnation Foundation) led by Executive Director Jeff Hauser. Funded by Democracy Fund. Maintains the "Tracking Uses of AI in the Trump Administration" tra...
- Encode Justice (Organization): Youth-led AI accountability organization founded in July 2020 by Sneha Revanur (Founder and President; named to the TIME 100 AI list). Approximately 600 student members across 40 countries. Rebranded t...
- Center for AI Safety Action Fund (Organization): Policy advocacy arm of the Center for AI Safety, focused on bipartisan engagement with policymakers on AI national security risks. Sister organization to CAIS (San Francisco-based technical safety ...
- Freedom House (Organization): Comprehensive overview of Freedom House as a democracy-monitoring NGO, with a thin but present AI-relevance angle focused on digital repression and AI-enabled authoritarianism rather than AI safety...
- Electronic Privacy Information Center (EPIC) (Organization): Independent nonprofit research center focused on privacy and civil liberties, founded in 1994. Led by Alan Butler (Executive Director and President since 2020). 97% Charity Navigator score. Runs a...
- Mozilla Foundation (Organization): Nonprofit foundation behind the Firefox browser, with growing AI governance programs. Approximately 20% of expenditure goes to trustworthy, open-source AI. Key AI programs include the Democracy x AI Coh...
Historical
- Anthropic-Pentagon Standoff (2026) (Event, Quality: 70/100): Comprehensive analysis of the February 2026 confrontation between Anthropic and the US government. Triggered when Claude AI was used in the January 2026 Venezuela raid via Palantir; Anthropic refus...