Nonpartisan law and policy institute at NYU School of Law, founded in 1995. Revenue approximately $58 million (FY 2023-2024) with 180+ staff (attorneys, scholars, communications professionals). Led by Michael Waldman (President and CEO). Maintains a comprehensive AI Legislation Tracker covering all US states. Published "Agenda to Strengthen U.S. Democracy in the Age of AI" with federal and state policy recommendations. Policing & Technology Program investigates AI surveillance technologies including facial recognition, social media monitoring, and predictive policing. Democracy Futures Project focuses on democratic resilience against emerging threats including AI. Primary intervention types: research, litigation, policy advocacy.
Approach
- AI Litigation as Democratic Defense: Using courts to challenge government and corporate AI deployments that threaten democratic governance, civil rights, and privacy. Includes FOIA litigation to compel transparency about government AI...
Analysis
- Electoral Impact Assessment Model: This model estimates AI's marginal electoral impact across three vectors: disinformation influence, infrastructure attacks, and voter suppression. Analysis finds 0.2-5% probability of flipping ind... (Quality: 65/100)
- AI Surveillance and Regime Durability Model: Using historical regime collapse data (military regimes: 9 years, single-party: 30 years) and evidence from 80+ countries adopting surveillance technology, this model estimates AI-enabled authorita... (Quality: 64/100)
- Authoritarian Tools Diffusion Model: This model analyzes how AI surveillance technologies diffuse to authoritarian regimes through commercial sales, development assistance, joint ventures, reverse engineering, and illicit acquisition... (Quality: 62/100)
- Surveillance Chilling Effects Model: Quantifies how AI surveillance reduces freedom of expression through self-censorship mechanisms, estimating 50-70% reduction in dissent within months and 80-95% within 1-2 years in comprehensive su... (Quality: 54/100)
Organizations
- Electronic Frontier Foundation (EFF): Digital rights nonprofit founded in July 1990 by John Gilmore, John Perry Barlow, and Mitch Kapor. Approximately 100 staff (lawyers, activists, technologists). Budget approximately $24 million (202...
- Revolving Door Project: Government accountability project (of the Goodnation Foundation) led by Executive Director Jeff Hauser. Funded by Democracy Fund. Maintains the "Tracking Uses of AI in the Trump Administration" tra...
- AlgorithmWatch: Nonprofit research and advocacy organization focused on algorithmic accountability, founded in 2017. Also has a Zurich office. Co-founded and led by Executive Director Matthias Spielkamp. Conducts ...
- Encode Justice: Youth-led AI accountability organization founded in July 2020 by Sneha Revanur (Founder and President; named to TIME 100 AI list). Approximately 600 student members across 40 countries. Rebranded t...
- Center for AI Safety Action Fund: Policy advocacy arm of the Center for AI Safety, focused on bipartisan engagement with policymakers on AI national security risks. Sister organization to CAIS (San Francisco-based technical safety ...
- Freedom House: Comprehensive overview of Freedom House as a democracy-monitoring NGO, with a thin but present AI-relevance angle focused on digital repression and AI-enabled authoritarianism rather than AI safety...
Risks
- AI Surveillance and US Democratic Erosion: Analysis of how data centralization, oversight dismantlement, and AI capability acquisition by the US government create near-term threats to democratic processes. Documents the Anthropic-Pentagon s... (Quality: 55/100)
- AI-Driven Trust Decline: US government trust declined from 73% (1958) to 17% (2025), with AI deepfakes projected to reach 8M by 2025, accelerating erosion through the "liar's dividend" effect, where synthetic content possibi... (Quality: 55/100)
- AI-Enabled Authoritarian Takeover: Comprehensive analysis documenting how 72% of the global population (5.7 billion) now lives under autocracy, with AI surveillance deployed in 80+ countries, showing 15 consecutive years of declining int... (Quality: 61/100)
- AI Disinformation: Post-2024 analysis shows AI disinformation had limited immediate electoral impact (cheap fakes used 7x more than AI content), but creates concerning long-term epistemic erosion with 82% higher beli... (Quality: 54/100)
- Epistemic Collapse: Epistemic collapse describes the complete erosion of society's ability to establish factual consensus when AI-generated synthetic content overwhelms verification capacity. Current AI detectors achi... (Quality: 49/100)
Concepts
- AI Scaling Laws: Empirical relationships between compute, data, parameters, and AI performance. (Quality: 92/100)
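The scaling-laws entry above can be made concrete with the widely cited compute-optimal ("Chinchilla"-style) functional form. A minimal sketch follows, using the Hoffmann et al. (2022) parametric fit as an illustration; the entry itself does not specify a formula, and the constants are assumptions drawn from that paper, not from this tracker.

```python
def scaling_loss(n_params: float, n_tokens: float,
                 E: float = 1.69, A: float = 406.4, B: float = 410.7,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted pretraining loss as a power law in model size and data.

    loss(N, D) = E + A / N**alpha + B / D**beta
    E is the irreducible loss; the other constants are the published
    Hoffmann et al. (2022) fits, used here purely for illustration.
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling either parameters or tokens drives loss toward the floor E.
small = scaling_loss(1e9, 2e10)     # ~1B params, ~20B tokens
large = scaling_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
assert large < small
```

The additive power-law structure is what makes "compute, data, parameters" trade off against each other: for a fixed compute budget, the two reducible terms are balanced by growing parameters and tokens together.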
Historical
- Anthropic-Pentagon Standoff (2026): Comprehensive analysis of the February 2026 confrontation between Anthropic and the US government. Triggered when Claude AI was used in the January 2026 Venezuela raid via Palantir; Anthropic refus... (Quality: 70/100)