The OpenAI Foundation holds 26% equity (~\$130B) in OpenAI Group PBC with governance control, but detailed analysis of board member incentives reve...
The US AI Safety Institute (AISI), established November 2023 within NIST with \$10M budget (FY2025 request \$82.7M), conducted pre-deployment evalu...
FAR AI is an AI safety research nonprofit founded in July 2022 by Adam Gleave (CEO) and Karl Berzins (Co-founder & President). Based in Berkeley, C...
Leading the Future represents a \$125 million industry effort to prevent AI regulation through political spending, directly opposing AI safety advo...
Organizations advancing forecasting methodology, prediction aggregation, and epistemic infrastructure to improve decision-making on AI safety and e...
FTX was a major crypto exchange that collapsed in November 2022 due to fraud, with its AI safety relevance stemming from FTX Future Fund grants to ...
Goodfire is a well-funded AI interpretability startup valued at \$1.25B (Feb 2026) developing mechanistic interpretability tools like Ember API to ...
Oxford-based organization that coordinates the effective altruism movement, running EA Global conferences, supporting local groups, and maintaining...
Comprehensive reference page on Anthropic covering financials (\$380B valuation, \$14B ARR), safety research (Constitutional AI, mechanistic interp...
METR conducts pre-deployment dangerous capability evaluations for frontier AI labs (OpenAI, Anthropic, Google DeepMind), testing autonomous replica...
Palisade Research is a 2023-founded nonprofit conducting empirical research on AI shutdown resistance and autonomous hacking capabilities, with not...
A nonprofit AI safety and security research organization founded in 2021, known for pioneering AI Control research, developing causal scrubbing int...
Elicit is an AI research assistant with 2M+ users that searches 138M papers and automates literature reviews, founded by AI alignment researchers f...
Berkeley nonprofit founded 2012 teaching applied rationality through workshops (\$3,900 for 4.5 days), trained 1,300+ alumni reporting 9.2/10 satis...
NIST plays a central coordinating role in U.S. AI governance through voluntary standards and risk management frameworks, but faces criticism for te...
Rethink Priorities is a research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across an...
The Centre for Long-Term Resilience is a UK-based think tank that has demonstrated concrete policy influence on AI and biosecurity risks, including...
Comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of ...
Pause AI is a grassroots advocacy movement founded May 2023 calling for international pause on frontier AI development until safety proven, growing...
The Frontier Model Forum represents the AI industry's primary self-governance initiative for frontier AI safety, establishing frameworks and fundin...
An independent Swiss foundation launched in February 2024, spun out of NTI | bio, that develops free open-source tools for DNA synthesis screening ...
The FTX Future Fund was a major longtermist philanthropic initiative that distributed \$132M in grants (including ~\$32M to AI safety) before c...
Bridgewater AIA Labs launched a \$2B AI-driven macro fund in July 2024 that returned 11.9% in 2025, using proprietary ML models plus LLMs from Open...
The Giving Pledge, while attracting 250+ billionaire signatories since 2010, has a disappointing track record with only 36% of deceased pledgers ac...
The biosecurity division of the Nuclear Threat Initiative, NTI | bio works to reduce global catastrophic biological risks through DNA synthesis scr...
The Hewlett Foundation is a \$14.8 billion philanthropic organization that focuses primarily on AI cybersecurity rather than AI alignment or existe...
AI Impacts is a research organization that conducts empirical analysis of AI timelines and risks through surveys and historical trend analysis, con...
Peter Thiel funded MIRI (\$1.6M+) in its early years but has stated he believed they were "building an AGI" rather than doing safety research. He b...
ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers, securing support from 1...
Epoch AI maintains comprehensive databases tracking 3,200+ ML models showing 4.4x annual compute growth and projects data exhaustion 2026-2032. The...
A foundational collection of blog posts on rationality, cognitive biases, and AI alignment that shaped the rationalist movement and influenced effe...
Elite forecasting group Samotsvety dominated INFER competitions 2020-2022 with relative Brier scores twice as good as competitors, providing influe...
Comprehensive profile of the \$9 billion MacArthur Foundation documenting its evolution from 1978 to present, with \$8.27 billion in total grants a...
A biosecurity nonprofit applying the Delay/Detect/Defend framework to protect against catastrophic pandemics, including AI-enabled biological threa...
Analysis of the AI revenue gap. Hyperscalers are spending ~\$700B on AI infrastructure in 2026 while direct AI service revenue is ~\$25-50B—a 6-14x...
Schmidt Futures is a major philanthropic initiative founded by Eric Schmidt that has committed substantial funding to AI safety research (\$135M ac...
Overview and comparison of organizations working on biosecurity and pandemic preparedness relevant to AI-era biological risks. Coefficient Giving (...
The Johns Hopkins Center for Health Security is a well-established biosecurity organization that has significantly influenced US policy on pandemic...
AI Futures Project is a nonprofit co-founded in 2024 by Daniel Kokotajlo, Eli Lifland, and Thomas Larsen that produces detailed AI capability forec...
Comprehensive reference page on Microsoft's AI strategy covering its \$80B+ infrastructure spend, restructured \$135B OpenAI stake (~27% ownership)...
Head-to-head comparison of frontier AI companies on talent, safety culture, agentic AI capability, and 3-10 year financial projections. Key finding...
Manifest is a 2024 forecasting conference that generated significant controversy within EA/rationalist communities due to speaker selection includi...
FutureSearch is an AI forecasting startup founded by former Metaculus leaders that combines LLM research agents with human judgment, demonstrating ...
Apollo Research demonstrated in December 2024 that all six tested frontier models (including o1, Claude 3.5 Sonnet, Gemini 1.5 Pro) engage in schem...
A pandemic preparedness nonprofit originally founded to advocate for COVID-19 human challenge trials, now working on indoor air quality (germicidal...
An EA-funded biosecurity nonprofit founded in 2023 by Jake Swett, dedicated to achieving breakthroughs in pandemic prevention through far-UVC germi...
MATS is a well-documented 12-week fellowship program that has successfully trained 213 AI safety researchers with strong career outcomes (80% in al...
Comprehensive reference page on Giving What We Can covering its history, pledge structure, research approach, and criticisms; notes 10,000+ pledger...
Policy advocacy organization founded ~2022-2023 by Nick Beckstead focusing on legislative requirements for AI safety protocols, whistleblower prote...
A Swiss nonprofit foundation providing free, privacy-preserving DNA synthesis screening software using novel cryptographic protocols. Co-founded by...
CSER is a Cambridge-based existential risk research centre founded in 2012, now funded at ~\$1M+ annually from FLI and other sources, producing 24+...
Comprehensive reference page on ARC (Alignment Research Center), covering its evolution from a dual theory/evals organization to ARC Theory (3 perm...
Seldon Lab is a San Francisco-based AI safety accelerator founded in early 2025 that combines research publication with startup investment, claimin...
Situational Awareness LP is a hedge fund founded by Leopold Aschenbrenner in 2024 that manages ~\$2B in AI-focused public equities (semiconductors,...
SFF distributed \$141M since 2019 (primarily from Jaan Tallinn's ~\$900M fortune), with the 2025 round totaling \$34.33M (86% to AI safety). Uses u...
Lionheart Ventures is a small venture capital firm (\$25M inaugural fund) focused on AI safety and mental health investments, notable for its inves...
Good Judgment Inc. is a commercial forecasting organization that emerged from successful IARPA research, demonstrating that trained 'superforecaste...
Comprehensive profile of FLI documenting \$25M+ in grants distributed (2015: \$7M to 37 projects, 2021: \$25M program), major public campaigns (Asi...
CAIS is a nonprofit research organization founded by Dan Hendrycks that has distributed compute grants to researchers, published technical AI safet...
FRI's XPT tournament found superforecasters gave 9.7% average probability to AI progress outcomes that occurred vs 24.6% from domain experts, sugge...