The OpenAI Foundation holds 26% equity (~$130B) in OpenAI Group PBC with governance control, but detailed analysis of board member incentives revea...
The US AI Safety Institute (AISI), established in November 2023 within NIST with a $10M budget (FY2025 request: $82.7M), conducted pre-deployment evaluat...
FAR AI is an AI safety research nonprofit founded in July 2022 by Adam Gleave (CEO) and Karl Berzins (Co-founder & President). Based in Berkeley, C...
Leading the Future represents a $125 million industry effort to prevent AI regulation through political spending, directly opposing AI safety advoc...
Organizations advancing forecasting methodology, prediction aggregation, and epistemic infrastructure to improve decision-making on AI safety and e...
FTX was a major crypto exchange that collapsed in November 2022 due to fraud, with its AI safety relevance stemming from FTX Future Fund grants to ...
Goodfire is a well-funded AI interpretability startup valued at $1.25B (Feb 2026) developing mechanistic interpretability tools like Ember API to m...
Comprehensive reference page on Anthropic covering financials ($380B valuation, $14B ARR at Series G growing to $19B by March 2026), safety researc...
Oxford-based organization that coordinates the effective altruism movement, running EA Global conferences, supporting local groups, and maintaining...
Palisade Research is a 2023-founded nonprofit conducting empirical research on AI shutdown resistance and autonomous hacking capabilities, with not...
METR conducts pre-deployment dangerous capability evaluations for frontier AI labs (OpenAI, Anthropic, Google DeepMind), testing autonomous replica...
A nonprofit AI safety and security research organization founded in 2021, known for pioneering AI Control research, developing causal scrubbing int...
Elicit is an AI research assistant with 2M+ users that searches 138M papers and automates literature reviews, founded by AI alignment researchers f...
Berkeley nonprofit founded in 2012 that teaches applied rationality through workshops ($3,900 for 4.5 days) and has trained 1,300+ alumni reporting 9.2/10 satisf...
Comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of ...
NIST plays a central coordinating role in U.S. AI governance through voluntary standards and risk management frameworks, but faces criticism for te...
Rethink Priorities is a research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across an...
The Centre for Long-Term Resilience is a UK-based think tank that has demonstrated concrete policy influence on AI and biosecurity risks, including...
Pause AI is a grassroots advocacy movement founded in May 2023 calling for an international pause on frontier AI development until safety is proven, growing...
The FTX Future Fund was a major longtermist philanthropic initiative that distributed $132M in grants (including ~$32M to AI safety) before c...
The Frontier Model Forum represents the AI industry's primary self-governance initiative for frontier AI safety, establishing frameworks and fundin...
An independent Swiss foundation launched in February 2024, spun out of NTI | bio.
Bridgewater AIA Labs launched a $2B AI-driven macro fund in July 2024 that returned 11.9% in 2025, using proprietary ML models plus LLMs from OpenA...
The Giving Pledge, while attracting 250+ billionaire signatories since 2010, has a disappointing track record with only 36% of deceased pledgers ac...
The biosecurity division of the Nuclear Threat Initiative, NTI | bio works to reduce global catastrophic biological risks through DNA synthesis scr...
The Hewlett Foundation is a $14.8 billion philanthropic organization that focuses primarily on AI cybersecurity rather than AI alignment or existen...
AI Impacts is a research organization that conducts empirical analysis of AI timelines and risks through surveys and historical trend analysis, con...
Peter Thiel funded MIRI ($1.6M+) in its early years but has stated he believed they were "building an AGI" rather than doing safety research. He be...
ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers, securing support from 1...
Analysis of the AI revenue gap. Hyperscalers are spending ~$700B on AI infrastructure in 2026 while direct AI service revenue is ~$25-50B—a 6-14x m...
Epoch AI maintains comprehensive databases tracking 3,200+ ML models showing 4.4x annual compute growth and projects data exhaustion 2026-2032. The...
A foundational collection of blog posts on rationality, cognitive biases, and AI alignment that shaped the rationalist movement and influenced effe...
Elite forecasting group Samotsvety dominated INFER competitions 2020-2022 with relative Brier scores twice as good as competitors, providing influe...
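The "relative Brier score" comparison above can be made concrete. For binary questions, the Brier score is the mean squared error between probability forecasts and realized outcomes (lower is better; a constant 50% forecaster scores 0.25), and a relative score compares a forecaster's Brier score against the comparison group on the same questions. A minimal sketch with illustrative numbers only (not Samotsvety's actual forecasts):

```python
# Brier score for binary forecasts: mean of (forecast_probability - outcome)^2.
# Lower is better; a constant 50% forecaster scores exactly 0.25.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical numbers: a sharper forecaster vs. a hedged crowd
# on the same three resolved questions (outcomes: yes, no, yes).
crowd = brier([0.6, 0.4, 0.7], [1, 0, 1])  # ~0.137
sharp = brier([0.8, 0.2, 0.9], [1, 0, 1])  # 0.03

# "Twice as good" in relative terms means roughly half the
# comparison group's Brier score on the shared question set.
print(sharp / crowd)
```

The example shows why small probability differences compound: because the penalty is quadratic, a forecaster who is consistently both more confident and correct can halve (or better) the crowd's score.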
Comprehensive profile of the $9 billion MacArthur Foundation documenting its evolution from 1978 to present, with $8.27 billion in total grants acr...
A biosecurity nonprofit applying the Delay/Detect/Defend framework to protect against catastrophic pandemics, including AI-enabled biological threats.
Schmidt Futures is a major philanthropic initiative founded by Eric Schmidt that has committed substantial funding to AI safety research ($135M acr...
Overview and comparison of organizations working on biosecurity and pandemic preparedness relevant to AI-era biological risks. Coefficient Giving (...
The Johns Hopkins Center for Health Security is a well-established biosecurity organization that has significantly influenced US policy on pandemic...
AI Futures Project is a nonprofit co-founded in 2024 by Daniel Kokotajlo, Eli Lifland, and Thomas Larsen that produces detailed AI capability forec...
Comprehensive reference page on Microsoft's AI strategy covering its $80B+ infrastructure spend, restructured $135B OpenAI stake (~27% ownership), ...
Head-to-head comparison of frontier AI companies on talent, safety culture, agentic AI capability, and 3-10 year financial projections. Key finding...
Comprehensive reference page on Giving What We Can covering its history, pledge structure, research approach, and criticisms; notes 10,000+ pledger...
Manifest is a 2024 forecasting conference that generated significant controversy within EA/rationalist communities due to speaker selection includi...
FutureSearch is an AI forecasting startup founded by former Metaculus leaders that combines LLM research agents with human judgment, demonstrating ...
A pandemic preparedness nonprofit originally founded to advocate for COVID-19 human challenge trials, now working on indoor air quality (germicidal...
Comprehensive reference page on ARC (Alignment Research Center), covering its evolution from a dual theory/evals organization to ARC Theory (3 perm...
An EA-funded biosecurity nonprofit founded in 2023 by Jake Swett, dedicated to achieving breakthroughs in pandemic prevention through far-UVC germi...
Apollo Research demonstrated in December 2024 that five of six tested frontier models (including o1, Claude 3.5 Sonnet, Gemini 1.5 Pro) engage in schem...
MATS is a well-documented 12-week fellowship program that has successfully trained 213 AI safety researchers with strong career outcomes (80% in al...
Policy advocacy organization founded ~2022-2023 by Nick Beckstead focusing on legislative requirements for AI safety protocols, whistleblower prote...
A Swiss nonprofit foundation providing free, privacy-preserving DNA synthesis screening software using novel cryptographic protocols.
Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). Th...
CSER is a Cambridge-based existential risk research centre founded in 2012, now funded at ~$1M+ annually from FLI and other sources, producing 24+ ...
Situational Awareness LP is a hedge fund founded by Leopold Aschenbrenner in 2024 that manages ~$2B in AI-focused public equities (semiconductors, ...
Seldon Lab is a San Francisco-based AI safety accelerator founded in early 2025 that combines research publication with startup investment, claimin...
Lionheart Ventures is a small venture capital firm ($25M inaugural fund) focused on AI safety and mental health investments, notable for its invest...
Good Judgment Inc. is a commercial forecasting organization that emerged from successful IARPA research, demonstrating that trained 'superforecaste...
SFF distributed $141M since 2019 (primarily from Jaan Tallinn's ~$900M fortune), with the 2025 round totaling $34.33M (86% to AI safety). Uses uniq...
CAIS is a nonprofit research organization founded by Dan Hendrycks that has distributed compute grants to researchers, published technical AI safet...
Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects, 2021: $25M program), major public campaigns (Asilom...