People
Overview
This section profiles key individuals shaping AI safety research, policy, and public discourse. Each profile covers their contributions, positions on key debates, and organizational affiliations.
Featured Researchers
AI Safety Pioneers
- Eliezer Yudkowsky - MIRI founder, early AI risk advocate
- Nick Bostrom - FHI founder, Superintelligence author
- Stuart Russell - UC Berkeley, Human Compatible author
Alignment Researchers
- Paul Christiano - ARC founder, iterated amplification
- Jan Leike - Former OpenAI alignment lead
- Chris Olah - Anthropic, interpretability pioneer
- Neel Nanda - DeepMind, mechanistic interpretability
Lab Leaders
- Dario Amodei - Anthropic CEO
- Daniela Amodei - Anthropic President
- Demis Hassabis - DeepMind CEO
- Ilya Sutskever - Former OpenAI Chief Scientist
Public Voices
- Geoffrey Hinton - “Godfather of AI”, recent safety advocate
- Yoshua Bengio - Turing Award winner, safety advocate
- Connor Leahy - Conjecture CEO, public communicator
Effective Altruism & Policy
- Holden Karnofsky - Coefficient Giving (formerly Open Philanthropy), key funder
- Toby Ord - FHI, The Precipice author
- Dan Hendrycks - CAIS director
Profile Contents
Each profile includes:
- Background - Career history and key contributions
- Positions - Views on key AI safety debates
- Affiliations - Organizations and collaborations
- Key publications - Influential papers and writings