Longterm Wiki
Updated 2026-03-13

People

Overview

This section profiles key individuals shaping AI safety research, policy, and public discourse. Profiles include their contributions, positions on key debates, and organizational affiliations.

AI Safety Pioneers

  • Eliezer Yudkowsky - MIRI founder, early AI risk advocate
  • Nick Bostrom - FHI founder, Superintelligence author
  • Stuart Russell - UC Berkeley, Human Compatible author

Alignment Researchers

  • Paul Christiano - ARC founder, iterated amplification
  • Jan Leike - Former OpenAI alignment lead
  • Chris Olah - Anthropic, interpretability pioneer
  • Neel Nanda - DeepMind, mechanistic interpretability

Lab Leaders

  • Dario Amodei - Anthropic CEO
  • Daniela Amodei - Anthropic President
  • Demis Hassabis - DeepMind CEO
  • Ilya Sutskever - Former OpenAI Chief Scientist

Public Voices

  • Geoffrey Hinton - "Godfather of AI", recent safety advocate
  • Yoshua Bengio - Turing Award winner, safety advocate
  • Connor Leahy - Conjecture CEO, public communicator

Effective Altruism & Policy

  • Holden Karnofsky - Coefficient Giving (formerly Open Philanthropy), key funder
  • Toby Ord - FHI, The Precipice author
  • Dan Hendrycks - CAIS director

Profile Contents

Each profile includes:

  • Background - Career history and key contributions
  • Positions - Views on key AI safety debates
  • Affiliations - Organizations and collaborations
  • Key publications - Influential papers and writings

Related Pages

Organizations

  • Center for AI Safety
  • Conjecture

Other

  • Demis Hassabis
  • Geoffrey Hinton
  • Dario Amodei
  • Toby Ord
  • Eliezer Yudkowsky
  • Holden Karnofsky