Beth Barnes: Founded METR (formerly ARC Evals), pioneering dangerous capability evaluations for frontier AI models; led pre-deployment evaluations of GPT-4 and Claude
Chris Olah: Pioneer of neural network interpretability and visualization; co-founder of Anthropic; creator of Distill.pub and the Circuits thread at Transformer Circuits
David Sacks: White House AI and Crypto Czar under the Trump administration; co-founder of Craft Ventures; PayPal Mafia member and former COO; founded Yammer (sold to Microsoft for $1.2B)
Demis Hassabis: CEO of Google DeepMind; creator of AlphaGo, AlphaFold, and Gemini; Nobel Prize in Chemistry 2024; chess prodigy and game designer
Eliezer Yudkowsky: Founder of MIRI; pioneer of AI alignment as a field; author of 'The Sequences' on rationality and of Harry Potter and the Methods of Rationality; prominent AI doomer
Evan Hubinger: Co-authored Risks from Learned Optimization (2019), introducing mesa-optimization and deceptive alignment; led Sleeper Agents and Alignment Faking research at Anthropic; 3,400+ citations
Gary Marcus: Prominent AI critic skeptical of deep learning as a path to AGI; advocate for hybrid neurosymbolic AI approaches; author of Rebooting AI and Kluge; founder of Geometric Intelligence (acquired by Uber)
Geoffrey Hinton: Godfather of deep learning; pioneer of backpropagation, Boltzmann machines, and deep neural networks; Nobel Prize in Physics 2024; Turing Award 2018
Gwern Branwen: Early advocate of the AI scaling hypothesis, predicting AGI by 2030; maintains gwern.net as a comprehensive research archive on AI, psychology, and statistics; influential in the rationalist and LessWrong communities; over 90,000 Wikipedia edits
Helen Toner: Former OpenAI board member who voted to remove Sam Altman in November 2023; Interim Executive Director of Georgetown CSET; TIME 100 Most Influential People in AI 2024; expertise in US-China AI competition and AI governance
Holden Karnofsky: Co-founder of GiveWell and Open Philanthropy; influential figure in effective altruism; author of the 'Most Important Century' blog series on transformative AI
Ilya Sutskever: Co-founder of OpenAI and SSI; co-inventor of the sequence-to-sequence model and AlexNet; key figure in the deep learning revolution
Issa Rice: Created Timelines Wiki, AI Watch, and Org Watch for the EA and AI safety communities; prolific knowledge infrastructure builder; contract researcher primarily funded by Vipul Naik
Jan Leike: Head of Alignment Science at Anthropic; former co-lead of the OpenAI Superalignment team; prominent advocate for AI safety resource allocation
Leopold Aschenbrenner: Author of Situational Awareness: The Decade Ahead, predicting AGI by 2027; Columbia University valedictorian at age 19; former OpenAI Superalignment team researcher; founder of a $1.5B+ AI-focused hedge fund
Marc Andreessen: Co-created the Mosaic web browser (1993); founded Netscape (sold to AOL for $4.2B); co-founded Andreessen Horowitz, managing $90B+ in venture capital; techno-optimist AI advocate opposing safety regulation
Max Tegmark: Co-founded the Future of Life Institute; developed the 23 Asilomar AI Principles; organized the 2023 AI pause letter with 30,000+ signatories; author of Life 3.0 (NYT bestseller); TIME 100 Most Influential in AI 2023
Nate Soares: Executive Director of MIRI; focuses on mathematical foundations of AI alignment, including logical uncertainty, decision theory, and corrigibility; author of MIRI technical agenda papers
Nick Bostrom: Author of 'Superintelligence'; founder of FHI at Oxford; pioneer of existential risk studies; influential philosopher on AI risk, the simulation argument, and anthropic reasoning
Nuño Sempere: Co-founded the Samotsvety forecasting group (won CSET-Foretell by a ~2x margin); founded Sentinel for global catastrophe early warning; built Metaforecast.org; ranked 2nd all-time on the INFER platform
Paul Christiano: Pioneer of RLHF and AI alignment research; founder of the Alignment Research Center (ARC); key theorist of iterated amplification and eliciting latent knowledge
Robin Hanson: Pioneered prediction markets (since 1988); invented the Logarithmic Market Scoring Rule (LMSR); proposed futarchy governance; originated the Great Filter hypothesis; authored The Age of Em and The Elephant in the Brain; 5,200+ citations
Stuart Russell: Co-author of 'AI: A Modern Approach' (the standard AI textbook); founder of CHAI at UC Berkeley; leading advocate for provably beneficial AI
Vidur Kapur: Superforecaster affiliated with Good Judgment, the Swift Centre, Samotsvety, and RAND; AI policy researcher at ControlAI; key member of the Sentinel early warning system for global catastrophes
Vipul Naik: Created the Donations List Website, tracking $72.8B in philanthropic donations; funded ~$255K in contract research for EA knowledge infrastructure; two-time International Mathematical Olympiad silver medalist; former MIRI researcher
Yann LeCun: Pioneer of convolutional neural networks (CNNs); Chief AI Scientist at Meta; Turing Award 2018; vocal skeptic of AGI existential risk