Notable For
notable-for · 66 facts across 66 entities · biographical
Definition
| Name | Notable For |
| Description | What this person is primarily known for (one-line summary) |
| Data Type | text |
| Unit | — |
| Category | biographical |
| Temporal | No |
| Computed | No |
| Applies To | person |
All Facts (66)
Ajeya Cotra
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Bio Anchors AI timelines report, AI safety grantmaking, intelligence explosion analysis, crunch time framework | — | f_aC2nLw9pYr |
Allan Dafoe
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | VP of AI Policy at Google DeepMind; founder of the Centre for the Governance of AI (GovAI); leading AI governance researcher | — | f_xXooK78TVw |
Andrej Karpathy
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Former Director of AI at Tesla, former OpenAI researcher; founded Eureka Labs; influential AI educator and researcher | — | f_JOFyWsItEQ |
Andrew Ng
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Founder of DeepLearning.AI and Coursera; former head of Google Brain and Baidu AI; leading AI educator | — | f_Rg89IN7s7w |
Beth Barnes
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Founded METR (formerly ARC Evals), pioneering dangerous capability evaluations for frontier AI models; led pre-deployment evaluations of GPT-4 and Claude | — | f_bB6tN2pQ4v |
Buck Shlegeris
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | AI safety research; Redwood Research leadership | — | f_bS2nLw9pYr |
Chris Olah
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Pioneer of neural network interpretability and visualization; co-founder of Anthropic; creator of Distill.pub and the Circuits thread at Transformer Circuits | colah.github.io | f_cO4fG5hI6j |
Connor Leahy
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of Conjecture; co-founder of EleutherAI; prominent AI safety advocate; testified before UK Parliament and EU on AI risks | en.wikipedia.org | f_cL1aB2cD3e |
Dan Hendrycks
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | AI safety research; benchmark creation; CAIS leadership; catastrophic risk focus | — | f_dH2nLw9pYr |
Daniela Amodei
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-founder of Anthropic; operations and business leadership | — | f_dAm2nLw9pY |
Dario Amodei
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO and co-founder of Anthropic; formerly VP of Research at OpenAI; leading proponent of responsible AI scaling | en.wikipedia.org | f_dA4jK5lM6n |
David Krueger
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Cambridge professor researching AI alignment and safety; work on deceptive alignment and goal misgeneralization | — | f_hzaOwUZKsQ |
David Sacks
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | White House AI and Crypto Czar under Trump administration; co-founder of Craft Ventures; PayPal Mafia member and former COO; founded Yammer (sold to Microsoft for $1.2B) | en.wikipedia.org | f_dS7pQ3tV6x |
Demis Hassabis
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of Google DeepMind; creator of AlphaGo, AlphaFold, and Gemini; Nobel Prize in Chemistry 2024; chess prodigy and game designer | en.wikipedia.org | f_dH4fG5hI6j |
Eli Lifland
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Ranked #1 on RAND Forecasting Initiative all-time leaderboard; co-authored AI 2027 scenario forecast; co-leads Samotsvety forecasting team; co-founded AI Futures Project | — | f_eL6tN2pQ1v |
Eliezer Yudkowsky
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Founder of MIRI; pioneer of AI alignment as a field; author of 'The Sequences' on rationality; author of Harry Potter and the Methods of Rationality; prominent AI doomer | en.wikipedia.org | f_eY4fG5hI6j |
Elizabeth Kelly
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Leading the US AI Safety Institute; AI policy | — | f_eK2nLw9pYr |
Elon Musk
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of Tesla and SpaceX; founder of xAI; co-founder of OpenAI; owner of X/Twitter; one of the wealthiest people in history | en.wikipedia.org | f_eM4fG5hI6j |
Evan Hubinger
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-authored Risks from Learned Optimization (2019) introducing mesa-optimization and deceptive alignment; led Sleeper Agents and Alignment Faking research at Anthropic; 3,400+ citations | — | f_eH6tN2pQ5v |
Fei-Fei Li
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Stanford professor; creator of ImageNet; co-director of Stanford Human-Centered AI Institute (HAI) | — | f_QB41VrylJQ |
Gary Marcus
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Prominent AI critic skeptical of deep learning as path to AGI; advocate for hybrid neurosymbolic AI approaches; author of Rebooting AI and Kluge; founder of Geometric Intelligence (acquired by Uber) | en.wikipedia.org | f_gM6tN2pQ8v |
Geoffrey Hinton
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Godfather of deep learning; pioneer of backpropagation, Boltzmann machines, and deep neural networks; Nobel Prize in Physics 2024; Turing Award 2018 | en.wikipedia.org | f_gH4fG5hI6j |
Gwern Branwen
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Early advocate of AI scaling hypothesis predicting AGI by 2030; maintains gwern.net as comprehensive research archive on AI, psychology, and statistics; influential in rationalist and LessWrong communities; over 90,000 Wikipedia edits | gwern.net | f_gw6tN2pQ9v |
Helen Toner
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Former OpenAI board member who voted to remove Sam Altman in November 2023; Interim Executive Director of Georgetown CSET; TIME 100 Most Influential People in AI 2024; expertise in US-China AI competition and AI governance | — | f_hT6tN2pQ4v |
Holden Karnofsky
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-founder of GiveWell and Open Philanthropy; influential figure in effective altruism; author of 'Most Important Century' blog series on transformative AI | en.wikipedia.org | f_hK4fG5hI6j |
Ian Hogarth
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Leading the UK AI Safety Institute; AI investor and writer | — | f_iH2nLw9pYr |
Ilya Sutskever
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-founder of OpenAI and SSI; co-inventor of the sequence-to-sequence model and AlexNet; key figure in the deep learning revolution | en.wikipedia.org | f_iS4fG5hI6j |
Issa Rice
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Created Timelines Wiki, AI Watch, and Org Watch for EA and AI safety communities; prolific knowledge infrastructure builder; contract researcher primarily funded by Vipul Naik | — | f_iR6tN2pQ8v |
Jack Clark
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-founder of Anthropic; former Policy Director at OpenAI; creator of the Import AI newsletter; AI policy advocate | — | f_y6ELYGnAGQ |
Jacob Steinhardt
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | UC Berkeley professor working on AI safety and robustness; leads the Steinhardt Group; runs AI forecasting contests | — | f_r8MSWiNrLQ |
Jan Leike
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | VP of Alignment Science at Anthropic; former co-lead of OpenAI Superalignment team; prominent advocate for AI safety resource allocation | en.wikipedia.org | f_jL4fG5hI6j |
Jared Kaplan
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-founder of Anthropic; Johns Hopkins physics professor; co-author of neural scaling laws research | — | f_clGvV7hzcA |
Jensen Huang
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO and co-founder of NVIDIA; architect of the GPU computing revolution that enabled modern AI | — | f_uWhqL9PqsQ |
Katja Grace
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Founder of AI Impacts; conducts research on AI timelines and forecasting; author of influential AI researcher surveys | — | f_TJsbiNSvsg |
Leopold Aschenbrenner
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Author of Situational Awareness: The Decade Ahead predicting AGI by 2027; Columbia University valedictorian at age 19; former OpenAI Superalignment team researcher; founder of $1.5B+ AI-focused hedge fund | — | f_lA6tN2pQ2v |
Luke Muehlhauser
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Former Executive Director of MIRI; former GiveWell/Open Philanthropy researcher on AI risk | — | f_yFiXtoJMDg |
Marc Andreessen
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-created Mosaic web browser (1993); founded Netscape (sold to AOL for $4.2B); co-founded Andreessen Horowitz managing $90B+ in venture capital; techno-optimist AI advocate opposing safety regulation | en.wikipedia.org | f_mA6tN2pQ7v |
Mark Zuckerberg
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of Meta; leads Meta AI and open-source Llama model development; advocate for open-source AI | — | f_YbXRRr8Bcw |
Max Tegmark
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-founded Future of Life Institute; developed 23 Asilomar AI Principles; organized 2023 AI pause letter with 30,000+ signatories; author of Life 3.0 (NYT bestseller); TIME 100 Most Influential in AI 2023 | — | f_mT6tN2pQ3v |
Mira Murati
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Former CTO of OpenAI; led development of ChatGPT, DALL-E, and GPT-4; now building a new AI company | — | f_f3XKVPdewA |
Mustafa Suleyman
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-founder of DeepMind; CEO of Microsoft AI; founded Inflection AI; advocate for AI safety and governance | — | f_65egajgbIQ |
Nate Soares
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Executive Director of MIRI; focuses on mathematical foundations of AI alignment including logical uncertainty, decision theory, and corrigibility; author of MIRI technical agenda papers | — | f_nS6tN2pQ5v |
Neel Nanda
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Mechanistic interpretability; TransformerLens library; educational content | — | f_nN2nLw9pYr |
Nick Beckstead
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Longtermism, existential risk, FTX Future Fund leadership, Secure AI Project | — | f_nB2nLw9pYr |
Nick Bostrom
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Author of 'Superintelligence'; founder of FHI at Oxford; pioneer of existential risk studies; influential philosopher on AI risk, simulation argument, and anthropic reasoning | en.wikipedia.org | f_nB4fG5hI6j |
Nuño Sempere
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-founded Samotsvety forecasting group (won CSET-Foretell by ~2x margin); founded Sentinel for global catastrophe early warning; built Metaforecast.org; ranked 2nd all-time on INFER platform | — | f_nP6tN2pQ9v |
Paul Christiano
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Pioneer of RLHF and AI alignment research; founder of Alignment Research Center (ARC); key theorist of iterated amplification and eliciting latent knowledge | en.wikipedia.org | f_pC4fG5hI6j |
Philip Tetlock
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Pioneered science of superforecasting; authored Expert Political Judgment (2005) and Superforecasting (2015); co-led Good Judgment Project winning IARPA tournament 2011-2015; identified superforecasters outperforming intelligence analysts by 60-85% | — | f_pT6tN2pQ1v |
Richard Ngo
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | AI governance researcher; formerly at OpenAI and DeepMind; influential writer on AI alignment and x-risk | — | f_b1ZhBzul2g |
Robin Hanson
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Pioneered prediction markets (since 1988); invented Logarithmic Market Scoring Rule (LMSR); proposed futarchy governance; originated Great Filter hypothesis; authored The Age of Em and The Elephant in the Brain; 5,200+ citations | — | f_rH6tN2pQ2v |
Rohin Shah
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Research scientist at Google DeepMind working on AI alignment; creator of the Alignment Newsletter | — | f_MYw0rJFjJQ |
Sam Altman
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of OpenAI; previously president of Y Combinator; central figure in the commercialization of large language models | en.wikipedia.org | f_bN7vQ2xJ4p |
Sam McCandlish
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-founder of Anthropic; co-author of neural scaling laws research; formerly at OpenAI | — | f_ZN4oC3AUWw |
Satya Nadella
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of Microsoft; led Microsoft's multi-billion dollar investment in OpenAI; shaped enterprise AI strategy | — | f_ogkKZjCfQg |
Scott Alexander
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Author of Astral Codex Ten (formerly Slate Star Codex); influential rationalist blogger covering AI risk and effective altruism | — | f_VR9yTjYlmg |
Shane Legg
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-founder of DeepMind; early work on AGI; author of the PhD thesis Machine Super Intelligence | — | f_sL2nLw9pYr |
Stuart Russell
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Co-author of 'AI: A Modern Approach' (the standard AI textbook); founder of CHAI at UC Berkeley; leading advocate for provably beneficial AI | en.wikipedia.org | f_sR4fG5hI6j |
Sundar Pichai
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | CEO of Google and Alphabet; oversees Google DeepMind and Gemini AI development | — | f_F6CMdX41GQ |
Timnit Gebru
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Founder of the DAIR Institute; former co-lead of Google's Ethical AI team; AI ethics and fairness researcher | — | f_3NUOYZSdQw |
Toby Ord
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Author of The Precipice; existential risk quantification; effective altruism | — | f_tO2nLw9pYr |
Victoria Krakovna
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Research scientist at Google DeepMind working on AI safety; co-founder of the Future of Life Institute | — | f_13qgbQWHbA |
Vidur Kapur
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Superforecaster affiliated with Good Judgment, Swift Centre, Samotsvety, and RAND; AI policy researcher at ControlAI; key member of Sentinel early warning system for global catastrophes | — | f_vK6tN2pQ4v |
Vipul Naik
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Created Donations List Website tracking $72.8B in philanthropic donations; funded ~$255K in contract research for EA knowledge infrastructure; two-time International Mathematical Olympiad silver medalist; former MIRI researcher | — | f_vN6tN2pQ6v |
Will MacAskill
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Co-founder of effective altruism movement; longtermism; "What We Owe the Future" (2022); Giving What We Can; 80,000 Hours | — | f_wM2nLw9pYr |
Yann LeCun
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| — | Pioneer of convolutional neural networks (CNNs); Chief AI Scientist at Meta; Turing Award 2018; vocal skeptic of AGI existential risk | en.wikipedia.org | f_yL4fG5hI6j |
Yoshua Bengio
| As Of | Value | Source | Fact ID |
|---|---|---|---|
| Mar 2026 | Deep learning pioneer; now AI safety advocate | — | f_yB2nLw9pYr |
Coverage
| Applies To | person |
| Applicable Entities | 105 |
| Have Current Data | 66 of 105 (63%) |
Missing (39)
Alexandre Kaskasoli, Amanda Askell, Andreas Stuhlmüller, Anthony Aguirre, Avital Balwit, Ben Goldhaber, Benjamin Weinstein-Raun, Caroline Ellison, Dustin Moskovitz, Elizabeth Garrett, Emilia Javorsky, Gavin Newsom, Greg Brockman, Huw Price, Jaan Tallinn, Jacob Hilton, Jaime Sevilla, Joe Biden, Josh Jacobson, Josué Estrada, Julia Wise, Kathleen Finlinson, Ketan Ramakrishnan, Margrethe Vestager, María de la Lama Laviada, Mark Brakel, Mark Nitzberg, Martin Rees, Oliver Sourbut, Oliver Zhang, Richard Mallah, Sam Bankman-Fried, Samuel R. Bowman, Scott Wiener, Seán Ó hÉigeartaigh, Slava Matyukhin, Timothy Telleen-Lawton, Yafah Edelman, Zach Robinson