Longterm Wiki

Notable For

notable-for · 44 facts across 44 entities · biographical

Definition

Name: Notable For
Description: What this person is primarily known for (one-line summary)
Data Type: text
Unit: (none)
Category: biographical
Temporal: No
Computed: No
Applies To: person
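
The definition above can be sketched as structured data. This is a hypothetical representation: the field names, the `fact` record shape, and the `None`-for-empty convention are assumptions inferred from the Definition table and the fact listing below, not the wiki's actual schema.

```python
# Hypothetical sketch of the "Notable For" property definition.
# Field names mirror the Definition table; types are assumptions.
notable_for = {
    "key": "notable-for",
    "name": "Notable For",
    "description": "What this person is primarily known for (one-line summary)",
    "data_type": "text",
    "unit": None,            # free-text properties carry no unit
    "category": "biographical",
    "temporal": False,       # a single current value, not a time series
    "computed": False,       # entered by editors, not derived
    "applies_to": "person",
}

# A fact recorded under this property pairs an entity with a value,
# an as-of date, an optional source, and a stable fact ID.
fact = {
    "entity": "Ajeya Cotra",
    "value": ("Bio Anchors AI timelines report, AI safety grantmaking, "
              "intelligence explosion analysis, crunch time framework"),
    "as_of": "Mar 2026",
    "source": None,          # some facts cite a source domain, e.g. en.wikipedia.org
    "fact_id": "f_aC2nLw9pYr",
}
```

Under this sketch, each line in the fact listing below corresponds to one such `fact` record.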

All Facts (44)

Ajeya Cotra: Bio Anchors AI timelines report, AI safety grantmaking, intelligence explosion analysis, crunch time framework (as of Mar 2026; Fact ID f_aC2nLw9pYr)
Beth Barnes: Founded METR (formerly ARC Evals), pioneering dangerous capability evaluations for frontier AI models; led pre-deployment evaluations of GPT-4 and Claude (as of Mar 2026; Fact ID f_bB6tN2pQ4v)
Buck Shlegeris: AI safety research; Redwood Research leadership (as of Mar 2026; Fact ID f_bS2nLw9pYr)
Chris Olah: Pioneer of neural network interpretability and visualization; co-founder of Anthropic; creator of Distill.pub and the Circuits thread at Transformer Circuits (source: colah.github.io; Fact ID f_cO4fG5hI6j)
Connor Leahy: CEO of Conjecture; co-founder of EleutherAI; prominent AI safety advocate; testified before UK Parliament and EU on AI risks (source: en.wikipedia.org; Fact ID f_cL1aB2cD3e)
Dan Hendrycks: AI safety research; benchmark creation; CAIS leadership; catastrophic risk focus (as of Mar 2026; Fact ID f_dH2nLw9pYr)
Daniela Amodei: Co-founder of Anthropic; operations and business leadership (as of Mar 2026; Fact ID f_dAm2nLw9pY)
Dario Amodei: CEO and co-founder of Anthropic; formerly VP of Research at OpenAI; leading proponent of responsible AI scaling (source: en.wikipedia.org; Fact ID f_dA4jK5lM6n)
David Sacks (White House AI Czar): White House AI and Crypto Czar under Trump administration; co-founder of Craft Ventures; PayPal Mafia member and former COO; founded Yammer (sold to Microsoft for $1.2B) (as of Mar 2026; source: en.wikipedia.org; Fact ID f_dS7pQ3tV6x)
Demis Hassabis: CEO of Google DeepMind; creator of AlphaGo, AlphaFold, and Gemini; Nobel Prize in Chemistry 2024; chess prodigy and game designer (source: en.wikipedia.org; Fact ID f_dH4fG5hI6j)
Eli Lifland: Ranked #1 on RAND Forecasting Initiative all-time leaderboard; co-authored AI 2027 scenario forecast; co-leads Samotsvety forecasting team; co-founded AI Futures Project (as of Mar 2026; Fact ID f_eL6tN2pQ1v)
Eliezer Yudkowsky: Founder of MIRI; pioneer of AI alignment as a field; author of 'The Sequences' on rationality; author of Harry Potter and the Methods of Rationality; prominent AI doomer (source: en.wikipedia.org; Fact ID f_eY4fG5hI6j)
Elizabeth Kelly (US AISI Director): Leading US AI Safety Institute; AI policy (as of Mar 2026; Fact ID f_eK2nLw9pYr)
Elon Musk: CEO of Tesla and SpaceX; founder of xAI; co-founder of OpenAI; owner of X/Twitter; one of the wealthiest people in history (source: en.wikipedia.org; Fact ID f_eM4fG5hI6j)
Evan Hubinger: Co-authored Risks from Learned Optimization (2019) introducing mesa-optimization and deceptive alignment; led Sleeper Agents and Alignment Faking research at Anthropic; 3,400+ citations (as of Mar 2026; Fact ID f_eH6tN2pQ5v)
Gary Marcus: Prominent AI critic skeptical of deep learning as path to AGI; advocate for hybrid neurosymbolic AI approaches; author of Rebooting AI and Kluge; founder of Geometric Intelligence (acquired by Uber) (as of Mar 2026; source: en.wikipedia.org; Fact ID f_gM6tN2pQ8v)
Geoffrey Hinton: Godfather of deep learning; pioneer of backpropagation, Boltzmann machines, and deep neural networks; Nobel Prize in Physics 2024; Turing Award 2018 (source: en.wikipedia.org; Fact ID f_gH4fG5hI6j)
Gwern Branwen: Early advocate of AI scaling hypothesis predicting AGI by 2030; maintains gwern.net as comprehensive research archive on AI, psychology, and statistics; influential in rationalist and LessWrong communities; over 90,000 Wikipedia edits (as of Mar 2026; source: gwern.net; Fact ID f_gw6tN2pQ9v)
Helen Toner: Former OpenAI board member who voted to remove Sam Altman in November 2023; Interim Executive Director of Georgetown CSET; TIME 100 Most Influential People in AI 2024; expertise in US-China AI competition and AI governance (as of Mar 2026; Fact ID f_hT6tN2pQ4v)
Holden Karnofsky: Co-founder of GiveWell and Open Philanthropy; influential figure in effective altruism; author of 'Most Important Century' blog series on transformative AI (source: en.wikipedia.org; Fact ID f_hK4fG5hI6j)
Ian Hogarth: Leading UK AI Safety Institute; AI investor and writer (as of Mar 2026; Fact ID f_iH2nLw9pYr)
Ilya Sutskever: Co-founder of OpenAI and SSI; co-inventor of the sequence-to-sequence model and AlexNet; key figure in the deep learning revolution (source: en.wikipedia.org; Fact ID f_iS4fG5hI6j)
Issa Rice: Created Timelines Wiki, AI Watch, and Org Watch for EA and AI safety communities; prolific knowledge infrastructure builder; contract researcher primarily funded by Vipul Naik (as of Mar 2026; Fact ID f_iR6tN2pQ8v)
Jan Leike: Head of Alignment Science at Anthropic; former co-lead of OpenAI Superalignment team; prominent advocate for AI safety resource allocation (source: en.wikipedia.org; Fact ID f_jL4fG5hI6j)
Leopold Aschenbrenner: Author of Situational Awareness: The Decade Ahead predicting AGI by 2027; Columbia University valedictorian at age 19; former OpenAI Superalignment team researcher; founder of $1.5B+ AI-focused hedge fund (as of Mar 2026; Fact ID f_lA6tN2pQ2v)
Marc Andreessen (AI Investor): Co-created Mosaic web browser (1993); founded Netscape (sold to AOL for $4.2B); co-founded Andreessen Horowitz managing $90B+ in venture capital; techno-optimist AI advocate opposing safety regulation (as of Mar 2026; source: en.wikipedia.org; Fact ID f_mA6tN2pQ7v)
Max Tegmark: Co-founded Future of Life Institute; developed 23 Asilomar AI Principles; organized 2023 AI pause letter with 30,000+ signatories; author of Life 3.0 (NYT bestseller); TIME 100 Most Influential in AI 2023 (as of Mar 2026; Fact ID f_mT6tN2pQ3v)
Nate Soares (MIRI): Executive Director of MIRI; focuses on mathematical foundations of AI alignment including logical uncertainty, decision theory, and corrigibility; author of MIRI technical agenda papers (as of Mar 2026; Fact ID f_nS6tN2pQ5v)
Neel Nanda: Mechanistic interpretability; TransformerLens library; educational content (as of Mar 2026; Fact ID f_nN2nLw9pYr)
Nick Beckstead: Longtermism, existential risk, FTX Future Fund leadership, Secure AI Project (as of Mar 2026; Fact ID f_nB2nLw9pYr)
Nick Bostrom: Author of 'Superintelligence'; founder of FHI at Oxford; pioneer of existential risk studies; influential philosopher on AI risk, simulation argument, and anthropic reasoning (source: en.wikipedia.org; Fact ID f_nB4fG5hI6j)
Nuño Sempere: Co-founded Samotsvety forecasting group (won CSET-Foretell by ~2x margin); founded Sentinel for global catastrophe early warning; built Metaforecast.org; ranked 2nd all-time on INFER platform (as of Mar 2026; Fact ID f_nP6tN2pQ9v)
Paul Christiano: Pioneer of RLHF and AI alignment research; founder of Alignment Research Center (ARC); key theorist of iterated amplification and eliciting latent knowledge (source: en.wikipedia.org; Fact ID f_pC4fG5hI6j)
Philip Tetlock (Forecasting Pioneer): Pioneered science of superforecasting; authored Expert Political Judgment (2005) and Superforecasting (2015); co-led Good Judgment Project winning IARPA tournament 2011-2015; identified superforecasters outperforming intelligence analysts by 60-85% (as of Mar 2026; Fact ID f_pT6tN2pQ1v)
Robin Hanson: Pioneered prediction markets (since 1988); invented Logarithmic Market Scoring Rule (LMSR); proposed futarchy governance; originated Great Filter hypothesis; authored The Age of Em and The Elephant in the Brain; 5,200+ citations (as of Mar 2026; Fact ID f_rH6tN2pQ2v)
Sam Altman: CEO of OpenAI; previously president of Y Combinator; central figure in the commercialization of large language models (source: en.wikipedia.org; Fact ID f_bN7vQ2xJ4p)
Shane Legg: Co-founder of DeepMind; early work on AGI; author of the Machine Super Intelligence thesis (as of Mar 2026; Fact ID f_sL2nLw9pYr)
Stuart Russell: Co-author of 'AI: A Modern Approach' (the standard AI textbook); founder of CHAI at UC Berkeley; leading advocate for provably beneficial AI (source: en.wikipedia.org; Fact ID f_sR4fG5hI6j)
Toby Ord: Author of The Precipice; existential risk quantification; effective altruism (as of Mar 2026; Fact ID f_tO2nLw9pYr)
Vidur Kapur: Superforecaster affiliated with Good Judgment, Swift Centre, Samotsvety, and RAND; AI policy researcher at ControlAI; key member of Sentinel early warning system for global catastrophes (as of Mar 2026; Fact ID f_vK6tN2pQ4v)
Vipul Naik: Created Donations List Website tracking $72.8B in philanthropic donations; funded ~$255K in contract research for EA knowledge infrastructure; two-time International Mathematical Olympiad silver medalist; former MIRI researcher (as of Mar 2026; Fact ID f_vN6tN2pQ6v)
Will MacAskill: Effective altruism, longtermism, "What We Owe the Future", Giving What We Can (as of Mar 2026; Fact ID f_wM2nLw9pYr)
Yann LeCun: Pioneer of convolutional neural networks (CNNs); Chief AI Scientist at Meta; Turing Award 2018; vocal skeptic of AGI existential risk (source: en.wikipedia.org; Fact ID f_yL4fG5hI6j)
Yoshua Bengio: Deep learning pioneer; now AI safety advocate (as of Mar 2026; Fact ID f_yB2nLw9pYr)

Coverage

Applies To: person
Applicable Entities: 48
Have Current Data: 44 of 48 (92%)