# Eli Lifland
## Quick Assessment
| Attribute | Assessment |
|---|---|
| Primary Focus | AGI forecasting, scenario planning, AI governance |
| Key Achievements | #1 RAND Forecasting Initiative all-time leaderboard; co-authored AI 2027 scenario forecast; co-lead of Samotsvety forecasting team |
| Current Roles | Researcher at AI Futures Project; co-founder/advisor at Sage; guest fund manager at Long Term Future Fund |
| Educational Background | Computer science and economics degrees from University of Virginia |
| Notable Contributions | AI 2027 detailed scenario forecast; TextAttack framework for adversarial NLP attacks; top-ranked forecasting track record |
| Community Standing | Prominent figure in LessWrong and Effective Altruism communities; known for openness to critique and technical rigor |
## Key Links
| Source | Link |
|---|---|
| Official Website | elilifland.com |
| Wikipedia | en.wikipedia.org |
## Overview
Eli Lifland is a prominent AI researcher, forecaster, and entrepreneur who has become one of the most influential voices in AGI timeline forecasting and AI safety planning. He ranks #1 on the RAND Forecasting Initiative all-time leaderboard and has consistently demonstrated exceptional forecasting accuracy across multiple platforms, including securing first place finishes for his Samotsvety team in 2020, 2021, and 2022.1 His work combines technical expertise in AI systems with practical governance insights, making him a key bridge between technical AI research and policy planning.
Lifland is best known for co-authoring the AI 2027 scenario forecast, a detailed exploration of potential AGI development trajectories that has sparked significant discussion in AI safety communities.23 The project, developed alongside Daniel Kokotajlo, Thomas Larsen, and Scott Alexander, provides a concrete scenario for how superhuman AI capabilities might emerge by 2027-2028, including geopolitical tensions, technical breakthroughs, and alignment challenges. While his timelines have shifted—moving from a 2027 median to approximately 2032-2035 by late 2025—his work remains influential in shaping how researchers and policymakers think about near-term AGI risks.45
Currently, Lifland serves as a founding researcher at the AI Futures Project, where he focuses on AGI capabilities forecasting and scenario planning.6 He also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long Term Future Fund.7 His previous technical contributions include working on Elicit at Ought and co-creating TextAttack, a Python framework for adversarial attacks in natural language processing that has been cited 129 times.8
## Background and Education
Eli Lifland holds degrees in computer science and economics from the University of Virginia.9 Before entering AI research and forecasting professionally, he demonstrated competitive excellence in multiple domains, including competitive programming (Battlecode, where he placed in the top 4 in three out of six years), mobile gaming (Clash Royale competitions), and speedcubing (solving Rubik’s cubes, including one-handed and blindfolded variants, though he describes his performance as “decent but not world-class”).10
His early technical work focused on AI robustness and adversarial machine learning. While still in college or shortly after, he co-created TextAttack, a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP.11 This work resulted in multiple well-cited academic papers, including “Reevaluating Adversarial Examples in Natural Language” (129 citations) and contributions to the RAFT few-shot classification benchmark (77 citations).12 These technical contributions established his credibility in AI safety-adjacent research before he pivoted more fully toward forecasting and governance work.
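For readers unfamiliar with the framework, the sketch below shows the kind of workflow TextAttack supports: wrapping a trained text classifier and running a published attack recipe against a dataset. It follows the recipe API shown in the project’s documentation; the class names, the checkpoint name, and the exact signatures are assumptions that may differ across versions, so treat this as an illustrative sketch rather than Lifland’s own code.

```python
# Illustrative sketch of an adversarial attack with TextAttack (API per its
# documentation; names and signatures assumed and may vary by version).
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Wrap a fine-tuned sentiment classifier so TextAttack can query it.
checkpoint = "textattack/bert-base-uncased-imdb"  # illustrative model name
model = transformers.AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = transformers.AutoTokenizer.from_pretrained(checkpoint)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler recipe and run it on a handful of test examples,
# swapping words until the model's prediction flips.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
results = attacker.attack_dataset()
```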
## Forecasting Career and Track Record
Lifland’s forecasting career has been marked by exceptional and consistently documented success across multiple platforms and competitions. He ranks #1 on the RAND Forecasting Initiative (CSET-Foretell/INFER) all-time leaderboard and also held the #1 position for seasons one and two as of early 2024.13 On GJOpen, his Brier score of 0.23 significantly outperforms the median of 0.301 (ratio 0.76), and he secured 2nd place in the Metaculus Economist 2021 tournament and 1st in the Salk Tournament as of September 2022.14
As co-lead of the Samotsvety Forecasting team (approximately 15 forecasters), Lifland helped guide the team to dominant performances across multiple years. In 2020, Samotsvety placed 1st with a relative score of -0.912 compared to -0.062 for 2nd place, with individual team members finishing 5th, 6th, and 7th.15 The team repeated this success in 2021, achieving 1st place with a relative score of -3.259 compared to -0.889 for 2nd place and -0.267 for Pro Forecasters, with individuals finishing 1st, 2nd, 4th, and 5th.16 Samotsvety holds positions 1, 2, 3, and 4 in INFER’s all-time ranking, with some members achieving Superforecaster™ status.17
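As context for these numbers, a binary-outcome Brier score is simply the mean squared error between stated probabilities and realized outcomes (0 if the event did not happen, 1 if it did); lower is better, and a constant 50% forecast scores 0.25. The snippet below is a toy calculation with made-up forecasts, not Lifland’s data; the “relative score” shown assumes the common convention of subtracting the crowd median’s score on the same questions (so more negative is better), which may not match the exact formula used by INFER or GJOpen.

```python
from statistics import mean

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes (lower is better)."""
    return mean((p - o) ** 2 for p, o in zip(probs, outcomes))

# Toy data: one forecaster vs. the crowd median on the same five questions.
outcomes     = [1,    0,    1,    1,    0]
forecaster   = [0.80, 0.10, 0.70, 0.95, 0.30]
crowd_median = [0.60, 0.40, 0.55, 0.70, 0.50]

f_score = brier(forecaster, outcomes)    # ≈ 0.05
c_score = brier(crowd_median, outcomes)  # ≈ 0.17
print(f"forecaster Brier  : {f_score:.3f}")
print(f"crowd-median Brier: {c_score:.3f}")
print(f"ratio (cf. 0.23 / 0.301 ≈ 0.76 on GJOpen): {f_score / c_score:.2f}")
print(f"relative score (forecaster - crowd): {f_score - c_score:+.3f}")
```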
The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.18 Notably, Lifland worked with Samotsvety on AI existential risk forecasting for the Future Fund (now the Open Philanthropy Worldview Prize), engaging in weekly discussions to decompose risks. In reviewing Joe Carlsmith’s analysis, he personally estimated a 30% chance of existential risk from intent misalignment or power-seeking AI by 2070 (Carlsmith himself initially estimated 5%, later updating to greater than 10%).19
## AI Futures Project and AI 2027
Lifland serves as a founding researcher at the AI Futures Project, a 501(c)(3) organization (EIN 99-4320292) focused on AGI forecasting, scenario planning, and policy engagement.20 The organization is comfortably funded through the medium term via private donations and a Survival and Flourishing Fund (SFF) grant, though it operates entirely on charitable contributions.21
The project’s flagship output is AI 2027, a detailed scenario forecast exploring how superintelligence might emerge between 2024 and 2027-2028.22 The scenario was co-authored with Daniel Kokotajlo (Executive Director of AI Futures Project and former OpenAI researcher), Thomas Larsen (who contributed full-time and has experience in both technical AI safety and AI policy), and Scott Alexander (who primarily assisted with rewriting).23 Romeo Dean contributed supplements on compute and security considerations.24
The AI 2027 forecast presents a concrete narrative of AI development including:
- Superhuman AI coders emerging by 2027-2028 (median estimates), capable of automating significant portions of AI research and development25
- Superhuman AI researchers following approximately one year after superhuman coders, improving data efficiency and accelerating progress26
- Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines27
- Alignment challenges, including exploration of safer model series using chain-of-thought reasoning to address failures28
- Economic impacts, including widespread job displacement and concerns about AI company revenues as indicators of capability progress29
The project has sparked considerable discussion through podcasts, webinars (including a CEPR webinar on the AI 2027 forecast), and community engagement.30 Lifland has been featured discussing the implications for alignment research, safety, and international cooperation in venues including Lawfare Media and ControlAI.3132
## Timeline Updates and Methodology
Lifland’s AGI timeline estimates have evolved significantly as new evidence emerges. His median forecast shifted from 2027 (when AI 2027 was initially published) to 2032 by December 2024, then to 2031 by April 2025, and ultimately to approximately 2035 by late 2025 and early 2026.3334 These updates reflect his assessment that “discrete capabilities progress appears slower in 2025 than in 2024,” despite 2024’s rapid advancement.35
His forecasting methodology integrates multiple data streams:
- Benchmark tracking: Monitoring METR’s time horizon suite, RE-Bench, Cybench, OSWorld, SWE-Bench Verified, FrontierMath, and CBRN evaluations36
- Revenue extrapolations: Tracking AGI company revenues as indicators of capability deployment, though he acknowledges these as weaker evidence than full models37
- Model architecture progress: Assessing advances in distillation and reinforcement learning algorithms like PPO38
- Hardware considerations: Accounting for compute availability and efficiency, including RL’s relative inefficiency (citing Toby Ord’s estimate that RL is approximately 1,000,000x less efficient than pre-training)39
For his January 2026 forecast, Lifland’s median for Automated Coder capabilities is approximately 2035 (1.5 years later than the AI 2027 model), while his median for TED-AI (a more general intelligence milestone) is 1.5 years earlier due to modeling faster takeoff dynamics.40 He has noted that some of his 2025 predictions missed (for example, RE-Bench scores came in higher than he anticipated), while others, such as CBRN capabilities, met or exceeded the forecast milestones (OpenAI achieved CBRN “High” and Cyber “Medium” ratings).41
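To make the exponential-versus-superexponential distinction that runs through these forecasts concrete, the toy sketch below extrapolates a METR-style “50% task time horizon” under two assumptions: a constant doubling time, and a doubling time that itself shrinks with each doubling (one simple way to operationalize “superexponential” growth). The starting horizon, target, and rates are illustrative assumptions, not figures from Lifland’s model; the point is only that the two curves reach a multi-month horizon years apart.

```python
# Toy extrapolation of an AI "time horizon" benchmark (hours of human task time
# at which models succeed ~50% of the time). Illustrative numbers only.

def years_to_reach(target_hours, start_hours, doubling_time_years, shrink=1.0):
    """Years until the horizon reaches target_hours.

    shrink=1.0 -> plain exponential growth (constant doubling time).
    shrink<1.0 -> 'superexponential' growth: each successive doubling takes
                  shrink * (previous doubling time).
    """
    horizon, years, dt = start_hours, 0.0, doubling_time_years
    while horizon < target_hours:
        horizon *= 2
        years += dt
        dt *= shrink
    return years

START = 2.0     # assumed current horizon: ~2 hours
TARGET = 160.0  # roughly a work-month of human task time
print("exponential     :", round(years_to_reach(TARGET, START, 0.6), 1), "years")
print("superexponential:", round(years_to_reach(TARGET, START, 0.6, shrink=0.8), 1), "years")
```

Under these made-up parameters the exponential curve takes about 4.2 years to reach the target while the superexponential one takes about 2.4, which is the shape of disagreement at issue in the timelines-model debates discussed below.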
## Other Research and Technical Contributions
Beyond forecasting, Lifland has made technical contributions to AI safety and robustness research. His work at Ought involved contributions to Elicit, an AI-powered research assistant designed to help researchers more efficiently process academic literature.42 He also contributed to AI robustness research during this period, which aligns with his earlier work on adversarial examples.
His academic publications include:
- TextAttack framework and associated papers on adversarial NLP attacks and evaluation43
- RAFT Benchmark for few-shot text classification (77 citations)44
- SaSTL (Spatial Aggregation Signal Temporal Logic) for runtime monitoring in smart cities (36 citations), co-authored with collaborators at IEEE ICCPS 202045
- Papers on spatial-temporal specification-based monitoring systems (31 citations)46
These contributions demonstrate breadth across AI safety, robustness, and monitoring systems. His Google Scholar profile shows an h-index of 7, with full publication details available.47
Lifland has also contributed to forecasting methodology discussions within the Effective Altruism community. In a March 2024 podcast with Ozzie Gooen, he critiqued shallow crowd forecasting for high-importance questions, arguing that platforms may underweight questions requiring deeper research rather than quick probability estimates.48 He has advocated for “purer” AI alignment approaches such as AI-assisted alignment and empirical iteration on failures, referencing discussions on aligning transformative AI if developed very soon.49
## Sage and AI Digest
Lifland co-founded Sage, an organization focused on building interactive AI explainers and forecasting tools.50 One of Sage’s key projects is AI Digest, which received $550,000 from Open Philanthropy for its work, with an additional $550,000 for forecasting projects.51 The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.
As co-founder and advisor, Lifland continues to guide Sage’s strategic direction while primarily focusing his research efforts on the AI Futures Project.52 The organization operates within the effective altruism ecosystem and has benefited from networking through programs like the Constellation Astra Fellowship, which facilitated connections leading to Lifland’s collaboration with Romeo Dean and Daniel Kokotajlo on AI 2027.53
## Role in the AI Safety Community
Lifland plays an active role in the AI safety and alignment communities, particularly through LessWrong and the Effective Altruism Forum. He is known for engaging openly with criticism and maintaining a technically rigorous approach to forecasting and scenario planning. Community members describe him as helpful, kind, and receptive to feedback, with a willingness to award bounties for identifying errors in his work.54
He serves as a mentor in the MATS Program (focusing on Strategy & Forecasting, Policy & Governance streams), helping guide the next generation of AI safety researchers.55 His work has been featured in the documentary “Making God,” which explores AGI risks, with a release in 2025 or later.56 He has also contributed to discussions on navigating the AI alignment landscape, emphasizing the importance of upskilling for AI safety work while critiquing what he views as over-recruitment of junior researchers in the alignment community.57
Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities, reflecting his integration into effective altruism principles.58 His career focus on ensuring that “advanced AI goes well” aligns with long-term future priorities, and he is further connected to the community’s funding infrastructure through his role as a guest fund manager at the Long Term Future Fund.59
His collaborative approach extends to the Constellation Astra Fellowship, where over 80% of the first cohort were placed in AI safety roles at organizations including Redwood Research, METR, Anthropic, OpenAI, and DeepMind.60 Through mentorship relationships (such as with Buck Shlegeris, who connected a fellow to a team leadership role at Redwood Research), Lifland has helped facilitate career development in AI safety.61
## Criticisms and Controversies
Lifland’s work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to LessWrong, the EA Forum, and Substack in June 2025, forecaster “titotal” described the model’s fundamental structure as “highly questionable,” with little empirical validation, misrepresented code, and poor justification for parameters like superexponential time horizon growth curves.62 Titotal, identifying as a physicist, argued that models need strong conceptual and empirical justifications before influencing major decisions, characterizing AI 2027 as resembling a “shoddy toy model stapled to a sci-fi short story” disguised as rigorous research.63
Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions (such as whether to attend law school) based on shaky forecasts.64 However, others counter that inaction on short timelines could be costlier if the forecasts prove accurate, and that models inevitably inform real-world decisions regardless of their limitations.65
Lifland responded to these criticisms with notable openness, acknowledging errors and reviewing titotal’s critique for factual accuracy. He agreed to changes in the model write-up and paid $500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.6667 He released an updated model addressing some concerns, including adding more weight on superexponential growth and extending overall timelines. While defending the core plausibility of superhuman coders emerging by 2027, Lifland emphasized that the model represents his team’s best current guess and challenged critics to develop better alternatives.68
Other criticisms include:
- Lack of skeptic engagement: Some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views, focusing more on detailed scenario planning than on persuading dissenters69
- Unverifiable predictions: Concerns that predictions like “METR tasks taking approximately 16 hours may not scale to model improvement complexity” are difficult to validate empirically70
- Forecasting depth debates: Disagreements with other forecasters like Ozzie Gooen about whether platforms adequately weight high-importance questions, with Lifland arguing they underweight questions requiring deep research71
Lifland has been forthright about forecast misses, noting in 2025 that some predictions fell below expectations (such as discrete capabilities progress appearing slower than in 2024) while others exceeded them.72 He maintains that despite imperfections, models like AI 2027 represent state-of-the-art thinking and provide valuable frameworks for navigating uncertainty about AGI development.
No major personal controversies or ethical issues have been documented beyond these methodological debates, and Lifland’s willingness to engage with criticism has generally been well-received in the community.73
## Key Uncertainties
Several major uncertainties surround Lifland’s forecasts and their implications:
- Timeline accuracy: While Lifland’s median forecast has shifted to approximately 2035 for key AGI milestones, substantial uncertainty remains about whether superhuman AI capabilities will emerge on this timeline, sooner, or significantly later. His models acknowledge this uncertainty, but the true trajectory depends on factors including compute availability, algorithmic breakthroughs, and alignment progress.
- Model validity: The methodological critiques of the AI 2027 timelines model raise questions about whether the underlying technical approach (including superexponential growth assumptions and parameter choices) accurately captures AI development dynamics. Even with updates, the model’s predictive power remains untested by time.
- Alignment solutions: Lifland’s scenarios explore alignment challenges, but whether proposed solutions like chain-of-thought reasoning or AI-assisted alignment research will prove sufficient for safe superhuman AI remains deeply uncertain.
- Geopolitical dynamics: The AI 2027 scenario includes significant US-China race dynamics, but how international cooperation or competition will actually unfold—and whether it will prioritize safety over capability advancement—is unpredictable.
- Economic and social impacts: While the scenarios explore job displacement and revenue growth as indicators, the actual economic and social consequences of rapid AI progress remain highly uncertain, including questions about wealth distribution, governance structures, and societal adaptation.
- Forecasting generalization: While Lifland has an exceptional track record on specific forecasting platforms, the degree to which success on shorter-term, more constrained questions predicts accuracy on unprecedented, long-term AGI development remains unclear.