
Eli Lifland

  • Primary Focus: AGI forecasting, scenario planning, AI governance
  • Key Achievements: #1 RAND Forecasting Initiative all-time leaderboard; co-authored AI 2027 scenario forecast; co-lead of Samotsvety forecasting team
  • Current Roles: Researcher at AI Futures Project; co-founder/advisor at Sage; guest fund manager at Long Term Future Fund
  • Educational Background: Computer science and economics degrees from University of Virginia
  • Notable Contributions: AI 2027 detailed scenario forecast; TextAttack framework for adversarial NLP attacks; top-ranked forecasting track record
  • Community Standing: Prominent figure in LessWrong and Effective Altruism communities; known for openness to critique and technical rigor

Sources:
  • Official Website: elilifland.com
  • Wikipedia: en.wikipedia.org

Eli Lifland is a prominent AI researcher, forecaster, and entrepreneur who has become one of the most influential voices in AGI timeline forecasting and AI safety planning. He ranks #1 on the RAND Forecasting Initiative all-time leaderboard and has consistently demonstrated exceptional forecasting accuracy across multiple platforms, including securing first place finishes for his Samotsvety team in 2020, 2021, and 2022.1 His work combines technical expertise in AI systems with practical governance insights, making him a key bridge between technical AI research and policy planning.

Lifland is best known for co-authoring the AI 2027 scenario forecast, a detailed exploration of potential AGI development trajectories that has sparked significant discussion in AI safety communities.23 The project, developed alongside Daniel Kokotajlo, Thomas Larsen, and Scott Alexander, provides a concrete scenario for how superhuman AI capabilities might emerge by 2027-2028, including geopolitical tensions, technical breakthroughs, and alignment challenges. While his timelines have shifted—moving from a 2027 median to approximately 2032-2035 by late 2025—his work remains influential in shaping how researchers and policymakers think about near-term AGI risks.45

Currently, Lifland serves as a founding researcher at the AI Futures Project, where he focuses on AGI capabilities forecasting and scenario planning.6 He also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long Term Future Fund.7 His previous technical contributions include working on Elicit at Ought and co-creating TextAttack, a Python framework for adversarial attacks in natural language processing that has been cited 129 times.8

Eli Lifland holds degrees in computer science and economics from the University of Virginia.9 Before entering AI research and forecasting professionally, he demonstrated competitive excellence in multiple domains, including competitive programming (Battlecode, where he placed in the top 4 in three out of six years), mobile gaming (Clash Royale competitions), and speedcubing (solving Rubik’s cubes, including one-handed and blindfolded variants, though he describes his performance as “decent but not world-class”).10

His early technical work focused on AI robustness and adversarial machine learning. While still in college or shortly after, he co-created TextAttack, a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP.11 This work resulted in multiple well-cited academic papers, including “Reevaluating Adversarial Examples in Natural Language” (129 citations) and contributions to the RAFT few-shot classification benchmark (77 citations).12 These technical contributions established his credibility in AI safety-adjacent research before he pivoted more fully toward forecasting and governance work.
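As an illustration of the kind of tooling TextAttack provides, the minimal sketch below runs a standard attack recipe against an off-the-shelf HuggingFace sentiment classifier. It follows TextAttack's documented quick-start pattern rather than any script of Lifland's; the specific model, dataset, and recipe choices are arbitrary, and class names may differ slightly across library versions.

```python
# Minimal sketch of TextAttack's quick-start pattern (not Lifland's own code):
# wrap a HuggingFace classifier, build a bundled attack recipe, and run it on
# a handful of examples. Model, dataset, and recipe choices are illustrative.
import transformers
import textattack

model_name = "textattack/bert-base-uncased-imdb"  # an off-the-shelf sentiment model
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

# TextFooler is one of the bundled recipes: meaning-preserving word
# substitutions that try to flip the classifier's prediction.
attack = textattack.attack_recipes.TextFoolerJin2019.build(wrapper)
dataset = textattack.datasets.HuggingFaceDataset("imdb", split="test")

args = textattack.AttackArgs(num_examples=10)
textattack.Attacker(attack, dataset, args).attack_dataset()
```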

Lifland’s forecasting career has been marked by exceptional, consistently documented success across multiple platforms and competitions. He ranks #1 on the RAND Forecasting Initiative (CSET-Foretell/INFER) all-time leaderboard and also held the #1 position for seasons one and two as of early 2024.13 On GJOpen, his Brier score of 0.23 is substantially better than the median of 0.301 (lower is better; a ratio of 0.76), and he took 2nd place in the Metaculus Economist 2021 tournament and 1st in the Salk Tournament as of September 2022.14
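For context, the Brier score is the mean squared error between probabilistic forecasts and binary outcomes, so lower is better (0 is perfect, and always answering 50% yields 0.25). The short sketch below, using made-up forecasts, shows the calculation behind figures like 0.23 versus the 0.301 median.

```python
# Brier score: mean squared error between forecast probabilities and binary
# outcomes. 0 is perfect; always answering 50% gives 0.25; lower is better.
# The forecasts below are made up purely to show the calculation.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7, 0.4]   # hypothetical probabilities assigned to "yes"
outcomes  = [1,   0,   1,   0]     # what actually happened

print(brier_score(forecasts, outcomes))   # 0.075 for these made-up numbers
print(round(0.23 / 0.301, 2))             # 0.76, the ratio cited above
```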

As co-lead of the Samotsvety Forecasting team (approximately 15 forecasters), Lifland helped guide the team to dominant performances across multiple years. In 2020, Samotsvety placed 1st with a relative score of -0.912 compared to -0.062 for 2nd place, with individual team members finishing 5th, 6th, and 7th.15 The team repeated this success in 2021, achieving 1st place with a relative score of -3.259 compared to -0.889 for 2nd place and -0.267 for Pro Forecasters, with individuals finishing 1st, 2nd, 4th, and 5th.16 Samotsvety holds positions 1, 2, 3, and 4 in INFER’s all-time ranking, with some members achieving Superforecaster™ status.17
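The negative numbers above are relative scores, where more negative means better than the comparison group. As a rough illustration only (INFER's and Foretell's exact scoring rules may differ), one common convention is the forecaster's Brier score minus the crowd median's Brier score on each question, averaged:

```python
# Toy "relative score": the forecaster's Brier score minus the crowd median's
# Brier score on each question, averaged. More negative = better than the
# crowd. This is one common convention; INFER/Foretell's exact scoring rules
# may differ, and all numbers here are invented.
def brier(p, outcome):
    return (p - outcome) ** 2

questions = [
    # (forecaster probability, crowd median probability, outcome)
    (0.90, 0.60, 1),
    (0.10, 0.40, 0),
    (0.75, 0.50, 1),
]

relative = sum(brier(p, o) - brier(m, o) for p, m, o in questions) / len(questions)
print(relative)  # about -0.16: beats the crowd on these made-up questions
```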

The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.18 Notably, Lifland worked with Samotsvety on AI existential risk forecasting for the Future Fund (now the Open Philanthropy Worldview Prize), engaging in weekly discussions to decompose risks. In reviewing Joe Carlsmith’s analysis (Carlsmith’s own headline estimate later moved from an initial 5% to greater than 10%), Lifland personally estimated a 30% chance of existential risk from intent misalignment or power-seeking AI by 2070.19
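Decomposition here means breaking an overall probability into a chain of conditional claims, in the style of Carlsmith's report, and multiplying them. The toy sketch below uses invented numbers purely to show the arithmetic; it does not reproduce Carlsmith's or Lifland's actual premises or estimates.

```python
# Toy Carlsmith-style decomposition: an overall probability expressed as a
# product of conditional claims. Every number below is invented to show the
# arithmetic; none of these are Carlsmith's or Lifland's actual estimates.
steps = [
    ("Advanced, agentic, strategically aware AI is developed by 2070", 0.8),
    ("Building aligned systems is much harder than building misaligned ones", 0.5),
    ("Misaligned power-seeking systems get deployed at scale anyway", 0.5),
    ("Deployment leads to unrecoverable human disempowerment", 0.4),
]

p = 1.0
for claim, prob in steps:
    p *= prob
    print(f"{prob:.2f}  {claim}")

print(f"\nOverall probability under these toy inputs: {p:.2f}")  # 0.08
```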

Lifland serves as a founding researcher at the AI Futures Project, a 501(c)(3) organization (EIN 99-4320292) focused on AGI forecasting, scenario planning, and policy engagement.20 The organization operates entirely on charitable contributions and is comfortably funded through the medium term via private donations and a Survival and Flourishing Fund (SFF) grant.21

The project’s flagship output is AI 2027, a detailed scenario forecast exploring how superintelligence might emerge between 2024 and 2027-2028.22 The scenario was co-authored with Daniel Kokotajlo (Executive Director of AI Futures Project and former OpenAI researcher), Thomas Larsen (who contributed full-time and has experience in both technical AI safety and AI policy), and Scott Alexander (who primarily assisted with rewriting).23 Romeo Dean contributed supplements on compute and security considerations.24

The AI 2027 forecast presents a concrete narrative of AI development including:

  • Superhuman AI coders emerging by 2027-2028 (median estimates), capable of automating significant portions of AI research and development25
  • Superhuman AI researchers following approximately one year after superhuman coders, improving data efficiency and accelerating progress26
  • Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines27
  • Alignment challenges, including exploration of safer model series using chain-of-thought reasoning to address failures28
  • Economic impacts, including widespread job displacement and concerns about AI company revenues as indicators of capability progress29

The project has sparked considerable discussion through podcasts, webinars (including a CEPR webinar on the AI 2027 forecast), and community engagement.30 Lifland has been featured discussing the implications for alignment research, safety, and international cooperation in venues including Lawfare Media and ControlAI.3132

Lifland’s AGI timeline estimates have evolved significantly as new evidence emerges. His median forecast shifted from 2027 (when AI 2027 was initially published) to 2032 by December 2024, then to 2031 by April 2025, and ultimately to approximately 2035 by late 2025 and early 2026.3334 These updates reflect his assessment that “discrete capabilities progress appears slower in 2025 than in 2024,” despite 2024’s rapid advancement.35

His forecasting methodology integrates multiple data streams:

  • Benchmark tracking: Monitoring METR’s time horizon suite, RE-Bench, Cybench, OSWorld, SWE-Bench Verified, FrontierMath, and CBRN evaluations36
  • Revenue extrapolations: Tracking AGI company revenues as indicators of capability deployment, though he acknowledges these as weaker evidence than full models37
  • Model architecture progress: Assessing advances in distillation and reinforcement learning algorithms like PPO38
  • Hardware considerations: Accounting for compute availability and efficiency, including RL’s relative inefficiency (citing Toby Ord’s estimate that RL is approximately 1,000,000x less efficient than pre-training)39

For his January 2026 forecast, Lifland’s median for Automated Coder capabilities is approximately 2035 (1.5 years later than the AI 2027 model implies), while his median for TED-AI (a more general intelligence milestone) is 1.5 years earlier, reflecting his modeling of faster takeoff dynamics.40 He has noted that 2025 was mixed relative to the AI 2027 scenario: some predictions fell short of the scenario’s pace, while other milestones were met or exceeded, with RE-Bench scores coming in higher than anticipated and OpenAI achieving CBRN “High” and Cyber “Medium” ratings.41
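As a rough illustration of the benchmark-extrapolation style of reasoning described above, the sketch below projects when a METR-style task time horizon would cross an arbitrary target under exponential versus superexponential doubling. The starting horizon, doubling times, shrink factor, and target are all invented for illustration and are not the AI 2027 timelines model's actual parameters.

```python
import math
from datetime import date, timedelta

# Toy extrapolation of a METR-style "task time horizon": how long until the
# horizon grows from H0 to a target, if each doubling takes a fixed time
# (exponential) versus successively shorter times (superexponential)?
# Every parameter here is invented and is NOT from the AI 2027 timelines model.
START = date(2025, 1, 1)
H0_HOURS = 1.0            # assumed horizon at START
TARGET_HOURS = 8 * 160    # assumed horizon for automating months-long projects
DOUBLINGS = math.log2(TARGET_HOURS / H0_HOURS)

def exponential_crossing(doubling_months):
    """Each doubling takes the same number of months."""
    return START + timedelta(days=30.44 * doubling_months * DOUBLINGS)

def superexponential_crossing(first_doubling_months, shrink=0.9):
    """Each successive doubling takes `shrink` times as long as the previous."""
    months, step = 0.0, first_doubling_months
    for _ in range(math.ceil(DOUBLINGS)):
        months += step
        step *= shrink
    return START + timedelta(days=30.44 * months)

print("exponential crossing:      ", exponential_crossing(doubling_months=6))
print("superexponential crossing: ", superexponential_crossing(first_doubling_months=6))
```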

Other Research and Technical Contributions

Beyond forecasting, Lifland has made technical contributions to AI safety and robustness research. His work at Ought involved contributions to Elicit, an AI-powered research assistant designed to help researchers more efficiently process academic literature.42 He also contributed to AI robustness research during this period, which aligns with his earlier work on adversarial examples.

His academic publications include:

  • TextAttack framework and associated papers on adversarial NLP attacks and evaluation43
  • RAFT Benchmark for few-shot text classification (77 citations)44
  • SaSTL (Spatial Aggregation Signal Temporal Logic) for runtime monitoring in smart cities (36 citations), published with collaborators at IEEE ICCPS 202045
  • Papers on spatial-temporal specification-based monitoring systems (31 citations)46

These contributions demonstrate breadth across AI safety, robustness, and monitoring systems. His Google Scholar profile, which shows an h-index of 7, lists his full publication details.47

Lifland has also contributed to forecasting methodology discussions within the Effective Altruism community. In a March 2024 podcast with Ozzie Gooen, he critiqued shallow crowd forecasting for high-importance questions, arguing that platforms may underweight questions requiring deeper research rather than quick probability estimates.48 He has advocated for “purer” AI alignment approaches such as AI-assisted alignment and empirical iteration on failures, referencing discussions on aligning transformative AI if developed very soon.49

Lifland co-founded Sage, an organization focused on building interactive AI explainers and forecasting tools.50 One of Sage’s key projects is AI Digest, which received $550,000 from Open Philanthropy for its work, with an additional $550,000 for forecasting projects.51 The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.

As co-founder and advisor, Lifland continues to guide Sage’s strategic direction while primarily focusing his research efforts on the AI Futures Project.52 The organization operates within the effective altruism ecosystem and has benefited from networking through programs like the Constellation Astra Fellowship, which facilitated connections leading to Lifland’s collaboration with Romeo Dean and Daniel Kokotajlo on AI 2027.53

Lifland plays an active role in the AI safety and alignment communities, particularly through LessWrong and the Effective Altruism Forum. He is known for engaging openly with criticism and maintaining a technically rigorous approach to forecasting and scenario planning. Community members describe him as helpful, kind, and receptive to feedback, with a willingness to award bounties for identifying errors in his work.54

He serves as a mentor in the MATS Program (focusing on the Strategy & Forecasting and Policy & Governance streams), helping guide the next generation of AI safety researchers.55 His work has been featured in the documentary “Making God,” which explores AGI risks and was slated for release in 2025 or later.56 He has also contributed to discussions on navigating the AI alignment landscape, emphasizing the importance of upskilling for AI safety work while critiquing what he views as over-recruitment of junior researchers in the alignment community.57

Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities, reflecting his integration into effective altruism principles.58 His career focus on ensuring that “advanced AI goes well” aligns with long-term future priorities, and he is involved with the Long Term Future Fund as a guest fund manager.59

His collaborative approach extends to the Constellation Astra Fellowship, where over 80% of the first cohort were placed in AI safety roles at organizations including Redwood Research, METR, Anthropic, OpenAI, and DeepMind.60 Through the fellowship's mentorship relationships (for example, mentor Buck Shlegeris connected one fellow to a team leadership role at Redwood Research), Lifland has helped facilitate career development in AI safety.61

Lifland’s work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to LessWrong, the EA Forum, and Substack in June 2025, forecaster “titotal” described the model’s fundamental structure as “highly questionable,” citing little empirical validation, discrepancies between the write-up and the underlying code, and poor justification for parameters such as the superexponential time-horizon growth curve.62 Titotal, identifying as a physicist, argued that models need strong conceptual and empirical justification before influencing major decisions, characterizing AI 2027 as resembling a “shoddy toy model stapled to a sci-fi short story” disguised as rigorous research.63

Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions (such as whether to attend law school) based on shaky forecasts.64 However, others counter that inaction on short timelines could be costlier if the forecasts prove accurate, and that models inevitably inform real-world decisions regardless of their limitations.65

Lifland responded to these criticisms with notable openness, acknowledging errors and reviewing titotal’s critique for factual accuracy. He agreed to changes in the model write-up and paid $500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.6667 He released an updated model addressing some concerns, including adding more weight on superexponential growth and extending overall timelines. While defending the core plausibility of superhuman coders emerging by 2027, Lifland emphasized that the model represents his team’s best current guess and challenged critics to develop better alternatives.68

Other criticisms include:

  • Lack of skeptic engagement: Some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views, focusing more on detailed scenario planning than on persuading dissenters69
  • Unverifiable predictions: Concerns that predictions like “METR tasks taking approximately 16 hours may not scale to model improvement complexity” are difficult to validate empirically70
  • Forecasting depth debates: Disagreements with other forecasters like Ozzie Gooen about whether platforms adequately weight high-importance questions, with Lifland arguing they underweight questions requiring deep research71

Lifland has been forthright about forecast misses, noting in 2025 that some predictions fell below expectations (such as discrete capabilities progress appearing slower than in 2024) while others exceeded them.72 He maintains that despite imperfections, models like AI 2027 represent state-of-the-art thinking and provide valuable frameworks for navigating uncertainty about AGI development.

No major personal controversies or ethical issues have been documented beyond these methodological debates, and Lifland’s willingness to engage with criticism has generally been well-received in the community.73

Several major uncertainties surround Lifland’s forecasts and their implications:

  1. Timeline accuracy: While Lifland’s median forecast has shifted to approximately 2035 for key AGI milestones, substantial uncertainty remains about whether superhuman AI capabilities will emerge on this timeline, sooner, or significantly later. His models acknowledge this uncertainty, but the true trajectory depends on factors including compute availability, algorithmic breakthroughs, and alignment progress.

  2. Model validity: The methodological critiques of the AI 2027 timelines model raise questions about whether the underlying technical approach (including superexponential growth assumptions and parameter choices) accurately captures AI development dynamics. Even with updates, the model’s predictive power remains untested by time.

  3. Alignment solutions: Lifland’s scenarios explore alignment challenges, but whether proposed solutions like chain-of-thought reasoning or AI-assisted alignment research will prove sufficient for safe superhuman AI remains deeply uncertain.

  4. Geopolitical dynamics: The AI 2027 scenario includes significant US-China race dynamics, but how international cooperation or competition will actually unfold—and whether it will prioritize safety over capability advancement—is unpredictable.

  5. Economic and social impacts: While the scenarios explore job displacement and revenue growth as indicators, the actual economic and social consequences of rapid AI progress remain highly uncertain, including questions about wealth distribution, governance structures, and societal adaptation.

  6. Forecasting generalization: While Lifland has an exceptional track record on specific forecasting platforms, the degree to which success on shorter-term, more constrained questions predicts accuracy on unprecedented, long-term AGI development remains unclear.

  1. Samotsvety Track Record

  2. AI 2027 About Page

  3. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027

  4. Eli Lifland LessWrong Profile

  5. Marketing AI Institute - Moving Back AGI Timeline

  6. AI Futures Project About Page

  7. Eli Lifland Personal Website

  8. Eli Lifland Google Scholar Profile

  9. Eli Lifland Personal Website

  10. Eli Lifland Personal Website

  11. Eli Lifland Google Scholar Profile

  12. Eli Lifland Google Scholar Profile

  13. Samotsvety Track Record

  14. Samotsvety Track Record

  15. Samotsvety Track Record

  16. Samotsvety Track Record

  17. Samotsvety Track Record

  18. Quantified Uncertainty - Eli Lifland Podcast

  19. Quantified Uncertainty - Eli Lifland on AI Alignment

  20. AI Futures Project

  21. Zvi Substack - Big Nonprofits Post 2025

  22. AI 2027 About Page

  23. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027

  24. AI 2027 About Page

  25. AI 2027 Website

  26. AI 2027 Website

  27. ControlAI Newsletter - Future of AI Special Edition

  28. Eli Lifland LessWrong Profile

  29. Eli Lifland LessWrong Profile

  30. CEPR Webinar - AI 2027 Scenario Forecast

  31. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027

  32. ControlAI Newsletter - Future of AI Special Edition

  33. Eli Lifland Personal Website

  34. Marketing AI Institute - Moving Back AGI Timeline

  35. Eli Lifland LessWrong Profile

  36. Eli Lifland LessWrong Profile

  37. Eli Lifland LessWrong Profile

  38. AI 2027 Website

  39. Eli Lifland LessWrong Profile

  40. Eli Lifland LessWrong Profile

  41. Eli Lifland LessWrong Profile

  42. Eli Lifland Personal Website

  43. Eli Lifland Google Scholar Profile

  44. Eli Lifland Google Scholar Profile

  45. Eli Lifland Google Scholar Profile

  46. Eli Lifland Google Scholar Profile

  47. Eli Lifland Google Scholar Profile

  48. EA Forum - Is Forecasting a Promising EA Cause Area

  49. EA Forum - Eli Lifland on Navigating AI Alignment

  50. Eli Lifland Personal Website

  51. Manifund - AI Digest Project

  52. Eli Lifland Personal Website

  53. Constellation Astra Fellowship

  54. EA Forum - Eli Lifland User Profile

  55. MATS Program - Eli Lifland Mentor Profile

  56. EA Forum - Making God Documentary

  57. Quantified Uncertainty - Eli Lifland Podcast

  58. Eli Lifland Personal Website

  59. Eli Lifland Personal Website

  60. Constellation Astra Fellowship

  61. Constellation Astra Fellowship

  62. LessWrong - Deep Critique of AI 2027 Timeline Models

  63. LessWrong - Deep Critique of AI 2027 Timeline Models

  64. EA Forum - Practical Value of Flawed Models

  65. EA Forum - Practical Value of Flawed Models

  66. AI Futures Notes Substack - Response to Titotal Critique

  67. EA Forum - Practical Value of Flawed Models

  68. AI Futures Notes Substack - Response to Titotal Critique

  69. ControlAI Newsletter - Future of AI Special Edition

  70. Eli Lifland LessWrong Profile

  71. EA Forum - Is Forecasting a Promising EA Cause Area

  72. Eli Lifland LessWrong Profile

  73. EA Forum - Eli Lifland User Profile