
Eli Lifland

Person

Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting track record, the AI Futures timelines model, and his contributions to AI safety discourse.

Related Organizations: AI Futures Project · Samotsvety · Metaculus · Open Philanthropy · LessWrong

Quick Assessment

Primary Focus: AGI forecasting, scenario planning, AI governance
Key Achievements: #1 on the RAND Forecasting Initiative all-time leaderboard; co-authored the AI 2027 scenario forecast; co-lead of the Samotsvety forecasting team
Current Roles: Co-founder and researcher at the AI Futures Project; co-founder/advisor at Sage; guest fund manager at the Long Term Future Fund
Educational Background: Computer science and economics degrees from the University of Virginia
Notable Contributions: AI 2027 scenario forecast; AI Futures timelines model; top-ranked forecasting track record
Official Website: elilifland.com

Overview

Eli Lifland is a forecaster and AI safety researcher who ranks #1 on the RAND Forecasting Initiative all-time leaderboard. He co-leads the Samotsvety forecasting team, which placed first in the CSET-Foretell/INFER competition in 2020, 2021, and 2022.[1] His work focuses on AGI timeline forecasting, scenario planning, and AI safety.

Lifland co-founded the AI Futures Project alongside Daniel Kokotajlo and Thomas Larsen, and co-authored AI 2027, a detailed scenario forecast exploring potential AGI development trajectories.[2][3] The project, with contributions from Scott Alexander and Romeo Dean, provides a concrete scenario for how superhuman AI capabilities might emerge, including geopolitical tensions, technical breakthroughs, and alignment challenges.

Lifland also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long Term Future Fund.[4] He previously worked on Elicit at Ought and co-created TextAttack, a Python framework for adversarial attacks in natural language processing.[5]

AI Futures Project and AI 2027

Lifland is a co-founder and researcher at the AI Futures Project, a 501(c)(3) organization focused on AGI forecasting, scenario planning, and policy engagement.[6] The organization was co-founded with Daniel Kokotajlo (Executive Director, former OpenAI researcher) and Thomas Larsen (founder of the Center for AI Policy).[7]

The project's flagship output is AI 2027, a detailed scenario forecast released in April 2025 exploring how superintelligence might emerge.[8] The scenario was co-authored with Scott Alexander (who primarily assisted with rewriting) and Romeo Dean (who contributed supplements on compute and security considerations).[9]

The AI 2027 forecast presents a concrete narrative of AI development including:

  • Increasingly capable AI agents automating significant portions of AI research and development[10]
  • Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines[11]
  • Alignment challenges, including exploration of safer model series using chain-of-thought reasoning to address failures[12]
  • Economic impacts, including widespread job displacement[13]

The project received significant attention and has been discussed in venues including Lawfare Media, ControlAI, and a CEPR webinar.[14][15][16]

AI Futures Timelines Model

The AI Futures Project maintains a quantitative timelines model that generates probability distributions over key AGI milestones such as Automated Coder (AC) and superintelligence (ASI). The model draws on inputs such as benchmark tracking, compute availability, and algorithmic progress, producing forecasts that team members then adjust based on their individual judgment.[17]
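
The project's published materials describe the model only at a high level, but its general shape — sample uncertain inputs, extrapolate a capability trend, record when it crosses a milestone — can be illustrated with a minimal Monte Carlo sketch. Every value below (the exponential trend form, the threshold, the rate distribution) is a hypothetical stand-in, not the AI Futures Project's actual model:

```python
import math
import random
import statistics

def sample_milestone_years(n_samples=10_000, start_year=2026.0,
                           horizon_now=1.0, threshold=200.0):
    """Toy Monte Carlo over milestone arrival years.

    Samples an uncertain doubling rate for an AI capability "time
    horizon" and records the year the exponential trend crosses a
    milestone threshold. All numbers are illustrative placeholders.
    """
    years = []
    for _ in range(n_samples):
        # Uncertain doublings per year (median ~2/year, lognormal spread).
        rate = random.lognormvariate(math.log(2.0), 0.4)
        # Solve horizon_now * 2**(rate * t) >= threshold for t.
        t = math.log2(threshold / horizon_now) / rate
        years.append(start_year + t)
    return years

years = sample_milestone_years()
deciles = statistics.quantiles(years, n=10)
print(f"median milestone year: {statistics.median(years):.1f}")
print(f"10th-90th percentile:  {deciles[0]:.1f} to {deciles[-1]:.1f}")
```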

Lifland's personal AGI timeline estimates have shifted as new evidence has emerged. His median TED-AI (a general intelligence milestone) forecast has followed this trajectory:[18]

  • 2021: ~2060
  • July 2022: ~2050
  • January 2024: ~2038
  • Mid-2024: ~2035
  • December 2024: ~2032
  • April 2025: ~2031
  • July 2025: ~2033
  • January 2026: ~2035

The AI Futures Project has emphasized that the AI 2027 scenario was never intended as a confident prediction that AGI would arrive in 2027, and that all team members maintain high uncertainty about when AGI and ASI will be built.[19] The December 2025 model update pushed timelines to full coding automation roughly 3-5 years later than the April 2025 AI 2027 forecast, a shift attributed primarily to more conservative modeling of pre-automation AI R&D speedups and to potential data bottlenecks.[20]

Forecasting Track Record

Lifland ranks #1 on the RAND Forecasting Initiative (CSET-Foretell/INFER) all-time leaderboard.[21] On GJOpen, his Brier score of 0.23 beats the site median of 0.301 (lower is better, giving a ratio of 0.76), and he placed 2nd in the Metaculus Economist 2021 tournament and 1st in the Salk Tournament as of September 2022.[22]
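
For reference, the Brier score on binary questions is the mean squared difference between forecast probabilities and realized outcomes, so 0.0 is perfect and an always-50% forecaster scores 0.25. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes (0 or 1)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: four binary questions and their resolutions.
probs = [0.9, 0.2, 0.7, 0.4]
truth = [1, 0, 1, 0]
print(brier_score(probs, truth))  # 0.075

# The ratio comparison cited above (below 1.0 = better than the median):
print(0.23 / 0.301)               # ~0.764
```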

As co-lead of the Samotsvety forecasting team (approximately 15 forecasters), Lifland helped guide the team to first-place finishes in the INFER competition in 2020, 2021, and 2022.[23] In 2020, Samotsvety placed 1st with a relative score of -0.912, versus -0.062 for 2nd place (more negative is better); in 2021, they again placed 1st with a relative score of -3.259, versus -0.889 for 2nd place. Samotsvety members hold positions 1 through 4 in INFER's all-time ranking, and some have achieved Superforecaster status.[24]

The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.[25]

Sage and AI Digest

Lifland co-founded Sage, an organization focused on building interactive AI explainers and forecasting tools.[26] One of Sage's key projects is AI Digest, which received $550,000 from Coefficient Giving for its work, with an additional $550,000 for forecasting projects.[27] The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.

Role in the AI Safety Community

Lifland is active in the AI safety and alignment communities, particularly through LessWrong and the Effective Altruism Forum. He serves as a mentor in the MATS Program, focusing on the Strategy & Forecasting and Policy & Governance streams.[28] He is also an executive producer of "Making God," a documentary exploring AGI risks.[29]

Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities.[30]

Criticisms and Controversies

Lifland's work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to LessWrong, the EA Forum, and Substack, the forecaster "titotal" described the model's fundamental structure as "highly questionable," with little empirical validation and poor justification for parameters such as superexponential time-horizon growth curves.[31] Titotal argued that models need strong conceptual and empirical justifications before influencing major decisions, characterizing AI 2027 as a "shoddy toy model stapled to a sci-fi short story" disguised as rigorous research.[32]
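
To see what the superexponential objection targets: an exponential trend doubles the time horizon on a fixed schedule, while a superexponential one shrinks each successive doubling time, so the curve reaches any fixed milestone much sooner. A toy illustration, with all parameter values invented for demonstration:

```python
def years_to_threshold(horizon, threshold, doubling_time, shrink=1.0):
    """Years until `horizon` reaches `threshold` when the k-th doubling
    takes `doubling_time * shrink**k` years; shrink < 1 is superexponential."""
    t, k = 0.0, 0
    while horizon < threshold:
        t += doubling_time * shrink ** k
        horizon *= 2
        k += 1
    return t

print(years_to_threshold(1, 200, 0.5))              # exponential: 4.0 years
print(years_to_threshold(1, 200, 0.5, shrink=0.8))  # superexponential: ~2.1 years
```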

Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions based on shaky forecasts.[33] Others counter that inaction on short timelines could be costlier if the forecasts prove accurate.[34]

Lifland responded to these criticisms by acknowledging errors and reviewing titotal's critique for factual accuracy. He agreed to changes in the model write-up and paid $500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.[35][36] The team released a detailed response explaining their reasoning more thoroughly, including their justification for the model's assumptions.[37]

Other criticisms include:

  • Lack of skeptic engagement: some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views[38]
  • Unverifiable predictions: concerns that some predictions are difficult to validate empirically[39]

Lifland has been forthright about forecast misses and has regularly updated his timelines as new evidence emerges.[40] No major personal controversies or ethical issues have been documented beyond these methodological debates.

Sources

Footnotes

  1. Samotsvety Track Record
  2. AI 2027 About Page
  3. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027
  4. Eli Lifland Personal Website
  5. Eli Lifland Google Scholar Profile
  6. AI Futures Project About Page
  7. AI Futures Project About Page
  8. AI 2027 About Page
  9. AI 2027 About Page
  10. AI 2027 Website
  11. ControlAI Newsletter - Future of AI Special Edition
  12. AI 2027 Website
  13. AI 2027 Website
  14. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027
  15. ControlAI Newsletter - Future of AI Special Edition
  16. CEPR Webinar - AI 2027 Scenario Forecast
  17. AI Futures Blog - Clarifying Timelines Forecasts
  18. Citation rc-9ca6 (data unavailable)
  19. AI Futures Blog - Clarifying Timelines Forecasts
  20. Marketing AI Institute - Moving Back AGI Timeline
  21. Samotsvety Track Record
  22. Samotsvety Track Record
  23. Samotsvety Track Record
  24. Samotsvety Track Record
  25. EA Forum - Samotsvety's AI Risk Forecasts
  26. Eli Lifland Personal Website
  27. Manifund - AI Digest Project
  28. MATS Program - Eli Lifland Mentor Profile
  29. EA Forum - Making God Documentary
  30. Eli Lifland Personal Website
  31. LessWrong - Deep Critique of AI 2027 Timeline Models
  32. LessWrong - Deep Critique of AI 2027 Timeline Models
  33. EA Forum - Practical Value of Flawed Models
  34. EA Forum - Practical Value of Flawed Models
  35. AI Futures Notes Substack - Response to Titotal Critique
  36. EA Forum - Practical Value of Flawed Models
  37. AI Futures Notes Substack - Response to Titotal Critique
  38. ControlAI Newsletter - Future of AI Special Edition
  39. AI 2027 Website
  40. AI Futures Blog - Clarifying Timelines Forecasts


Related Pages

Concepts: AGI Timeline · AGI Development · AI Scaling Laws

Organizations: Elicit (AI Research Tool) · OpenAI · ControlAI · Coefficient Giving · Open Philanthropy · Astralis Foundation

Key Debates: AI Risk Critical Uncertainties Model

Other: Philip Tetlock (Forecasting Pioneer) · Connor Leahy

Models: AI Capability Threshold Model · AI Risk Activation Timeline Model · AI-Bioweapons Timeline Model

Approaches: AI-Augmented Forecasting · Prediction Markets (AI Forecasting)