Existential Risk Persuasion Tournament (XPT) Results
forecastingresearch.org/xpt
A key empirical reference for quantitative existential risk estimates; frequently cited in AI safety discourse to contextualize AI risk relative to other global catastrophic threats.
Metadata
Importance: 72/100 · organizational report · dataset
Summary
The Existential Risk Persuasion Tournament (XPT) aggregated probabilistic forecasts from 169 participants—including domain experts, forecasting specialists, and superforecasters—on humanity's extinction risks by 2100. The tournament examined threats including AI, nuclear war, engineered pandemics, and other catastrophic risks, using structured deliberation and persuasion rounds to update estimates. It provides one of the most systematic crowd-sourced quantitative assessments of existential risk probabilities available.
Key Points
- 169 participants produced probabilistic forecasts on human extinction and civilization collapse risks by 2100 across multiple threat categories.
- AI was identified as a major concern, with participants assigning notable probability to AI-related existential catastrophe.
- The tournament used persuasion rounds in which participants could update their forecasts after reviewing arguments, testing whether expert deliberation shifts risk estimates.
- Results offer rare quantitative benchmarks for comparing AI risk against other existential threats such as nuclear war and engineered pandemics.
- Findings are relevant to prioritization decisions in AI safety and in global catastrophic risk funding and policy.
Review
The XPT represents an innovative approach to understanding complex existential risks by bringing together accurate forecasters and domain experts in a structured, collaborative prediction environment. By incentivizing participants to discuss, explain, and update their forecasts, the tournament aimed to generate high-quality insights into potential catastrophic scenarios facing humanity in the next century. The methodology's key strength lies in its interactive format, which allows participants to engage directly with opposing perspectives and refine their predictions through structured dialogue. Of particular interest are the observed differences between superforecasters and domain experts, especially on the likelihood of catastrophic outcomes: superforecasters gave markedly lower estimates of extreme risks than experts did, despite agreeing on many underlying questions, and their estimates moved little after hearing the experts' arguments. This approach provides a novel framework for exploring how expertise, forecasting skill, and interdisciplinary knowledge interact when assessing long-term global risks.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| Forecasting Research Institute (FRI) | Organization | 55.0 |
| XPT (Existential Risk Persuasion Tournament) | Project | 54.0 |
| Instrumental Convergence | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 3 KB
XPT — Forecasting Research Institute

The Existential Risk Persuasion Tournament (XPT): 2022 Tournament

How likely is it that humanity will go extinct by 2100? Which is more dangerous, AI or nuclear war? How successful will we be at developing alternative energy sources? Will we be able to control emerging pandemics? In the Existential Risk Persuasion Tournament (XPT), we asked 169 participants—including accurate forecasters and experts on existential risks—to help us understand how people make forecasts about questions like these. We incentivized them to talk to each other, explain their reasoning, and update their forecasts, working individually and in teams to come up with high-quality forecasts and rationales about the risks humanity faces in the next century. We discovered points where our participants agree and disagree on what they expect to happen to humanity in the coming decades, on topics related to AI, nuclear war, biological pathogens, and other dangers. We were also left with some puzzling questions. Why are superforecasters (people who have been accurate about short-run forecasts in the past) so much less worried about catastrophic outcomes than experts, even though they agree on many other questions? Why didn't they get more worried when they heard the experts explaining their positions? The XPT is a first step toward understanding how expert knowledge and forecasting skill are related, and how people do (or don't) learn from one another when they make forecasts about important long-term questions. Read the full report.

Future work

The 2022 study was the first in a series of Existential Risk Persuasion Tournaments we plan to run over the coming years with this set of forecasters.
Establishing this longitudinal data will allow us to track changes in forecasters’ perceptions over time, giving them a chance to update both over a short timescale, through discussion within the tournament, and a longer one, across years of following real-world events. Subsequent tournaments will also feature refinements of our methods, learned both from previous iterations of such tournaments and from our other complementary projects.
Resource ID: 5c91c25b0c337e1b | Stable ID: sid_Uwu1U4KZJn