Forecasting Research Institute
Web: forecastingresearch.org · Research: forecastingresearch.org/research
FRI's research on forecasting methodology is a useful reference for AI safety researchers interested in how to rigorously quantify and communicate uncertainty about AI risks and transformative AI timelines.
Metadata
Importance: 52/100 · homepage
Summary
The Forecasting Research Institute (FRI) conducts empirical research on forecasting methodologies, judgment aggregation, and the use of prediction markets and expert elicitation to improve decision-making under uncertainty. Their work is particularly relevant to AI safety and governance insofar as it informs how we assess and communicate risks from emerging technologies. FRI aims to make forecasting tools more rigorous and widely applicable to high-stakes domains.
Key Points
- FRI studies how forecasting and prediction aggregation methods can improve accuracy in complex, high-stakes domains including AI risk.
- Research covers judgment aggregation, superforecasting, and structured expert elicitation to reduce uncertainty in long-horizon predictions.
- Their work informs AI governance efforts by providing better tools for estimating probabilities of transformative or catastrophic AI outcomes.
- FRI collaborates with policy and research communities to translate forecasting insights into actionable guidance.
- The institute's output is relevant to operationalizing AI risk estimates used in safety roadmaps and governance frameworks.
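To make "judgment aggregation" concrete: one standard pooling method from the forecasting literature is the geometric mean of odds. This is a minimal sketch of that general technique, not a description of FRI's specific methodology; the function name and example numbers are illustrative.

```python
import math

def pool_geo_mean_odds(probs):
    """Pool probability forecasts via the geometric mean of odds.

    A common aggregation rule in the forecasting literature
    (illustrative only; not necessarily what FRI uses).
    """
    # Convert each probability to odds, take the geometric mean,
    # then map the pooled odds back to a probability.
    odds = [p / (1 - p) for p in probs]
    geo_mean = math.prod(odds) ** (1 / len(odds))
    return geo_mean / (1 + geo_mean)

# Three hypothetical forecasters give 0.1, 0.2, and 0.4 for one event.
pooled = pool_geo_mean_odds([0.1, 0.2, 0.4])  # ≈ 0.209
```

Unlike a simple arithmetic mean of probabilities (here ≈ 0.233), pooling in odds space is less dominated by the most extreme forecast, which is one reason it is popular for aggregating expert judgments.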
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Instrumental Convergence | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 9 KB
Research — Forecasting Research Institute
Forecasting the Economic Effects of AI
In this working paper, we elicit forecasts on the economic effects of AI from academic economists, AI experts, highly accurate forecasters, and the general public.
While all groups expect significant advancement in AI capabilities, they do not anticipate that GDP, productivity, or labor force participation will deviate much from historical trends. However, conditional on a 'rapid' AI progress scenario, economists forecast substantial economic shifts, such as GDP growth rising to 3.5% and labor force participation falling from its current level of 62.6% to 55.0% by 2050.
Find out more here.
The Longitudinal Expert AI Panel (LEAP)
LEAP is a three-year project tracking the views of leading computer scientists, AI industry professionals, policy researchers, and economists on the trajectory of artificial intelligence. Every month, LEAP participants provide thousands of forecasts on key AI progress indicators including benchmarks, labor market impacts, and scientific discovery.
For more about LEAP, and to view reports from each month of surveys and analysis of every question, visit the LEAP website.
ForecastBench
ForecastBench is a dynamic, contamination-free benchmark of large language model (LLM) forecasting accuracy. The benchmark compares the performance of LLMs to both the general public and superforecasters, and it serves as a valuable proxy for general intelligence. Originally launched in September 2024, ForecastBench received a major update in October 2025 and is now open to public submissions.
For more about ForecastBench and to see the latest leaderboard, visit www.forecastbench.org.
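Benchmarks like ForecastBench typically score probabilistic forecasts against resolved binary outcomes with a proper scoring rule such as the Brier score. The sketch below shows that standard metric; it is an assumption for illustration, not ForecastBench's published scoring code.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better; always guessing 0.5 scores 0.25.
    (Standard metric for forecast accuracy; illustrative, not
    ForecastBench's actual implementation.)
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical forecaster gave 0.9, 0.3, 0.6 on three resolved questions
# that resolved yes, no, yes.
score = brier_score([0.9, 0.3, 0.6], [1, 0, 1])  # ≈ 0.087
```

Because the Brier score is a proper scoring rule, a forecaster minimizes expected loss by reporting their true probabilities, which is what makes it suitable for comparing LLMs, the general public, and superforecasters on the same questions.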
Assessing Near-Term Accuracy in the Existential Risk Persuasion Tournament
This report assesses the accuracy of short-term forecasts made during the Existential Risk Persuasion Tournament (XPT), a 2022 study that convened 169 superforecasters and domain experts to make predictions on long-term risks including AI, climate change, nuclear war, and pandemics.
Find more about near-term accuracy in the XPT here.
Forecasting Biosecurity Risks from LLMs
This forecasting study on biological risks from large language models (LLMs) examined expert views on AI-enabled biosecurity threats. The study saw 46
... (truncated, 9 KB total)
Resource ID: bcb075f246413790 | Stable ID: sid_kyqEMnruVi