Epistemic & Forecasting Organizations
Page Status
Quality: 70 (Good) · Importance: 72 (High) · Last edited: 2026-01-29
Overview
This section covers organizations focused on improving forecasting, epistemic tools, and quantitative reasoning—particularly as applied to AI safety and existential risk assessment. These organizations provide critical infrastructure for understanding AI timelines, evaluating interventions, and making better decisions under uncertainty.
Key Organizations
| Organization | Focus | Key Products/Projects |
|---|---|---|
| Epoch AI | AI trends research & compute tracking | ML Trends Database, Parameter Counts, Training Compute Estimates |
| Metaculus | Prediction aggregation platform | AI Forecasting, AGI Timeline Questions, Tournaments |
| Forecasting Research Institute | Forecasting methodology research | XPT Tournament, ForecastBench, Superforecaster Studies |
| QURI | Epistemic tools development | Squiggle Language, Squiggle Hub, Metaforecast |
| Manifold | Prediction markets platform | AI Markets, Manifest Conference, Manifund |
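Platforms such as Metaculus and Manifold combine many individual forecasts into a single community estimate. As an illustrative sketch only (not any platform's actual algorithm), one common pooling method is the geometric mean of odds: each probability is converted to odds, the odds are averaged geometrically, and the result is mapped back to a probability.

```python
from math import prod

def pool_geometric_odds(probs):
    """Aggregate probability forecasts via the geometric mean of odds.

    Each probability p is converted to odds p / (1 - p); the geometric
    mean of those odds is then mapped back to a probability. Compared
    with a simple arithmetic mean of probabilities, this pooling gives
    more weight to confident (extreme) forecasts.
    """
    odds = [p / (1 - p) for p in probs]
    pooled_odds = prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

# Three hypothetical forecasters with differing views on one question:
print(pool_geometric_odds([0.2, 0.5, 0.8]))  # symmetric views pool to 0.5
```

Here the 0.2 and 0.8 forecasts have reciprocal odds (0.25 and 4), so they cancel and the pooled estimate equals the middle forecaster's 0.5.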
Why Epistemic Infrastructure Matters for AI Safety
Forecasting and epistemic tools are essential for AI safety because:
- Timeline Uncertainty: AI development trajectories are highly uncertain; better forecasting helps allocate resources appropriately
- Intervention Evaluation: Quantifying the expected impact of safety interventions requires probabilistic reasoning tools
- Early Warning: Prediction markets and forecasting platforms can provide early signals about concerning developments
- Decision Support: Policymakers and researchers need calibrated uncertainty estimates, not false precision
- Accountability: Track records create feedback loops that improve institutional decision-making
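The accountability point above depends on scoring forecasts against realized outcomes. A standard scoring rule is the Brier score, the mean squared error between probability forecasts and 0/1 outcomes. A minimal sketch, using hypothetical track-record data:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is perfect; always answering 50% scores 0.25; lower is better.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records for two forecasters on the same five questions:
outcomes  = [1, 0, 1, 1, 0]
calibrated = [0.9, 0.1, 0.8, 0.7, 0.2]  # confident and mostly right
uninformed = [0.5, 0.5, 0.5, 0.5, 0.5]  # always 50%

print(brier_score(calibrated, outcomes))  # well below 0.25 (better)
print(brier_score(uninformed, outcomes))  # exactly 0.25
```

Because the Brier score is a proper scoring rule, a forecaster minimizes their expected score by reporting their true credence, which is what makes such track records a meaningful feedback loop.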
Related Resources
- AGI Timeline - Forecasts on when transformative AI may arrive
- Alignment Evaluations - Methods for assessing AI safety