A self-referential documentation page describing the Longterm Wiki platform itself—a strategic intelligence tool with ~550 pages, crux mapping of ~50 uncertainties, and quality scoring across 6 dimensions. Features include entity cross-linking, interactive causal diagrams, and a structured YAML database layer for entity data.
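The YAML entity layer mentioned above can be pictured as records like the following. This is an illustrative sketch only: the field names (`id`, `type`, `related`, etc.) are assumptions, not the wiki's actual schema.

```yaml
# Hypothetical entity record for the wiki's YAML database layer.
# Field names are illustrative assumptions, not the real schema.
id: quri
type: organization
name: QURI (Quantified Uncertainty Research Institute)
founded: 2019
related:
  - squiggle
  - metaforecast
  - guesstimate
```

A layer like this lets pages cross-link entities by stable `id` rather than by display name, so renames don't break links.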
Organizations
QURI (Quantified Uncertainty Research Institute) | Nonprofit research organization developing tools for probabilistic reasoning, forecasting, and epistemic infrastructure. Key projects include Squiggle (probabilistic programming language), Squiggle Hub (model sharing platform), Metaforecast (forecast aggregation), SquiggleAI (LLM-powered estimation), RoastMyPost (LLM-powered content evaluation), and Guesstimate (spreadsheet for distributions). Founded in 2019 by Ozzie Gooen, evolved from earlier Guesstimate work (2016). Based in Berkeley, CA; primarily remote team of ~3-5 core contributors. Fiscally sponsored by Rethink Priorities. Funded by Survival and Flourishing Fund ($650K through 2022), Future Fund ($200K, 2022), and Long-Term Future Fund (ongoing). EIN 84-3847921.
People
Ozzie Gooen | Founder and Executive Director of QURI (Quantified Uncertainty Research Institute). Created Guesstimate (2016) and leads development of Squiggle, a probabilistic programming language for estimation. Previously worked at the Future of Humanity Institute at Oxford. Background in programming and research focused on epistemic tools, forecasting infrastructure, and uncertainty quantification.
Related Projects
Grokipedia | xAI's AI-generated encyclopedia launched October 2025, growing to 6M+ articles with documented quality concerns including political bias and scientific inaccuracies.
Squiggle | Domain-specific programming language for probabilistic estimation with native distribution types and Monte Carlo sampling.
Metaforecast | Forecast aggregation platform combining predictions from 10+ sources into a unified search interface.
SquiggleAI | LLM-powered tool for generating probabilistic models in Squiggle from natural language descriptions.
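The kind of estimation Squiggle supports natively (distributions as first-class values, resolved by Monte Carlo sampling) can be approximated in plain Python. The sketch below is illustrative only: the model (hours saved × hourly rate) and its distribution parameters are made up for this example, not drawn from any actual Squiggle or QURI model.

```python
import random

def estimate_value(n=10_000):
    """Monte Carlo estimate of an uncertain product of two quantities.

    Roughly analogous to a Squiggle model mixing a lognormal and a
    normal distribution; parameters here are arbitrary illustrations.
    """
    random.seed(0)  # fixed seed so the sketch is reproducible
    samples = []
    for _ in range(n):
        hours = random.lognormvariate(3.0, 0.5)  # uncertain hours saved
        rate = random.normalvariate(50, 10)      # uncertain hourly rate ($)
        samples.append(hours * rate)
    samples.sort()
    # Report a credible interval instead of a point estimate.
    return {
        "p05": samples[int(0.05 * n)],
        "p50": samples[int(0.50 * n)],
        "p95": samples[int(0.95 * n)],
    }

print(estimate_value())
```

The point of the distribution-native approach is that the output is an interval (p05–p95), not a single number, making the uncertainty explicit to downstream readers.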
Related Wiki Pages
Top Related Pages
Risk
Scheming
AI scheming—strategic deception during training to pursue hidden goals—has been demonstrated to emerge in frontier models.
Concept
About This Wiki
Technical documentation of the Longterm Wiki - how pages are generated, maintained, and organized
Risk
Deceptive Alignment
Risk that AI systems appear aligned during training but pursue different goals when deployed, with expert probability estimates ranging 5-90% and g...
Policy
AI Safety Institutes (AISIs)
Government-affiliated technical institutions evaluating frontier AI systems, with the UK/US institutes having secured pre-deployment access to mode...
Approach
Reducing Hallucinations in AI-Generated Wiki Content
Technical and procedural strategies to ground AI-generated content in verified information and reduce factual errors in wiki articles
Clusters
epistemics, community