LLM Summary: A self-referential documentation page describing the Longterm Wiki platform itself—a strategic intelligence tool with ~550 pages, crux mapping of ~50 uncertainties, and quality scoring across 6 dimensions. Features include entity cross-linking, interactive causal diagrams, and structured YAML databases tracking expert positions on key AI safety cruxes.
The Longterm Wiki is a strategic intelligence platform for AI safety prioritization. Unlike general encyclopedias or community wikis, it serves as a decision-support tool for funders, researchers, and policymakers asking: “Where should the next marginal dollar or researcher-hour go?”
The project addresses four problems in the AI safety field:
| Problem | How the Wiki Addresses It |
| --- | --- |
| Fragmented knowledge | Consolidated, cross-linked knowledge base with ≈550 pages |
| Unclear cruxes | Explicit mapping of key uncertainties and expert disagreements |
| Poor prioritization legibility | Worldview → intervention mapping showing how assumptions lead to priorities (sketched below) |
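As a rough illustration of what a worldview → intervention mapping could look like in data, here is a minimal sketch. The type, field names, and the sample entry are assumptions for illustration, not the wiki's actual schema.

```typescript
// Hypothetical shape for a worldview -> intervention mapping entry.
// Field names and sample values are illustrative, not the wiki's real schema.
interface WorldviewMapping {
  worldview: string;           // short label for the worldview
  keyAssumptions: string[];    // claims this worldview takes as likely true
  impliedPriorities: string[]; // interventions this worldview favors, highest first
}

const example: WorldviewMapping = {
  worldview: "Short timelines, alignment is hard",
  keyAssumptions: [
    "Transformative AI within ~10 years",
    "Current alignment techniques do not scale",
  ],
  impliedPriorities: ["Interpretability research", "Compute governance"],
};

// A reader could filter mappings by which assumptions they accept.
function prioritiesFor(accepted: string[], mappings: WorldviewMapping[]): string[] {
  return mappings
    .filter(m => m.keyAssumptions.every(a => accepted.includes(a)))
    .flatMap(m => m.impliedPriorities);
}
```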
The wiki is deliberately opinionated about importance and uncertainty—it rates content quality, tracks expert positions on cruxes, and makes prioritization implications explicit. This distinguishes it from neutral reference works like Wikipedia or discussion platforms like LessWrong.
Content is editorially curated rather than community-contributed, ensuring consistency and quality control. Each page goes through a grading pipeline that scores novelty, rigor, actionability, and completeness.
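A minimal sketch of how a grading pipeline could combine those four dimensions into an overall score follows; the 0-100 scale, the weights, and the formula are illustrative assumptions, not the wiki's actual scoring method.

```typescript
// Hypothetical page-quality record. Dimension names follow the text above,
// but the weights and the weighted-average formula are assumptions.
interface QualityScores {
  novelty: number;       // each dimension assumed to be scored 0-100
  rigor: number;
  actionability: number;
  completeness: number;
}

const WEIGHTS: Record<keyof QualityScores, number> = {
  novelty: 0.2,
  rigor: 0.35,
  actionability: 0.25,
  completeness: 0.2,
};

// Weighted average of the dimension scores -> overall 0-100 quality score.
function overallQuality(s: QualityScores): number {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dim, w]) => sum + w * s[dim as keyof QualityScores],
    0,
  );
}

// overallQuality({ novelty: 60, rigor: 80, actionability: 70, completeness: 65 })
// -> 70.5 under these assumed weights
```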
For example, the Deceptive Alignment page is currently rated 75/100 and the AI Safety Institutes page 69/100.
| Feature | What It Provides | Example Use |
| --- | --- | --- |
| AI Transition Model | Comprehensive factor network with outcomes and scenarios | Compare interpretability vs. governance approaches |
| Crux identification | Crux mapping shows which uncertainties matter most | Which assumptions drive different funding priorities? (see the sketch below) |
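The summary above mentions structured YAML databases tracking expert positions on cruxes. Below is a minimal sketch of what one crux record and a simple disagreement measure could look like, written as TypeScript for consistency with the other examples; the field names are assumptions, and the 5% to 90% spread echoes the range quoted for the Deceptive Alignment page.

```typescript
// Hypothetical crux record, mirroring the idea of a YAML entry that tracks
// expert positions. Field names and sample values are illustrative.
interface CruxRecord {
  id: string;
  question: string;
  positions: { expert: string; probability: number }[]; // each expert's P(claim is true)
}

const deceptiveAlignment: CruxRecord = {
  id: "deceptive-alignment-by-default",
  question: "Will advanced systems be deceptively aligned by default?",
  positions: [
    { expert: "Expert A", probability: 0.05 },
    { expert: "Expert B", probability: 0.9 },
  ],
};

// One way a crux map could rank uncertainties: a wider spread between the most
// optimistic and most pessimistic expert signals more disagreement.
function disagreementSpread(c: CruxRecord): number {
  const ps = c.positions.map(p => p.probability);
  return Math.max(...ps) - Math.min(...ps);
}
```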
The wiki also has known limitations:
- Some limits are intentional, with links to broader resources.
- Early stage, incomplete coverage: under active development, with prioritized expansion.
- No real-time data (forecasts are static): links to Metaforecast for live data.
Related QURI tools and potential integrations:

| Tool | Relationship to the Longterm Wiki |
| --- | --- |
| Squiggle / SquiggleAI (QURI's probabilistic estimation language and its LLM-assisted model generator) | LW models could be converted to executable Squiggle estimates (sketched below) |
| Metaforecast (QURI's forecast aggregation platform) | LW links to relevant forecasts as evidence for claims |
| Squiggle Hub | Potential future integration for interactive models embedded in pages |
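To make the Squiggle conversion idea concrete, here is a sketch that emits a tiny Squiggle program from a structured wiki estimate, using Squiggle's `low to high` interval syntax and building the program as a string in TypeScript to keep one language across these examples. The record shape, names, and values are illustrative assumptions rather than an existing converter.

```typescript
// Hypothetical structured estimate as it might live in a wiki data file.
interface WikiEstimate {
  name: string; // Squiggle identifier to emit
  low: number;  // lower bound of a 90% credible interval
  high: number; // upper bound
}

// Emit a small Squiggle program: `low to high` denotes a distribution
// whose 90% interval spans those bounds.
function toSquiggle(estimates: WikiEstimate[]): string {
  return estimates
    .map(e => `${e.name} = ${e.low} to ${e.high}`)
    .join("\n");
}

// Example output:
//   annualRiskReduction = 0.001 to 0.02
//   costPerResearcherYear = 150000 to 400000
console.log(toSquiggle([
  { name: "annualRiskReduction", low: 0.001, high: 0.02 },
  { name: "costPerResearcherYear", low: 150000, high: 400000 },
]));
```

Emitting plain Squiggle source, rather than calling any particular runtime, keeps the sketch independent of how such models would eventually be executed or embedded in pages.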