Longterm Wiki
Updated 2026-02-27

Hallucination Risk

Per-page hallucination risk scores, based on citation density, claim type, and source quality. High-risk pages have many unsourced claims or rely on low-quality sources; prioritize these for citation audits. Run pnpm crux query risk --level=high for the CLI equivalent.

Risk scores are computed from citation density, entity type, quality score, content integrity, and other signals. 734 pages assessed; 108 are high risk.
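The canonical scoring logic lives in crux/lib/hallucination-risk.ts, which is not shown here. As a rough illustration of how such signals might combine into a 0-100 score, the following is a hypothetical sketch; the field names, weights, and thresholds are assumptions, not the real implementation.

```typescript
// Hypothetical risk-signal inputs for one page. The real scorer in
// crux/lib/hallucination-risk.ts may use different fields and weights.
interface PageSignals {
  citationDensity: number;      // citations per 100 words
  externalSources: number;      // count of distinct external sources
  qualityScore: number | null;  // 0-100 page quality score, null if unassessed
  biographical: boolean;        // page makes biographical claims
}

// Each unmet signal adds risk; the total is clamped to 0-100.
// Comments name the risk factor each branch corresponds to.
function riskScore(s: PageSignals): number {
  let score = 0;
  if (s.citationDensity === 0) score += 40;       // no-citations
  else if (s.citationDensity < 1) score += 20;    // low-citation-density
  if (s.externalSources < 3) score += 20;         // few-external-sources
  if (s.qualityScore !== null && s.qualityScore < 40) score += 15; // low-quality-score
  if (s.biographical) score += 15;                // biographical-claims
  return Math.min(score, 100);
}
```

Under these invented weights, an uncited biographical page with quality 37 and no external sources would score 90; the dashboard's actual numbers come from the canonical scorer.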

Total Assessed: 734
High Risk: 108
Medium Risk: 486
Low Risk: 140
Avg Score: 48

Most Common Risk Factors

no-citations: 426
few-external-sources: 241
biographical-claims: 169
conceptual-content: 140
low-quality-score: 131
high-rigor: 115
well-cited: 115
minimal-content: 91
low-rigor-score: 58
moderately-cited: 51
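Counts like the ones above can be produced by tallying each page's factor list. A minimal sketch, assuming a simple per-page assessment record (the `Assessment` shape here is invented, not the crux schema):

```typescript
// One assessed page and the risk factors flagged for it.
interface Assessment {
  page: string;
  factors: string[];
}

// Count how many pages exhibit each risk factor.
function tallyFactors(rows: Assessment[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const row of rows) {
    for (const f of row.factors) {
      counts.set(f, (counts.get(f) ?? 0) + 1);
    }
  }
  return counts;
}
```

Sorting the resulting entries by count descending yields the "Most Common Risk Factors" ordering shown above.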

Risk Distribution

High: 15% · Medium: 66% · Low: 19%
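The distribution buckets scores into high/medium/low bands. The cutoffs below (70 and 40) are guesses: this page only lists scores of 65 and above, so the true medium/low boundary cannot be read off the data, and the canonical thresholds live in the scorer.

```typescript
// Assumed banding; the real cutoffs in crux/lib/hallucination-risk.ts
// may differ.
function bucket(score: number): "high" | "medium" | "low" {
  if (score >= 70) return "high";
  if (score >= 40) return "medium";
  return "low";
}

// Count pages per band; percentages follow by dividing by scores.length.
function distribution(scores: number[]): { high: number; medium: number; low: number } {
  const counts = { high: 0, medium: 0, low: 0 };
  for (const s of scores) counts[bucket(s)]++;
  return counts;
}
```

With the dashboard's totals (108/486/140 of 734), the bands work out to roughly 15%, 66%, and 19%, matching the figures shown.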
734 results

Page | Score | Level | Type | Quality | Factors
The MIRI Era (2000-2015) | 90 | high | historical | 31 | specific-factual-claims, no-citations, low-rigor-score, +2 more
ASML | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes
Bureau of Industry and Security | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes
Center for Human-Compatible AI (CHAI) | 85 | high | organization | 37 | biographical-claims, no-citations, low-quality-score, +1 more
Conjecture | 85 | high | organization | 37 | biographical-claims, no-citations, low-quality-score, +1 more
Ford Foundation | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes
Freedom House | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes
Paul Christiano | 85 | high | person | 39 | biographical-claims, no-citations, low-quality-score, +1 more
Yoshua Bengio | 85 | high | person | 39 | biographical-claims, no-citations, low-quality-score, +1 more
Rogue AI Scenarios | 85 | high | risk | 55 | low-citation-density, few-external-sources, severe-truncation
Large Language Models | 80 | high | capability | 60 | low-citation-density, severe-truncation
Ada Lovelace Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
AI Now Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
AI Policy Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
Americans for Responsible Innovation | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
Brookings Institution AI and Emerging Technology Initiative | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
Carnegie Endowment for International Peace — AI Program | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
Center for Democracy and Technology | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
CSIS Wadhwani Center for AI and Advanced Technologies | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
Google DeepMind | 80 | high | organization | 37 | biographical-claims, low-citation-density, low-quality-score, +1 more
Partnership on AI | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
RAND Corporation AI Policy Research | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
Stanford HAI (Human-Centered Artificial Intelligence) | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
The Foundation Layer | 80 | high | organization | 3 | biographical-claims, no-citations, low-quality-score
The Future Society | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources
UK AI Safety Institute | 80 | high | organization | 52 | biographical-claims, no-citations, few-external-sources
xAI | 80 | high | organization | 48 | biographical-claims, few-external-sources, well-cited, +1 more
Dario Amodei | 80 | high | person | 41 | biographical-claims, no-citations, few-external-sources
Geoffrey Hinton | 80 | high | person | 42 | biographical-claims, no-citations, few-external-sources
Holden Karnofsky | 80 | high | person | 40 | biographical-claims, no-citations, few-external-sources
Jared Kaplan | 80 | high | person | - | biographical-claims, low-citation-density, mostly-unsourced-footnotes
Toby Ord | 80 | high | person | 41 | biographical-claims, no-citations, few-external-sources
Common Writing Principles | 80 | high | internal | 0 | low-quality-score, few-external-sources, severe-truncation
Early Warnings (1950s-2000) | 75 | high | historical | 31 | specific-factual-claims, low-rigor-score, low-quality-score, +1 more
80,000 Hours | 75 | high | organization | 45 | biographical-claims, no-citations
AI Futures Project | 75 | high | organization | 50 | biographical-claims, no-citations
Apollo Research | 75 | high | organization | 58 | biographical-claims, no-citations
Advanced Research and Invention Agency (ARIA) | 75 | high | organization | - | biographical-claims, no-citations
Astralis Foundation | 75 | high | organization | 30 | biographical-claims, low-rigor-score, low-quality-score
Center for AI Safety (CAIS) | 75 | high | organization | 42 | biographical-claims, no-citations
Carnegie Endowment for International Peace | 75 | high | organization | - | biographical-claims, no-citations
Center for AI Policy | 75 | high | organization | - | biographical-claims, no-citations
Center for a New American Security (CNAS) | 75 | high | organization | - | biographical-claims, no-citations
Coefficient Giving | 75 | high | organization | 55 | biographical-claims, no-citations
CSET (Center for Security and Emerging Technology) | 75 | high | organization | 43 | biographical-claims, no-citations
Epoch AI | 75 | high | organization | 51 | biographical-claims, no-citations
Future of Humanity Institute (FHI) | 75 | high | organization | 51 | biographical-claims, no-citations
Future of Life Institute (FLI) | 75 | high | organization | 46 | biographical-claims, no-citations
Forecasting Research Institute | 75 | high | organization | 55 | biographical-claims, no-citations
GovAI | 75 | high | organization | 43 | biographical-claims, no-citations
LessWrong | 75 | high | organization | 44 | biographical-claims, no-citations
Longview Philanthropy | 75 | high | organization | 45 | biographical-claims, no-citations
Long-Term Future Fund (LTFF) | 75 | high | organization | 56 | biographical-claims, no-citations
Manifold (Prediction Market) | 75 | high | organization | 43 | biographical-claims, no-citations
Manifund | 75 | high | organization | 50 | biographical-claims, no-citations
Meta AI (FAIR) | 75 | high | organization | 51 | biographical-claims, no-citations
Metaculus | 75 | high | organization | 50 | biographical-claims, no-citations
Machine Intelligence Research Institute (MIRI) | 75 | high | organization | 50 | biographical-claims, no-citations
QURI (Quantified Uncertainty Research Institute) | 75 | high | organization | 48 | biographical-claims, no-citations
Survival and Flourishing Fund (SFF) | 75 | high | organization | 59 | biographical-claims, no-citations
Vitalik Buterin (Funder) | 75 | high | organization | 45 | biographical-claims, no-citations
Ajeya Cotra | 75 | high | person | 55 | biographical-claims, no-citations
Demis Hassabis | 75 | high | person | 45 | biographical-claims, no-citations
Evan Hubinger | 75 | high | person | 43 | biographical-claims, no-citations
Helen Toner | 75 | high | person | 43 | biographical-claims, no-citations
Sam Altman | 75 | high | person | 40 | biographical-claims, no-citations
Tom Brown | 75 | high | person | - | biographical-claims, few-external-sources, mostly-unsourced-footnotes
Yann LeCun | 75 | high | person | 41 | biographical-claims, no-citations
AI-Induced Cyber Psychosis | 75 | high | risk | 37 | no-citations, low-rigor-score, low-quality-score, +1 more
LongtermWiki Strategy Brainstorm | 75 | high | internal | 4 | no-citations, low-rigor-score, low-quality-score, +1 more
LongtermWiki Vision | 75 | high | internal | 2 | no-citations, low-rigor-score, low-quality-score, +1 more
LongtermWiki Value Proposition | 75 | high | internal | 4 | no-citations, low-rigor-score, low-quality-score, +1 more
Parameters Strategy | 75 | high | internal | 3 | no-citations, low-rigor-score, low-quality-score, +1 more
Project Roadmap | 75 | high | internal | 29 | no-citations, low-rigor-score, low-quality-score, +1 more
Agentic AI | 70 | high | capability | 68 | severe-truncation
Why Alignment Might Be Hard | 70 | high | argument | 69 | low-citation-density, conceptual-content, severe-truncation
Mainstream Era (2020-Present) | 70 | high | historical | 42 | specific-factual-claims, no-citations
Political Stability as an AI Safety Factor | 70 | high | analysis | - | no-citations, few-external-sources, mostly-unsourced-footnotes
Expected Value of AI Safety Research | 70 | high | analysis | 60 | no-citations, low-rigor-score, few-external-sources
Community Building Organizations (Overview) | 70 | high | - | 35 | no-citations, low-rigor-score, low-quality-score
US AI Safety Institute (now CAISI) | 70 | high | organization | 91 | biographical-claims, no-citations, high-quality
Ilya Sutskever | 70 | high | person | 26 | biographical-claims, low-rigor-score, low-quality-score, +2 more
AI Watch | 70 | high | project | 23 | no-citations, low-rigor-score, low-quality-score
FlexHEG (Flexible Hardware-Enabled Guarantees) | 70 | high | project | - | no-citations, few-external-sources, mostly-unsourced-footnotes
International AI Safety Summits | 70 | high | event | 63 | specific-factual-claims, no-citations
Org Watch | 70 | high | project | 23 | low-citation-density, low-rigor-score, low-quality-score, +1 more
Stop Stealing Our Chips Act | 70 | high | policy | - | no-citations, few-external-sources, mostly-unsourced-footnotes
AI-Enabled Political Polarization | 70 | high | risk | - | no-citations, few-external-sources, mostly-unsourced-footnotes
Epistemic Risks (Overview) | 70 | high | - | 37 | no-citations, low-rigor-score, low-quality-score
Epistemic Systemic Risk | 70 | high | risk | - | no-citations, few-external-sources, mostly-unsourced-footnotes
Structural Risks (Overview) | 70 | high | - | 37 | no-citations, low-rigor-score, low-quality-score
Factor Diagram Naming: Research Report | 70 | high | - | 31 | no-citations, low-rigor-score, low-quality-score
Quantitative Claims | 70 | high | - | 28 | no-citations, low-rigor-score, low-quality-score
Longtermist Funders (Overview) | 65 | high | - | 3 | no-citations, low-quality-score, few-external-sources
Gratified | 65 | high | organization | 25 | biographical-claims, low-rigor-score, low-quality-score, +1 more
TSMC | 65 | high | organization | - | biographical-claims, few-external-sources, moderately-cited, +1 more
Nick Bostrom | 65 | high | person | 25 | biographical-claims, low-rigor-score, low-quality-score, +2 more
Paris AI Action Summit (February 2025) | 65 | high | policy | - | no-citations, mostly-unsourced-footnotes
Executive Order 14179: Removing Barriers to American Leadership in AI | 65 | high | policy | - | no-citations, mostly-unsourced-footnotes
Accident Risks (Overview) | 65 | high | - | 42 | no-citations, low-rigor-score
Showing 1-100 of 734 · Page 1 of 8


Scores are computed at build time by the canonical scorer (crux/lib/hallucination-risk.ts). Run pnpm crux validate hallucination-risk for a CLI report.