# Hallucination Risk
Per-page hallucination risk scores based on citation density, claim type, and source quality. Pages flagged high-risk contain many unsourced claims or rely on low-quality sources; prioritize these for citation audits. Run `pnpm crux query risk --level=high` for the CLI equivalent.

Risk scores are computed from citation density, entity type, quality, content integrity, and other signals. 768 pages were assessed; 118 are high risk.
| Total Assessed | High Risk | Medium Risk | Low Risk | Avg Score |
|---|---|---|---|---|
| 768 | 118 | 500 | 150 | 49 |
Most Common Risk Factors:
- no-citations: 468
- few-external-sources: 255
- biographical-claims: 184
- conceptual-content: 143
- low-quality-score: 130
- high-rigor: 115
- well-cited: 109
- minimal-content: 95
- low-rigor-score: 55
- structured-format: 48
Risk Distribution: High 15%, Medium 65%, Low 20%
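The signals described above (citation density, entity type, quality, and related factors rolled into a 0–100 score with high/medium/low bands) could be sketched roughly as follows. This is an illustrative assumption, not the actual logic: the real scorer lives in `crux/lib/hallucination-risk.ts`, and every weight, threshold, and field name here (`PageSignals`, `assessRisk`, the 70/40 band cutoffs) is hypothetical.

```typescript
// Illustrative sketch only: the canonical scorer is crux/lib/hallucination-risk.ts,
// and its signals, weights, and thresholds may differ from these toy values.
interface PageSignals {
  citations: number;        // inline citations on the page
  words: number;            // page length, used for citation density
  externalSources: number;  // distinct external sources referenced
  entityType: string;       // e.g. "person", "organization", "historical"
  quality: number | null;   // 0-100 quality score, null if unassessed
}

interface RiskAssessment {
  score: number;            // 0-100, higher = riskier
  level: "high" | "medium" | "low";
  factors: string[];        // risk factor tags, as shown in the table below
}

function assessRisk(p: PageSignals): RiskAssessment {
  const factors: string[] = [];
  let score = 0;

  // Citation signals: no citations at all is worse than sparse citations.
  if (p.citations === 0) {
    factors.push("no-citations");
    score += 35;
  } else if (p.citations / Math.max(p.words, 1) < 1 / 500) {
    factors.push("low-citation-density");
    score += 20;
  }

  // Source breadth.
  if (p.externalSources < 3) {
    factors.push("few-external-sources");
    score += 15;
  }

  // Claim type: pages about people and organizations carry biographical
  // claims that are easy to get subtly wrong.
  if (p.entityType === "person" || p.entityType === "organization") {
    factors.push("biographical-claims");
    score += 25;
  }

  // Quality signal, when available.
  if (p.quality !== null && p.quality < 40) {
    factors.push("low-quality-score");
    score += 15;
  }

  score = Math.min(score, 100);
  const level = score >= 70 ? "high" : score >= 40 ? "medium" : "low";
  return { score, level, factors };
}
```

Under these toy weights, an uncited person page with few external sources and a low quality score lands in the high band, which mirrors the factor combinations dominating the table below.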
| Page | Risk Score | Level | Type | Quality | Factors |
|---|---|---|---|---|---|
| The MIRI Era (2000-2015) | 90 | high | historical | 31 | specific-factual-claims, no-citations, low-rigor-score, +2 |
| Center for Human-Compatible AI (CHAI) | 85 | high | organization | 37 | biographical-claims, no-citations, low-quality-score, +1 |
| Paul Christiano | 85 | high | person | 39 | biographical-claims, no-citations, low-quality-score, +1 |
| Yoshua Bengio | 85 | high | person | 39 | biographical-claims, no-citations, low-quality-score, +1 |
| Conjecture | 85 | high | organization | 37 | biographical-claims, no-citations, low-quality-score, +1 |
| The Foundation Layer | 80 | high | organization | 3 | biographical-claims, no-citations, low-quality-score |
| Dario Amodei | 80 | high | person | 41 | biographical-claims, no-citations, few-external-sources |
| AI Policy Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Partnership on AI | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Geoffrey Hinton | 80 | high | person | 42 | biographical-claims, no-citations, few-external-sources |
| Alphabet Google AI Governance Structure | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Americans for Responsible Innovation | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Holden Karnofsky | 80 | high | person | 40 | biographical-claims, no-citations, few-external-sources |
| Common Writing Principles | 80 | high | internal | 0 | low-quality-score, few-external-sources, severe-truncation |
| UK AI Governance Actors | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Toby Ord | 80 | high | person | 41 | biographical-claims, no-citations, few-external-sources |
| RAND Corporation AI Policy Research | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Google DeepMind | 80 | high | organization | 37 | biographical-claims, low-citation-density, low-quality-score, +1 |
| Meta Open-Source Strategy Policy Impact | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Compute Supply Chain Actors | 80 | high | organization | 72 | biographical-claims, no-citations, few-external-sources |
| Carnegie Endowment for International Peace — AI Program | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Brookings Institution AI and Emerging Technology Initiative | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| UK AI Safety Institute | 80 | high | organization | 52 | biographical-claims, no-citations, few-external-sources |
| Center for Democracy and Technology | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Shareholder and Board Influence in AI Labs | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| TSMC | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| The Future Society | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Middle East AI Actors | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Industry Consortia and Self-Regulation | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Ada Lovelace Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Large Language Models | 80 | high | capability | 60 | low-citation-density, severe-truncation |
| xAI | 80 | high | organization | 48 | biographical-claims, few-external-sources, well-cited, +1 |
| CSIS Wadhwani Center for AI and Advanced Technologies | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| AI Now Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Tom Brown | 80 | high | person | - | biographical-claims, no-citations, few-external-sources |
| China AI Power Actors | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Stanford HAI (Human-Centered Artificial Intelligence) | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| AI-Induced Cyber Psychosis | 75 | high | risk | 37 | no-citations, low-rigor-score, low-quality-score, +1 |
| Survival and Flourishing Fund (SFF) | 75 | high | organization | 59 | biographical-claims, no-citations |
| Manifund | 75 | high | organization | 50 | biographical-claims, no-citations |
| Forecasting Research Institute | 75 | high | organization | 55 | biographical-claims, no-citations |
| Apollo Research | 75 | high | organization | 58 | biographical-claims, no-citations |
| Ford Foundation | 75 | high | organization | - | biographical-claims, no-citations |
| Advanced Research and Invention Agency (ARIA) | 75 | high | organization | - | biographical-claims, no-citations |
| Center for a New American Security (CNAS) | 75 | high | organization | - | biographical-claims, no-citations |
| Ajeya Cotra | 75 | high | person | 55 | biographical-claims, no-citations |
| Think Tank and Policy Institute Influence on AI | 75 | high | organization | - | biographical-claims, no-citations |
| Astralis Foundation | 75 | high | organization | 30 | biographical-claims, low-rigor-score, low-quality-score |
| Metaculus | 75 | high | organization | 50 | biographical-claims, no-citations |
| Machine Intelligence Research Institute (MIRI) | 75 | high | organization | 50 | biographical-claims, no-citations |
| LongtermWiki Vision | 75 | high | internal | 2 | no-citations, low-rigor-score, low-quality-score, +1 |
| Future of Humanity Institute (FHI) | 75 | high | organization | 51 | biographical-claims, no-citations |
| parameters-strategy | 75 | high | - | - | no-citations, low-rigor-score, low-quality-score, +1 |
| Greg Brockman | 75 | high | person | - | biographical-claims, no-citations |
| Center for AI Policy | 75 | high | organization | - | biographical-claims, no-citations |
| Jared Kaplan | 75 | high | person | - | biographical-claims, no-citations |
| GovAI | 75 | high | organization | 43 | biographical-claims, no-citations |
| Sam McCandlish | 75 | high | person | - | biographical-claims, no-citations |
| Early Warnings (1950s-2000) | 75 | high | historical | 31 | specific-factual-claims, low-rigor-score, low-quality-score, +1 |
| Center for AI Safety (CAIS) | 75 | high | organization | 42 | biographical-claims, no-citations |
| Manifold (Prediction Market) | 75 | high | organization | 43 | biographical-claims, no-citations |
| Future of Life Institute (FLI) | 75 | high | organization | 46 | biographical-claims, no-citations |
| Long-Term Future Fund (LTFF) | 75 | high | organization | 56 | biographical-claims, no-citations |
| Sam Altman | 75 | high | person | 40 | biographical-claims, no-citations |
| LongtermWiki Value Proposition | 75 | high | internal | 4 | no-citations, low-rigor-score, low-quality-score, +1 |
| LessWrong | 75 | high | organization | 44 | biographical-claims, no-citations |
| Amazon Anthropic Partnership Influence | 75 | high | organization | - | biographical-claims, no-citations |
| Epoch AI | 75 | high | organization | 51 | biographical-claims, no-citations |
| Longview Philanthropy | 75 | high | organization | 45 | biographical-claims, no-citations |
| CSET (Center for Security and Emerging Technology) | 75 | high | organization | 43 | biographical-claims, no-citations |
| xAI and Musk Political Influence | 75 | high | organization | - | biographical-claims, no-citations |
| Carnegie Endowment for International Peace | 75 | high | organization | - | biographical-claims, no-citations |
| Demis Hassabis | 75 | high | person | 45 | biographical-claims, no-citations |
| LongtermWiki Strategy Brainstorm | 75 | high | internal | 4 | no-citations, low-rigor-score, low-quality-score, +1 |
| Microsoft OpenAI Partnership Influence | 75 | high | organization | - | biographical-claims, no-citations |
| Vitalik Buterin (Funder) | 75 | high | organization | 45 | biographical-claims, no-citations |
| European Union AI Governance Actors | 75 | high | organization | - | biographical-claims, no-citations |
| 80,000 Hours | 75 | high | organization | 45 | biographical-claims, no-citations |
| OpenAI Board and Foundation Dynamics | 75 | high | organization | - | biographical-claims, no-citations |
| Military and Defense AI Actors | 75 | high | organization | - | biographical-claims, no-citations |
| Meta AI (FAIR) | 75 | high | organization | 51 | biographical-claims, no-citations |
| Government AI Actors Overview | 75 | high | organization | - | biographical-claims, no-citations |
| SMIC | 75 | high | organization | - | biographical-claims, no-citations |
| Project Roadmap | 75 | high | internal | 29 | no-citations, low-rigor-score, low-quality-score, +1 |
| Freedom House | 75 | high | organization | - | biographical-claims, no-citations |
| QURI (Quantified Uncertainty Research Institute) | 75 | high | organization | 48 | biographical-claims, no-citations |
| Helen Toner | 75 | high | person | 43 | biographical-claims, no-citations |
| ASML | 75 | high | organization | - | biographical-claims, no-citations |
| Yann LeCun | 75 | high | person | 41 | biographical-claims, no-citations |
| Coefficient Giving | 75 | high | organization | 55 | biographical-claims, no-citations |
| Bureau of Industry and Security | 75 | high | organization | - | biographical-claims, no-citations |
| AI Futures Project | 75 | high | organization | 50 | biographical-claims, no-citations |
| Evan Hubinger | 75 | high | person | 43 | biographical-claims, no-citations |
| Ilya Sutskever | 70 | high | person | 26 | biographical-claims, low-rigor-score, low-quality-score, +2 |
| Org Watch | 70 | high | project | 23 | low-citation-density, low-rigor-score, low-quality-score, +1 |
| AI Watch | 70 | high | project | 23 | no-citations, low-rigor-score, low-quality-score |
| Expected Value of AI Safety Research | 70 | high | analysis | 60 | no-citations, low-rigor-score, few-external-sources |
| Why Alignment Might Be Hard | 70 | high | argument | 69 | low-citation-density, conceptual-content, severe-truncation |
| Factor Diagram Naming: Research Report | 70 | high | - | 31 | no-citations, low-rigor-score, low-quality-score |
| Epistemic Risks (Overview) | 70 | high | - | 37 | no-citations, low-rigor-score, low-quality-score |
Showing 1–100 of 768 (page 1 of 8)
Scores are computed at build time by the canonical scorer (`crux/lib/hallucination-risk.ts`). Run `pnpm crux validate hallucination-risk` for a CLI report.