Hallucination Risk
Per-page hallucination risk scores based on citation density, claim type, and source quality. High-risk pages make many unsourced claims or rely on low-quality sources; prioritize these for citation audits. Run `pnpm crux query risk --level=high` for the CLI equivalent.
Risk scores are computed from citation density, entity type, quality, content integrity, and other signals. 734 pages assessed; 108 are high risk.
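The scoring approach can be sketched as a weighted, capped sum over the per-page risk factors listed on this page. This is a hypothetical illustration only: the factor weights and the level thresholds below are assumptions for the sketch, not the logic of the canonical scorer in `crux/lib/hallucination-risk.ts`.

```typescript
// Hypothetical sketch: combine detected risk factors into a 0-100 score,
// then bucket the score into a risk level. Weights are illustrative.
type RiskLevel = "low" | "medium" | "high";

const FACTOR_WEIGHTS: Record<string, number> = {
  "no-citations": 30,
  "biographical-claims": 20,
  "specific-factual-claims": 20,
  "few-external-sources": 15,
  "low-quality-score": 15,
  "low-rigor-score": 10,
  "severe-truncation": 10,
};

function riskScore(factors: string[]): number {
  // Unknown factors get a small default weight; total is capped at 100.
  const raw = factors.reduce((sum, f) => sum + (FACTOR_WEIGHTS[f] ?? 5), 0);
  return Math.min(100, raw);
}

function riskLevel(score: number): RiskLevel {
  // Assumed thresholds; the table below shows 65+ pages labeled "high".
  if (score >= 65) return "high";
  if (score >= 35) return "medium";
  return "low";
}
```

For example, a page flagged with `no-citations`, `biographical-claims`, and `few-external-sources` would score 65 under these illustrative weights and land in the high bucket.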
- Total Assessed: 734
- High Risk: 108
- Medium Risk: 486
- Low Risk: 140
- Avg Score: 48
Most Common Risk Factors

- no-citations: 426
- few-external-sources: 241
- biographical-claims: 169
- conceptual-content: 140
- low-quality-score: 131
- high-rigor: 115
- well-cited: 115
- minimal-content: 91
- low-rigor-score: 58
- moderately-cited: 51
Risk Distribution

High: 15%, Medium: 66%, Low: 19%
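The distribution percentages can be recomputed from the headline counts (108 high, 486 medium, 140 low of 734 assessed), rounding to the nearest integer:

```typescript
// Recompute the risk distribution shown above from the summary counts.
const counts = { high: 108, medium: 486, low: 140 };
const total = counts.high + counts.medium + counts.low; // 734

const distribution = {
  high: Math.round((100 * counts.high) / total),     // 15
  medium: Math.round((100 * counts.medium) / total), // 66
  low: Math.round((100 * counts.low) / total),       // 19
};
```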
734 results
| Page | Risk Score | Level | Type | Quality Score | Factors |
|---|---|---|---|---|---|
| The MIRI Era (2000-2015) | 90 | high | historical | 31 | specific-factual-claims, no-citations, low-rigor-score, +2 more |
| ASML | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes |
| Bureau of Industry and Security | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes |
| Center for Human-Compatible AI (CHAI) | 85 | high | organization | 37 | biographical-claims, no-citations, low-quality-score, +1 more |
| Conjecture | 85 | high | organization | 37 | biographical-claims, no-citations, low-quality-score, +1 more |
| Ford Foundation | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes |
| Freedom House | 85 | high | organization | - | biographical-claims, no-citations, mostly-unsourced-footnotes |
| Paul Christiano | 85 | high | person | 39 | biographical-claims, no-citations, low-quality-score, +1 more |
| Yoshua Bengio | 85 | high | person | 39 | biographical-claims, no-citations, low-quality-score, +1 more |
| Rogue AI Scenarios | 85 | high | risk | 55 | low-citation-density, few-external-sources, severe-truncation |
| Large Language Models | 80 | high | capability | 60 | low-citation-density, severe-truncation |
| Ada Lovelace Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| AI Now Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| AI Policy Institute | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Americans for Responsible Innovation | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Brookings Institution AI and Emerging Technology Initiative | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Carnegie Endowment for International Peace — AI Program | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Center for Democracy and Technology | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| CSIS Wadhwani Center for AI and Advanced Technologies | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Google DeepMind | 80 | high | organization | 37 | biographical-claims, low-citation-density, low-quality-score, +1 more |
| Partnership on AI | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| RAND Corporation AI Policy Research | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| Stanford HAI (Human-Centered Artificial Intelligence) | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| The Foundation Layer | 80 | high | organization | 3 | biographical-claims, no-citations, low-quality-score |
| The Future Society | 80 | high | organization | - | biographical-claims, no-citations, few-external-sources |
| UK AI Safety Institute | 80 | high | organization | 52 | biographical-claims, no-citations, few-external-sources |
| xAI | 80 | high | organization | 48 | biographical-claims, few-external-sources, well-cited, +1 more |
| Dario Amodei | 80 | high | person | 41 | biographical-claims, no-citations, few-external-sources |
| Geoffrey Hinton | 80 | high | person | 42 | biographical-claims, no-citations, few-external-sources |
| Holden Karnofsky | 80 | high | person | 40 | biographical-claims, no-citations, few-external-sources |
| Jared Kaplan | 80 | high | person | - | biographical-claims, low-citation-density, mostly-unsourced-footnotes |
| Toby Ord | 80 | high | person | 41 | biographical-claims, no-citations, few-external-sources |
| Common Writing Principles | 80 | high | internal | 0 | low-quality-score, few-external-sources, severe-truncation |
| Early Warnings (1950s-2000) | 75 | high | historical | 31 | specific-factual-claims, low-rigor-score, low-quality-score, +1 more |
| 80,000 Hours | 75 | high | organization | 45 | biographical-claims, no-citations |
| AI Futures Project | 75 | high | organization | 50 | biographical-claims, no-citations |
| Apollo Research | 75 | high | organization | 58 | biographical-claims, no-citations |
| Advanced Research and Invention Agency (ARIA) | 75 | high | organization | - | biographical-claims, no-citations |
| Astralis Foundation | 75 | high | organization | 30 | biographical-claims, low-rigor-score, low-quality-score |
| Center for AI Safety (CAIS) | 75 | high | organization | 42 | biographical-claims, no-citations |
| Carnegie Endowment for International Peace | 75 | high | organization | - | biographical-claims, no-citations |
| Center for AI Policy | 75 | high | organization | - | biographical-claims, no-citations |
| Center for a New American Security (CNAS) | 75 | high | organization | - | biographical-claims, no-citations |
| Coefficient Giving | 75 | high | organization | 55 | biographical-claims, no-citations |
| CSET (Center for Security and Emerging Technology) | 75 | high | organization | 43 | biographical-claims, no-citations |
| Epoch AI | 75 | high | organization | 51 | biographical-claims, no-citations |
| Future of Humanity Institute (FHI) | 75 | high | organization | 51 | biographical-claims, no-citations |
| Future of Life Institute (FLI) | 75 | high | organization | 46 | biographical-claims, no-citations |
| Forecasting Research Institute | 75 | high | organization | 55 | biographical-claims, no-citations |
| GovAI | 75 | high | organization | 43 | biographical-claims, no-citations |
| LessWrong | 75 | high | organization | 44 | biographical-claims, no-citations |
| Longview Philanthropy | 75 | high | organization | 45 | biographical-claims, no-citations |
| Long-Term Future Fund (LTFF) | 75 | high | organization | 56 | biographical-claims, no-citations |
| Manifold (Prediction Market) | 75 | high | organization | 43 | biographical-claims, no-citations |
| Manifund | 75 | high | organization | 50 | biographical-claims, no-citations |
| Meta AI (FAIR) | 75 | high | organization | 51 | biographical-claims, no-citations |
| Metaculus | 75 | high | organization | 50 | biographical-claims, no-citations |
| Machine Intelligence Research Institute (MIRI) | 75 | high | organization | 50 | biographical-claims, no-citations |
| QURI (Quantified Uncertainty Research Institute) | 75 | high | organization | 48 | biographical-claims, no-citations |
| Survival and Flourishing Fund (SFF) | 75 | high | organization | 59 | biographical-claims, no-citations |
| Vitalik Buterin (Funder) | 75 | high | organization | 45 | biographical-claims, no-citations |
| Ajeya Cotra | 75 | high | person | 55 | biographical-claims, no-citations |
| Demis Hassabis | 75 | high | person | 45 | biographical-claims, no-citations |
| Evan Hubinger | 75 | high | person | 43 | biographical-claims, no-citations |
| Helen Toner | 75 | high | person | 43 | biographical-claims, no-citations |
| Sam Altman | 75 | high | person | 40 | biographical-claims, no-citations |
| Tom Brown | 75 | high | person | - | biographical-claims, few-external-sources, mostly-unsourced-footnotes |
| Yann LeCun | 75 | high | person | 41 | biographical-claims, no-citations |
| AI-Induced Cyber Psychosis | 75 | high | risk | 37 | no-citations, low-rigor-score, low-quality-score, +1 more |
| LongtermWiki Strategy Brainstorm | 75 | high | internal | 4 | no-citations, low-rigor-score, low-quality-score, +1 more |
| LongtermWiki Vision | 75 | high | internal | 2 | no-citations, low-rigor-score, low-quality-score, +1 more |
| LongtermWiki Value Proposition | 75 | high | internal | 4 | no-citations, low-rigor-score, low-quality-score, +1 more |
| Parameters Strategy | 75 | high | internal | 3 | no-citations, low-rigor-score, low-quality-score, +1 more |
| Project Roadmap | 75 | high | internal | 29 | no-citations, low-rigor-score, low-quality-score, +1 more |
| Agentic AI | 70 | high | capability | 68 | severe-truncation |
| Why Alignment Might Be Hard | 70 | high | argument | 69 | low-citation-density, conceptual-content, severe-truncation |
| Mainstream Era (2020-Present) | 70 | high | historical | 42 | specific-factual-claims, no-citations |
| Political Stability as an AI Safety Factor | 70 | high | analysis | - | no-citations, few-external-sources, mostly-unsourced-footnotes |
| Expected Value of AI Safety Research | 70 | high | analysis | 60 | no-citations, low-rigor-score, few-external-sources |
| Community Building Organizations (Overview) | 70 | high | - | 35 | no-citations, low-rigor-score, low-quality-score |
| US AI Safety Institute (now CAISI) | 70 | high | organization | 91 | biographical-claims, no-citations, high-quality |
| Ilya Sutskever | 70 | high | person | 26 | biographical-claims, low-rigor-score, low-quality-score, +2 more |
| AI Watch | 70 | high | project | 23 | no-citations, low-rigor-score, low-quality-score |
| FlexHEG (Flexible Hardware-Enabled Guarantees) | 70 | high | project | - | no-citations, few-external-sources, mostly-unsourced-footnotes |
| International AI Safety Summits | 70 | high | event | 63 | specific-factual-claims, no-citations |
| Org Watch | 70 | high | project | 23 | low-citation-density, low-rigor-score, low-quality-score, +1 more |
| Stop Stealing Our Chips Act | 70 | high | policy | - | no-citations, few-external-sources, mostly-unsourced-footnotes |
| AI-Enabled Political Polarization | 70 | high | risk | - | no-citations, few-external-sources, mostly-unsourced-footnotes |
| Epistemic Risks (Overview) | 70 | high | - | 37 | no-citations, low-rigor-score, low-quality-score |
| Epistemic Systemic Risk | 70 | high | risk | - | no-citations, few-external-sources, mostly-unsourced-footnotes |
| Structural Risks (Overview) | 70 | high | - | 37 | no-citations, low-rigor-score, low-quality-score |
| Factor Diagram Naming: Research Report | 70 | high | - | 31 | no-citations, low-rigor-score, low-quality-score |
| Quantitative Claims | 70 | high | - | 28 | no-citations, low-rigor-score, low-quality-score |
| Longtermist Funders (Overview) | 65 | high | - | 3 | no-citations, low-quality-score, few-external-sources |
| Gratified | 65 | high | organization | 25 | biographical-claims, low-rigor-score, low-quality-score, +1 more |
| TSMC | 65 | high | organization | - | biographical-claims, few-external-sources, moderately-cited, +1 more |
| Nick Bostrom | 65 | high | person | 25 | biographical-claims, low-rigor-score, low-quality-score, +2 more |
| Paris AI Action Summit (February 2025) | 65 | high | policy | - | no-citations, mostly-unsourced-footnotes |
| Executive Order 14179: Removing Barriers to American Leadership in AI | 65 | high | policy | - | no-citations, mostly-unsourced-footnotes |
| Accident Risks (Overview) | 65 | high | - | 42 | no-citations, low-rigor-score |
Showing 1–100 of 734 (page 1 of 8).
Scores are computed at build time by the canonical scorer (`crux/lib/hallucination-risk.ts`). Run `pnpm crux validate hallucination-risk` for a CLI report.