Epistemic Risks (Overview)
Overview
Epistemic risks arise when AI systems undermine humanity's ability to know what is true, make informed decisions, and maintain shared understanding of reality. These risks span individual cognition (learned helplessness, sycophancy effects), institutional decision-making (decision capture, expertise atrophy), and societal epistemics (authentication collapse, reality fragmentation). Unlike many AI risks that require highly capable systems, several epistemic risks are already manifesting with current-generation AI.
Authentication and Trust
Risks to the ability to verify information authenticity:
Authentication Collapse: AI-generated content becomes indistinguishable from authentic content, undermining trust in all media
Trust Cascade Failure: Loss of trust in one domain spreading to undermine trust in institutions broadly
Trust Decline: Gradual erosion of institutional and interpersonal trust as AI-generated deception becomes pervasive
Information Manipulation
Risks from AI being used to distort information environments:
Consensus Manufacturing: Using AI to create the appearance of broad agreement where none exists
Preference Manipulation: AI systems that learn and exploit individual psychological vulnerabilities to shape beliefs and behaviors
Historical Revisionism: AI-enabled alteration of historical records and narratives at scale
Scientific Knowledge Corruption: AI-generated fraudulent research corrupting the scientific literature
Cognitive and Epistemic Degradation
Risks to human cognitive capabilities and epistemic practices:
Epistemic Collapse: Broad societal breakdown in the ability to form reliable beliefs and reach consensus on facts
Epistemic Sycophancy: AI systems reinforcing rather than correcting human biases and errors
Expertise Atrophy: Decline of human expertise as AI automates cognitive tasks, reducing the human capacity to verify AI outputs
Epistemic Learned Helplessness: Humans losing the ability or motivation to evaluate claims independently due to AI dependence
Cyber Psychosis & AI-Induced Psychological Harm: Psychological harms from immersive AI interactions, including parasocial relationships, AI-induced delusions, and reality confusion from synthetic content
Institutional and Structural Epistemic Risks
Risks to organizational and societal knowledge structures:
Institutional Decision Capture: Institutions becoming dependent on AI systems for decisions, losing the ability to exercise independent judgment
AI Knowledge Monopoly: Concentration of knowledge and information access in AI systems controlled by a few actors
Reality Fragmentation: AI-enabled personalized information environments creating incompatible worldviews across groups
Legal Evidence Crisis: AI-generated content undermining the reliability of evidence in legal proceedings
Key Dynamics
Compounding effects: Epistemic risks reinforce one another. Authentication collapse makes consensus manufacturing easier, which accelerates trust decline, which in turn worsens epistemic collapse.
Already manifesting: Unlike many AI risks that are prospective, several epistemic risks are observably occurring with current AI capabilities. Deepfakes, AI-generated disinformation, and epistemic sycophancy from LLMs are present-day phenomena.
Defense is harder than offense: Generating convincing false content is generally cheaper and easier than detecting it, creating a structural asymmetry that favors epistemic degradation.
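The compounding dynamic above can be sketched as a toy feedback model. This is purely illustrative: the coupling coefficient `k` and the chain structure are assumptions chosen to show the direction of the feedback, not empirical estimates.

```python
# Toy feedback model of compounding epistemic risks (illustrative only).
# Each variable is a degradation level in [0, 1]; each downstream risk is
# pushed upward by its upstream driver, per the causal chain:
# authentication collapse -> consensus manufacturing -> trust decline -> epistemic collapse.

def step(auth, consensus, trust_decline, collapse, k=0.1):
    """Advance the four degradation levels by one time step."""
    auth_next = min(1.0, auth + k * (1 - auth))           # baseline drift toward collapse
    consensus_next = min(1.0, consensus + k * auth)        # cheaper fakes -> manufactured consensus
    trust_next = min(1.0, trust_decline + k * consensus)   # fake consensus -> eroding trust
    collapse_next = min(1.0, collapse + k * trust_decline) # eroding trust -> epistemic collapse
    return auth_next, consensus_next, trust_next, collapse_next

def simulate(steps=50):
    state = (0.1, 0.0, 0.0, 0.0)  # only mild authentication risk at the start
    history = [state]
    for _ in range(steps):
        state = step(*state)
        history.append(state)
    return history

history = simulate()
final = history[-1]
```

Even with only authentication risk present initially, the downstream variables all rise over time, which is the qualitative point: intervening early in the chain matters more than the size of the initial shock.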