Longterm Wiki
Updated 2026-03-13

Epistemic Risks (Overview)

Overview

Epistemic risks arise when AI systems undermine humanity's ability to know what is true, make informed decisions, and maintain shared understanding of reality. These risks span individual cognition (learned helplessness, sycophancy effects), institutional decision-making (decision capture, expertise atrophy), and societal epistemics (authentication collapse, reality fragmentation). Unlike many AI risks that require highly capable systems, several epistemic risks are already manifesting with current-generation AI.

Authentication and Trust

Risks to the ability to verify information authenticity:

  • Authentication Collapse: AI-generated content becomes indistinguishable from authentic content, undermining trust in all media
  • Trust Cascade Failure: Loss of trust in one domain spreading to undermine trust in institutions broadly
  • Trust Decline: Gradual erosion of institutional and interpersonal trust as AI-generated deception becomes pervasive

Information Manipulation

Risks from AI being used to distort information environments:

  • Consensus Manufacturing: Using AI to create the appearance of broad agreement where none exists
  • Preference Manipulation: AI systems that learn and exploit individual psychological vulnerabilities to shape beliefs and behaviors
  • Historical Revisionism: AI-enabled alteration of historical records and narratives at scale
  • Scientific Knowledge Corruption: AI-generated fraudulent research corrupting the scientific literature

Cognitive and Epistemic Degradation

Risks to human cognitive capabilities and epistemic practices:

  • Epistemic Collapse: Broad societal breakdown in the ability to form reliable beliefs and reach consensus on facts
  • Epistemic Sycophancy: AI systems reinforcing rather than correcting human biases and errors
  • Expertise Atrophy: Decline of human expertise as AI automates cognitive tasks, reducing the human capacity to verify AI outputs
  • Epistemic Learned Helplessness: Humans losing the ability or motivation to evaluate claims independently due to AI dependence
  • Cyber Psychosis & AI-Induced Psychological Harm: Psychological effects of immersive AI interactions

Institutional and Structural Epistemic Risks

Risks to organizational and societal knowledge structures:

  • Institutional Decision Capture: Institutions becoming dependent on AI systems for decisions, losing the ability to exercise independent judgment
  • AI Knowledge Monopoly: Concentration of knowledge and information access in AI systems controlled by a few actors
  • Reality Fragmentation: AI-enabled personalized information environments creating incompatible worldviews across groups
  • Legal Evidence Crisis: AI-generated content undermining the reliability of evidence in legal proceedings

Key Dynamics

Compounding effects: Epistemic risks tend to compound: authentication collapse makes consensus manufacturing easier, which accelerates trust decline, which worsens epistemic collapse.

Already manifesting: Unlike many AI risks that are prospective, several epistemic risks are observably occurring with current AI capabilities. Deepfakes, AI-generated disinformation, and epistemic sycophancy from LLMs are present-day phenomena.

Defense is harder than offense: Generating convincing false content is generally cheaper and easier than detecting it, creating a structural asymmetry that favors epistemic degradation.

Related Pages

Risks

  • AI-Powered Consensus Manufacturing
  • AI Preference Manipulation
  • Authentication Collapse
  • AI-Driven Trust Decline
  • Epistemic Collapse
  • AI Trust Cascade Failure