Forethought Foundation's five proposed technologies for improving collective epistemics: community notes for everything, rhetoric highlighting, rel...
X.com presents a deeply mixed epistemic profile. Community Notes demonstrates genuine innovation in crowdsourced fact-checking, reducing repost vir...
Organizations advancing forecasting methodology, prediction aggregation, and epistemic infrastructure to improve decision-making on AI safety and e...
A proposed suite of open benchmarks evaluating AI models on epistemic virtues: calibration, clarity, bias resistance, sycophancy avoidance, and man...
This overview taxonomizes epistemic risks from AI into four categories (authentication, information manipulation, cognitive degradation, institutio...
Structures 9 epistemic cruxes determining AI safety prioritization strategy, with probabilistic analysis showing detection-generation arms race cur...
This page synthesizes post-FTX critiques of EA's epistemic and governance failures, identifying interlocking problems including donor hero-worship,...
This article synthesizes epistemic risk and systemic risk into a coherent 'epistemic systemic risk' concept, noting it remains an emerging, not-yet...
Epistemic collapse describes the complete erosion of society's ability to establish factual consensus when AI-generated synthetic content overwhelm...
Comprehensive analysis of epistemic infrastructure showing AI fact-checking achieves 85-87% accuracy at \$0.10-\$1.00 per claim versus \$50-\$200 for...
Maps causal relationships between 22 AI safety parameters, identifying 7 feedback loops and 4 clusters. Finds epistemic-health and institutional-qu...
Analyzes how AI-driven information environments induce epistemic learned helplessness (surrendering truth-seeking), presenting survey evidence show...
Models epistemic collapse as a threshold phenomenon where society loses the ability to establish shared facts, estimating 75-80% combined probability of ...
Comprehensive analysis of epistemic security finds human deepfake detection at near-chance levels (55.5%), AI detection dropping 45-50% on novel co...
AI sycophancy—where models agree with users rather than provide accurate information—affects all five state-of-the-art models tested, with medical ...
Curated editorial overview of 14 near-term AI risks organized by urgency across governance, misuse, epistemic, and technical domains. Includes a qu...
This is a navigation/index page listing epistemic infrastructure approaches (prediction markets, deliberation tools, content authentication, deepfa...
An index/overview page documenting the epistemic track records of four AI figures (LeCun, Altman, Yudkowsky, Musk) with brief characterizations of ...
A well-organized directory of epistemic tools (forecasting platforms, knowledge coordination systems, verification tools) relevant to AI safety res...
Reality fragmentation describes the breakdown of shared epistemological foundations where populations hold incompatible beliefs about basic facts (...
Shared writing principles referenced by all domain-specific style guides. Three pillars: epistemic honesty (hedge uncertain claims, use ranges, sou...
Scenarios involving permanent entrenchment of values, power structures, or epistemic conditions.
Root factor measuring humanity's collective ability to navigate AI transition through governance, epistemics, and adaptability.
QURI develops Squiggle (probabilistic programming language with native distribution types), SquiggleAI (Claude 3.5 Sonnet-powered model generation)...
Analyzes the risk that 2-3 AI systems could dominate humanity's knowledge access by 2040, projecting 80%+ market concentration with correlated erro...
FLF's inaugural 12-week fellowship (July-October 2025) combined research fellowship with startup incubator format. 30 fellows received \$25-50K sti...
A proposed epistemic infrastructure making knowledge provenance transparent and traversable—enabling anyone to see the chain of citations, original...
Comprehensive tracking of Eliezer Yudkowsky's predictions shows clear early errors (Singularity by 2021, nanotech timelines), vindication on AI gen...
Comprehensive documentation of Elon Musk's prediction track record showing systematic overoptimism on timelines (FSD predictions missed by 6+ years...
US government trust declined from 73% (1958) to 17% (2025), with AI deepfakes projected to reach 8M by 2025, accelerating erosion through the 'liar'...
Comprehensive compilation of Yann LeCun's predictions showing he was correct on long-term architectural intuitions (neural networks, self-supervise...
Prediction markets achieve Brier scores of 0.16-0.24 (15-25% better than polls) by aggregating dispersed information through financial incentives, ...
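The Brier score cited in the entry above is just the mean squared error between probabilistic forecasts and binary outcomes, so the 0.16-0.24 range is easy to reproduce. A minimal sketch (the forecast and outcome data below are hypothetical, purely for illustration):

```python
# Brier score: mean squared error of probability forecasts against 0/1 outcomes.
# Lower is better: 0.0 is perfect, 0.25 is the score of always forecasting 50%.
def brier_score(forecasts, outcomes):
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical market probabilities vs. realized outcomes.
market_probs = [0.8, 0.3, 0.9, 0.2, 0.6]
realized = [1, 0, 1, 0, 1]
print(round(brier_score(market_probs, realized), 3))  # → 0.068
```

A market that is both confident and well-calibrated lands well below the 0.25 coin-flip baseline, which is what the 0.16-0.24 figure for real markets reflects.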
A comprehensive structured mapping of AI safety solution uncertainties across technical, alignment, governance, and agentic domains, using probabil...
Provides a strategic framework for AI safety resource allocation by mapping 13+ interventions against 4 risk categories, evaluating each on ITN dim...
Comprehensive biographical profile of Leopold Aschenbrenner, covering his trajectory from Columbia valedictorian to OpenAI researcher to \$1.5B hed...
Internal strategy document exploring ambitious value pathways for LongtermWiki, including improving longtermist prioritization (Coefficient integra...
Manifest is a 2024 forecasting conference that generated significant controversy within EA/rationalist communities due to speaker selection includi...
Grokipedia is xAI's AI-generated encyclopedia that grew from 800K to 6M+ articles in three months (Oct 2025–Jan 2026), but was documented by multip...
A comprehensive survey of AI-assisted knowledge management tools (Obsidian plugins, Notion AI, NotebookLM, RAG frameworks) with specific cost figur...
RoastMyPost is an LLM tool (Claude Sonnet 4.5 + Perplexity) that evaluates written content through multiple specialized AI agents—fact-checking, lo...
A well-structured but loosely-defined conceptual framework synthesizing multi-agent coordination risks, epistemic fragmentation, and organizational...
Post-2024 analysis shows AI disinformation had limited immediate electoral impact (cheap fakes used 7x more than AI content), but creates concernin...
Documents AI-enabled scientific fraud with evidence that 2-20% of submissions are from paper mills (field-dependent), 300,000+ fake papers exist, a...