Summary
Documents AI-enabled scientific fraud with evidence that 2-20% of submissions are from paper mills (field-dependent), 300,000+ fake papers exist, and detection tools are losing an arms race against AI generation. Paper mill output doubles every 1.5 years vs. retractions every 3.5 years. Projects 2027-2030 scenarios ranging from controlled degradation (40% probability) to epistemic collapse (20% probability) affecting medical treatments and policy decisions. Wiley/Hindawi scandal resulted in 11,300+ retractions and $35-40M losses.
Scientific knowledge corruption represents the systematic degradation of research integrity through AI-enabled fraud, fake publications, and data fabrication. According to PNAS research (2025), paper mill output is doubling every 1.5 years while retractions double only every 3.5 years. Northwestern University researcher Reese Richardson warns: "You can see a scenario in a decade or less where you could have more than half of [studies being published] each year being fraudulent."
This isn't a future threat—it's already happening. Current estimates suggest 2-20% of journal submissions come from paper mills depending on field, with over 300,000 fake papers already in the literature. The Retraction Watch database now contains over 63,000 retractions, with 2023 marking a record high of over 10,000 retractions. AI tools are rapidly industrializing fraud production, creating an arms race between detection and generation that detection appears to be losing.
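The cited doubling times imply a rapid shift in the composition of the literature. A minimal sketch of that projection, assuming the PNAS doubling times quoted above and illustrative baseline volumes (the `mill_base`, `legit_base`, and `retraction_base` figures are assumptions for the sake of the model, not measured values):

```python
# Hypothetical projection of the fraudulent share of annual output,
# assuming paper-mill output doubles every 1.5 years and retractions
# every 3.5 years (doubling times as cited above; baselines illustrative).

def projected_fraud_share(years, mill_base=60_000, legit_base=3_000_000,
                          retraction_base=10_000,
                          mill_doubling=1.5, retraction_doubling=3.5):
    """Fraction of annual output that is fraudulent and un-retracted."""
    mill = mill_base * 2 ** (years / mill_doubling)
    retracted = retraction_base * 2 ** (years / retraction_doubling)
    surviving_fraud = max(mill - retracted, 0)
    return surviving_fraud / (surviving_fraud + legit_base)

for t in (0, 5, 10):
    print(f"year +{t}: {projected_fraud_share(t):.1%}")
```

Under these assumptions the fraudulent share crosses 50% within roughly a decade, consistent with Richardson's warning; the point is the structural dynamic (mismatched doubling times), not the exact figures.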
The implications extend far beyond academia: corrupted medical research could lead to harmful treatments, while fabricated policy research could undermine evidence-based governance and public trust in science itself.
Scientific Corruption Cascade

[Diagram not rendered: scientific corruption cascade]
Risk Assessment
| Factor | Assessment | Evidence | Timeline |
|---|---|---|---|
| Current Prevalence | High | 300,000+ fake papers identified | Already present |
| Growth Rate | Accelerating | Paper mill adoption of AI tools | 2024-2026 |
| Detection Capacity | Insufficient | Detection tools lag behind AI generation | Worsening |
| Impact Severity | Severe | Medical/policy decisions at risk | 2025-2030 |
| Trend Direction | Deteriorating | Arms race favors fraudsters | Next 5 years |
Responses That Address This Risk
| Response | Mechanism | Effectiveness |
|---|---|---|
| AI Content Authentication | Cryptographic provenance for research outputs | Medium-High (if adopted) |
| AI-Era Epistemic Security | Systematic protection of knowledge infrastructure | Medium |
| AI-Era Epistemic Infrastructure | | |
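The cryptographic-provenance mechanism above can be illustrated with a minimal sketch: a lab signs a manuscript's hash so later readers can check it was not altered. This is a simplification using a shared secret from the stdlib; real systems such as C2PA use public-key certificates and signed provenance manifests, and the key and manuscript here are hypothetical.

```python
# Minimal provenance sketch: sign a manuscript's SHA-256 digest with an
# HMAC so tampering is detectable. A stand-in for real public-key-based
# schemes (e.g. C2PA); the key below is illustrative only.
import hashlib
import hmac

LAB_KEY = b"hypothetical-lab-signing-key"  # assumption: shared secret

def sign_artifact(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    return hmac.new(LAB_KEY, digest, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    # compare_digest avoids timing side-channels on the comparison
    return hmac.compare_digest(sign_artifact(data), signature)

paper = b"Methods: n=40 mice, randomized..."
sig = sign_artifact(paper)
print(verify_artifact(paper, sig))                 # unaltered copy
print(verify_artifact(paper + b" [edited]", sig))  # tampered copy
```

The design point is that verification binds a claim of origin to the exact bytes published; any post-hoc edit, however small, invalidates the signature.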
Vector 1: Paper Mill Industrialization

Traditional paper mills produce 400-2,000 papers annually. AI-enhanced mills could scale to hundreds of thousands:
| Stage | Traditional | AI-Enhanced |
|---|---|---|
| Text generation | Human ghostwriters | GPT-4/Claude automated |
| Data fabrication | Manual creation | Synthetic datasets |
| Image creation | Photoshop manipulation | Diffusion model generation |
| Citation networks | Manual cross-referencing | Automated citation webs |
Evidence: Paper mills now advertise "AI-powered research services" openly.
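Automated citation webs leave a detectable signature: clusters of papers that all cite each other. A toy pure-stdlib sketch of that idea (the citation data is invented for illustration; real screening tools use far richer graph and metadata features):

```python
# Toy detector for reciprocal citation rings, a known paper-mill
# signature: flag pairs of papers that cite each other.
from itertools import combinations

# citations[paper] = set of papers it cites (illustrative data)
citations = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"},  # mutual ring
    "D": {"A"}, "E": {"D"},                              # ordinary citing
}

def mutual_pairs(cites):
    """Return all pairs (p, q) where p cites q AND q cites p."""
    return {(p, q) for p, q in combinations(sorted(cites), 2)
            if q in cites.get(p, set()) and p in cites.get(q, set())}

print(mutual_pairs(citations))
```

Papers A, B, and C form a fully reciprocal triangle and are flagged; the legitimate one-directional citations from D and E are not.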
Vector 2: Review Process Compromise
| Component | Attack Method | Detection Rate |
|---|---|---|
| Peer review | AI-generated reviews | Unknown (recently discovered) |
| Editorial assessment | Overwhelm with volume | Limited editorial capacity |
| Post-publication review | Fake comments/endorsements | Minimal monitoring |
Vector 3: Preprint Flooding
Preprint servers (Shlegeris et al., 2024, arXiv) have minimal review processes, making them vulnerable:
- ArXiv: ~200,000 papers/year, minimal screening
- medRxiv: Medical preprints, used by media/policymakers
- bioRxiv: Biology preprints, influence grant funding
Attack scenario: AI generates 10,000+ fake preprints monthly, drowning real research.
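Volume attacks exploit base-rate arithmetic: even a reasonably accurate detector drowns in false positives when fraud is a small share of a huge submission stream, and triage collapses when it is a large share. A sketch of the Bayes'-rule calculation (the sensitivity and specificity figures are assumptions for illustration, not measured detector performance):

```python
# Why screening is overwhelmed at scale: precision of a fraud detector
# as a function of the fraud base rate, by Bayes' rule.
# Detector figures below are illustrative assumptions.

def flagged_precision(fraud_rate, sensitivity=0.90, specificity=0.95):
    """P(actually fraudulent | flagged by the detector)."""
    true_pos = fraud_rate * sensitivity            # frauds caught
    false_pos = (1 - fraud_rate) * (1 - specificity)  # legit papers flagged
    return true_pos / (true_pos + false_pos)

print(f"{flagged_precision(0.02):.0%}")  # low-fraud field (2% base rate)
print(f"{flagged_precision(0.20):.0%}")  # high-fraud field (20% base rate)
```

At a 2% base rate, most flagged papers are false alarms, so every flag still demands costly human review; at 20%, flags are mostly correct but the absolute volume of true fraud swamps editorial capacity. Either way, detection alone does not scale.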
- AI detection tools deployment vs. improved AI generation
- Paper mills adopt GPT-4/Claude for content generation
- First major scandals of AI-generated paper acceptance
2025-2027: Scale Transition
- Fraud production scales from thousands to hundreds of thousands annually
- Detection systems overwhelmed
- Research communities begin fragmenting into "trusted" networks
2027-2030: Potential Collapse Scenarios
| Scenario | Probability | Characteristics |
|---|---|---|
| Controlled degradation | 40% | Gradual decline, institutional adaptation |
| Bifurcated system | 35% | "High-trust" vs. "open" research tiers |
| Epistemic collapse | 20% | Public loses confidence in scientific literature |
| Successful defense | 5% | Detection keeps pace with generation |
Key Uncertainties & Research Gaps
Key Questions
- What is the true current rate of AI-generated content in scientific literature?
- Can detection methods fundamentally keep pace with AI generation, or is this an unwinnable arms race?
- At what point does corruption become so pervasive that scientific literature becomes unreliable for policy?
- How will different fields (medicine vs. social science) be differentially affected?
- What threshold of corruption would trigger institutional collapse vs. adaptation?
- Can blockchain/cryptographic methods provide solutions for research integrity?
- How will this interact with existing problems like the replication crisis?
Critical Research Needs
| Research Area | Priority | Current Gap |
|---|---|---|
| Baseline measurement | High | Unknown true fraud rates |
| Detection technology | High | Fundamental limitations unclear |
| Institutional resilience | Medium | Adaptation capacity unknown |
| Cross-field variation | Medium | Differential impact modeling |
| Public trust dynamics | Medium | Tipping point identification |
Related Risks & Interactions
This risk intersects with several other epistemic risks:
- Epistemic collapse: Scientific corruption could trigger broader epistemic system failure
- Expertise atrophy: Researchers may lose skills if AI does the work
- Trust cascade: Scientific fraud could undermine trust in all expertise
Concepts
- Epistemic Overview