Misuse Risks (Overview)
Overview
Misuse risks arise when AI systems are deliberately used by humans for harmful purposes. Unlike accident risks (unintended failures) or structural risks (systemic effects), misuse risks involve intentional weaponization of AI capabilities. These risks are particularly concerning because AI can dramatically lower the barriers to conducting attacks that previously required specialized expertise, large organizations, or significant resources.
Weapons and Violence
AI lowers barriers to the development and deployment of weapons:
Bioweapons Risk: AI systems assisting in the design, synthesis, or deployment of biological weapons agents
Cyberweapons Risk: AI-powered offensive cyber capabilities, including automated vulnerability discovery and exploit generation
Autonomous Weapons: Weapons systems that select and engage targets without meaningful human control
Information Manipulation
AI used to deceive, manipulate, or distort information:
Deepfakes: AI-generated synthetic media (video, audio, images) used for deception, blackmail, or manipulation
AI Disinformation: AI-generated or AI-amplified false information campaigns at scale
AI-Powered Fraud: AI used for financial fraud, impersonation, and social engineering at unprecedented scale
Surveillance and Control
AI enabling surveillance and authoritarian control:
AI Mass Surveillance: AI dramatically expanding the scope and effectiveness of surveillance systems
AI Authoritarian Tools: AI capabilities used to monitor, control, and suppress populations
Emerging Threats
AI-Enabled Untraceable Misuse: AI simultaneously amplifying harmful capabilities while obscuring attribution, enabling attacks with reduced risk of identification
Risk Characteristics
Asymmetric capability amplification: AI often provides greater capability amplification to attackers than to defenders. A small team with AI tools can potentially cause harm that previously required nation-state resources.
Dual-use challenge: The same AI capabilities useful for beneficial purposes (drug discovery, cybersecurity defense, content creation) can be repurposed for harmful ones (bioweapons, cyberattacks, deepfakes). This makes risk mitigation through capability restriction difficult without also limiting beneficial uses.
Evolving threat landscape: As AI capabilities advance, the set of feasible misuse attacks expands. Bioweapons risk depends on AI systems' ability to provide actionable synthesis guidance; cyberweapons risk scales with AI coding capability; deepfake quality improves with generative model advancement.