Longterm Wiki
Updated 2026-03-13

Misuse Risks (Overview)

Overview

Misuse risks arise when AI systems are deliberately used by humans for harmful purposes. Unlike accident risks (unintended failures) or structural risks (systemic effects), misuse risks involve intentional weaponization of AI capabilities. These risks are particularly concerning because AI can dramatically lower the barriers to conducting attacks that previously required specialized expertise, large organizations, or significant resources.

Weapons and Violence

AI lowering barriers to the development and deployment of weapons:

  • Bioweapons: AI systems assisting in the design, synthesis, or deployment of biological weapons agents
  • Cyberweapons: AI-powered offensive cyber capabilities including automated vulnerability discovery and exploit generation
  • Autonomous Weapons: Weapons systems that select and engage targets without meaningful human control

Information Manipulation

AI used to deceive, manipulate, or distort information:

  • Deepfakes: AI-generated synthetic media (video, audio, images) used for deception, blackmail, or manipulation
  • Disinformation: AI-generated or AI-amplified false information campaigns at scale
  • AI-Powered Fraud: AI used for financial fraud, impersonation, and social engineering at unprecedented scale

Surveillance and Control

AI enabling surveillance and authoritarian control:

  • Mass Surveillance: AI dramatically expanding the scope and effectiveness of surveillance systems
  • Authoritarian Tools: AI capabilities used to monitor, control, and suppress populations

Emerging Threats

  • AI-Enabled Untraceable Misuse: AI amplifying harmful capabilities while obscuring attribution, enabling attacks with reduced risk of identification

Risk Characteristics

Asymmetric capability amplification: AI often amplifies attacker capabilities more than defender capabilities. A small team with AI tools can potentially cause harm that previously required nation-state resources.

Dual-use challenge: The same AI capabilities useful for beneficial purposes (drug discovery, cybersecurity defense, content creation) can be repurposed for harmful ones (bioweapons, cyberattacks, deepfakes). This makes risk mitigation through capability restriction difficult without also limiting beneficial uses.

Evolving threat landscape: As AI capabilities advance, the set of feasible misuse attacks expands. Bioweapons risk depends on AI systems' ability to provide actionable synthesis guidance; cyberweapons risk scales with AI coding capability; deepfake quality improves with generative model advancement.

Mitigation Approaches

Key approaches to misuse risk reduction include:

  • Pre-deployment safety evaluations and red-teaming
  • Output filtering and refusal training
  • Know-your-customer requirements for AI API access
  • DNA synthesis screening for bioweapons risks
  • International coordination on AI-enabled weapons
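Output filtering can be layered on top of refusal training as a last-line check at inference time. A minimal toy sketch in Python, assuming a regex denylist; real deployments use trained safety classifiers rather than patterns, and the category names and patterns here are purely illustrative:

```python
import re

# Toy denylist patterns for clearly hazardous request categories.
# Illustrative only: a production system would use trained
# classifiers, not hand-written regexes.
HAZARD_PATTERNS = {
    "bioweapons": re.compile(
        r"\b(synthesi[sz]e|weaponi[sz]e)\b.*\bpathogen\b", re.IGNORECASE
    ),
    "cyberweapons": re.compile(
        r"\bexploit\b.*\bzero[- ]day\b", re.IGNORECASE
    ),
}

REFUSAL = "I can't help with that request."

def filter_output(prompt: str, draft_response: str) -> str:
    """Return the draft response unchanged, or a refusal message
    if the prompt matches any hazardous category."""
    for category, pattern in HAZARD_PATTERNS.items():
        if pattern.search(prompt):
            return REFUSAL
    return draft_response
```

The dual-use challenge shows up directly in such filters: patterns broad enough to catch misuse also block legitimate queries (e.g. biosecurity research), which is why capability restriction alone is a blunt mitigation.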

Related Pages


Risks

  • AI Disinformation
  • Autonomous Weapons
  • Key Near-Term AI Risks
  • Deepfakes