
AI Cyber Damage: Bounding the Tail

Analysis

A probability-weighted synthesis answering the question "How likely is AI-enabled cyber damage to exceed 10% of global GDP by year Y?", drawing on damage estimates, insurance market signals, the tail-risk catalog, and the incident base rate.

Model Type: Synthesis / Probability Estimate
Target Risk: Cyber Offense / Cyberweapons
Headline Probability: 5-20% (>10% GDP cyber damage in any year through 2035)

Related risks: Cyberweapons Risk · Catastrophic Cyber Tail Risk
Related analyses: AI Cyber Damage Estimates · Cyber Insurance Market Signals

The question

A natural question for AI safety policy: given the rise of AI-enhanced cyber capabilities, how likely is it that AI-enabled cyber damage exceeds 10% of global GDP by year Y? This page is the synthesis answer, drawing from four prior pages in the QUA-715 epic:

  • AI Cyber Damage Estimates — methodology comparison across the major damage-estimate sources
  • Cyber Insurance Market Signals — revealed-preference market evidence on insurable damage
  • Catastrophic Cyber Tail Risk — catalog of systemic single points of failure
  • The seeded cyber incident event entities (NotPetya, WannaCry, SolarWinds, Colonial Pipeline, CDK Global, Change Healthcare, Anthropic-disclosed Sept 2025) for empirical anchoring

Decomposition

The 10%-of-GDP question decomposes into three sub-questions:

  1. P(annual baseline exceeds 10% GDP without a catastrophic single event). Could ordinary cyber damage at scale, growing at observed rates, reach this threshold by linear scaling alone?
  2. P(single catastrophic event exceeds 10% GDP). Could one event in a systemic single point of failure reach this threshold by itself?
  3. P(sustained AI-enabled escalation pushes annual damage ≥ 10% GDP by year Y). Could the trajectory of AI-enabled offense, even without a single catastrophic event, push annual aggregate damage above the threshold over time?

Each sub-question has different evidence and different bounds.
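
Under a simplifying independence assumption, the three routes combine as the complement of the product of their complements:

P(damage > 10% GDP by year Y) ≈ 1 − (1 − p1)(1 − p2)(1 − p3)

where pi is the probability that route i crosses the threshold by year Y. The routes overlap in practice (escalation scenarios include catastrophic events), so this should be read as an upper-bound heuristic rather than an identity.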

Sub-question 1: Annual baseline

P(annual baseline >10% GDP without catastrophic event) ≈ 1-3% by 2030, rising to 5-15% by 2040.

Reasoning:

  • The methodology-credible upper bound for current annual damage is the Cybersecurity Ventures top-down forecast: $10.5T/yr in 2025 (~9.5% of global GDP). However, that figure includes productivity loss, IP theft, and reputational harm, categories largely already captured in national accounts. Methodologically explicit aggregates (Anderson 2012/2019, Romanosky 2016) put aggregate damage in the tens to low hundreds of billions of dollars, well below 1% of GDP.
  • AI-enabled offense most plausibly multiplies the volume and success rate of attacks (phishing, vulnerability discovery, exploit generation), but the September 2025 Anthropic-disclosed campaign demonstrated that even when an AI agent autonomously executes 80-90% of tactical operations at machine speed (multiple requests per second), only a small number of confirmed breaches resulted from ~30 attempted targets. The bottleneck is target validation and post-exploitation persistence, not raw operation speed; aggregate damage scales sub-linearly with attempted-operation speed.
  • Cybersecurity Ventures' own 2025 revision dropped the assumed growth rate from 15%/yr to 2.5%/yr, a tacit admission that linear extrapolation overshoots. Even at 2.5%/yr from $10.5T (2025), the trajectory crosses 10% of global GDP only if global GDP itself grows slower than projected (or measurement scope expands); a numeric sketch follows this list.
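
A quick check of that claim: the $10.5T 2025 base and 2.5%/yr growth are Cybersecurity Ventures figures; the ≈$110T 2025 global GDP base and 3%/yr nominal growth are illustrative assumptions, not figures from this page's sources.

```python
# Illustrative extrapolation: damage base/growth per Cybersecurity Ventures;
# the GDP base (~$110T in 2025) and 3%/yr growth are assumptions.
damage, gdp = 10.5, 110.0  # trillions USD, 2025
for year in range(2025, 2041):
    if year % 5 == 0:
        print(f"{year}: damage ${damage:.1f}T / GDP ${gdp:.0f}T = {damage / gdp:.1%}")
    damage *= 1.025
    gdp *= 1.03
# The ratio starts at ~9.5% and declines every year, because GDP grows
# faster than damage under these assumptions; 10% is never crossed.
```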

Confidence: medium. The estimate depends heavily on whether the loss-of-productivity component continues to be counted as a separate damage category or is recognized as already captured in GDP itself.

Sub-question 2: Single catastrophic event

P(single event exceeds 10% GDP, in any year through 2035) ≈ 5-15%.

Reasoning, drawing on Catastrophic Cyber Tail Risk:

  • Of 8 cataloged systemic single points of failure, only payment systems approach the 10% GDP threshold within E2519's cited range. The aggressive high-end estimate for a multi-day SWIFT or Fedwire disruption is $500B-$10T (0.5-9% of global GDP) — even the upper bound is just below the 10% line, not above it.
  • Three other systems (hyperscaler cloud, ICS, OS/browser monoculture) reach the 10% threshold only under above-aggressive assumptions that exceed E2519's table bounds: simultaneous multi-target attacks, multi-week duration, and full data corruption rather than mere unavailability.
  • Historical base rate: the largest single events to date are 2-3 orders of magnitude below the threshold. NotPetya caused ≈$10B in damage (White House attribution per WIRED); SolarWinds remediation was projected at up to $100B across 18,000 affected entities (Rendition Infosec / Roll Call, 2021), a forward-looking cleanup-spend projection rather than realized damages, against documented insured losses of ≈$90M and per-company costs averaging $12M; Change Healthcare cost UnitedHealth ≈$2.87B+ in direct costs (FY2024).
  • The cyber insurance market signal is consistent with this estimate. The market caps insurable cyber loss at ≈$20-46B per Munich Re's 200-year return period estimate (a probability-weighted figure). This says insurers do not believe events ~50× larger are feasible to price and underwrite; it does not say they believe such events are impossible. The market handles such events by exclusion (war / state-actor exclusions, capacity ceilings), shifting them to the sovereign-default / war-risk bucket; see the return-period sketch after this list.
  • AI-enabled offense most plausibly affects this probability via the attribution axis — making attribution harder reduces deterrence and raises the probability that state-sponsored actors attempt catastrophic actions against another state's infrastructure.
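
For intuition on how an insurance return period maps onto this page's "any year through 2035" framing, a short sketch (assuming losses are independent across years, which real cyber loss years only roughly satisfy):

```python
# A 200-year return period implies an annual exceedance probability of
# 1/200 = 0.5% for the ~$20-46B insured-loss tail that Munich Re prices.
p_annual = 1 / 200
p_window = 1 - (1 - p_annual) ** 10  # 10-year window, 2026-2035
print(f"P(at least one 200-year loss in 10 years) = {p_window:.1%}")  # ~4.9%
# Note this is the probability of the *insured* ~$20-46B tail event, still
# roughly two orders of magnitude below the 10%-of-GDP threshold.
```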

Confidence: low-medium. The estimate is dominated by a small number of historically rare event types where even one occurrence would dramatically update the prior.

Sub-question 3: Sustained escalation trajectory

P(annual ≥ 10% GDP by 2030) ≈ 1-3%; by 2035 ≈ 3-8%; by 2040 ≈ 5-15%.

Reasoning:

  • AI capability trajectory matters. Autonomous Cyber Attack Timeline (E87) projects Level 4 (full autonomy) capability between 2029 and 2033. Past Level 4, the offense-defense balance shifts more sharply (per E88), and aggregate damages plausibly grow faster.
  • Even at Level 4, with damage projections of $3-5T/yr (per E87), this is only ~2-4% of projected 2030 global GDP (≈$140T), well below the 10% threshold (arithmetic shown after this list). Crossing the threshold requires either capability outrunning E87's projections, or the long tail of sub-question 2 manifesting in the same year as the high-aggregate scenario.
  • Defense investment is the largest source of variance. Per E88, defensive investment is currently underfunded by 3-10x relative to offense; a coordinated defensive AI investment program (analogous to NIST CSF or DARPA initiatives) could meaningfully reduce both the linear scaling rate and the probability of catastrophic single events.
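
The ratio arithmetic behind the E87 comparison above, using the page's ≈$140T projected 2030 GDP:

```python
# E87 Level 4 damage projections vs. projected 2030 global GDP (~$140T).
gdp_2030 = 140.0  # trillions USD
for damage in (3.0, 5.0):  # E87's $3-5T/yr range
    print(f"${damage:.0f}T / ${gdp_2030:.0f}T = {damage / gdp_2030:.1%}")
# -> 2.1% and 3.6%: below the 10% threshold even at the top of the range.
```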

Confidence: low. The estimate compounds AI capability trajectory uncertainty, defense investment uncertainty, and AI-attribution-on-state-conflict uncertainty.

Combined estimate

Headline answer: P(AI-enabled cyber damage exceeds 10% of global GDP in any single year through 2035) ≈ 5-20%, low-to-medium confidence.

This is dominated by sub-question 2 (catastrophic single event) for near-term (through 2030), then increasingly augmented by sub-question 3 (sustained escalation) for the late 2030s. Sub-question 1 (linear baseline scaling alone) does not plausibly cross the threshold.
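
As a consistency check, combining the sub-question ranges naively, treating the routes as independent (which overstates the union, since the routes overlap), brackets the headline range:

```python
# Naive independence combination of the through-2035 sub-question estimates.
p1 = (0.01, 0.03)  # baseline route (2030 figure used as a rough proxy)
p2 = (0.05, 0.15)  # single catastrophic event, any year through 2035
p3 = (0.03, 0.08)  # sustained escalation reaching the threshold by 2035

def union(*ps):
    """P(at least one route crosses the threshold), assuming independence."""
    out = 1.0
    for p in ps:
        out *= 1 - p
    return 1 - out

lo, hi = union(p1[0], p2[0], p3[0]), union(p1[1], p2[1], p3[1])
print(f"naive union: {lo:.0%}-{hi:.0%}")  # ~9%-24%
# The headline 5-20% sits somewhat below this naive union, consistent with
# route overlap and with sub-question 1 contributing little mass.
```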

The estimate is materially uncertain. The 5-20% spread reflects genuine disagreement on both the catastrophic-event probability (5-15% over the 10-year window) and the AI capability trajectory (E87 conservative vs aggressive scenarios differ by 2-3 years and ~2x in projected damage). Confidence is bounded by the lowest-confidence input: sub-question 3 carries low confidence (compounded AI-trajectory + defense-investment + AI-attribution uncertainty), so the headline range should be read as low-to-medium confidence overall, not as actuarial.

What would meaningfully update the estimate

  • Catastrophic event probability
      Up: a successful attack producing >$100B in single-event damage (i.e., 10x larger than NotPetya); growing capability disparity at frontier AI labs; failure of CISA / FBI critical-infrastructure designation.
      Down: major coordination success on cyber arms control; demonstrated effectiveness of AI defense at machine speed; visible capability plateau in offensive AI.
  • Linear baseline scaling
      Up: Cybersecurity Ventures revising upward (instead of the recent 15% → 2.5% revision); IBM breach cost growth accelerating; new AI-orchestrated breaches at 100x prior scale.
      Down: continued downward methodology revision by Cybersecurity Ventures; flattening of breach cost trends; plateau in AI-orchestrated capability.
  • Capability trajectory (E87)
      Up: earlier-than-projected Level 4 autonomous capability; documented zero-day discovery by autonomous agents.
      Down: demonstrated bottleneck in Level 4 capability; defensive AI matching offensive AI at machine speed.

Cruxes the estimate is sensitive to

  • C1: AI offense multiplier is ≈2x rather than ~10x
      If true: E87 conservative scenario; estimate stays at ~5%.
      If false: E87 aggressive scenario; estimate moves to ≈15-20%.
  • C2: Largest plausible single-event loss today
      If $100-500B is the cap: estimate stays low.
      If $1T+ events are demonstrated: estimate moves to ≈20-30%.
  • C3: Annual baseline (which estimates to trust)
      If the Anderson/Romanosky methodology remains dominant: estimate stays low.
      If the Cybersecurity Ventures methodology becomes accepted as canonical: estimate moves up via baseline expansion.
  • C4: Insurability of catastrophic events
      If true: the market correctly prices the tail at <$50B, and the sovereign backstop is adequate.
      If false: the market is systematically under-pricing the tail, and uninsured exposure is much larger than the market believes.
  • C5: Linear vs. cascade compounding
      If linear scaling dominates: estimate stays at ≈5%.
      If cascade scenarios (especially payment systems) drive the estimate: estimate moves to ≈15-25%.

Comparison with adjacent published estimates

No published source uses the "P(>10% global GDP through 2035)" framing for other AI risk categories; the published literature uses different thresholds (existential, transformative, full disempowerment) and different time horizons (typically 2070 or 2100). Direct numerical comparison would require reframing those estimates onto this page's threshold, a reframing the cited sources themselves do not perform. The closest adjacent estimates from the literature are:

  • AI-enabled bioweapons. Toby Ord, The Precipice (2020), estimates ≈3% (1/30) probability of existential catastrophe from engineered pandemics this century — framed as existential, not 10%-GDP.1 RAND (RRA2977-2, 2024) found no statistically significant uplift from current LLMs for biological-attack viability.
  • AI alignment failure / disempowerment. Joseph Carlsmith, "Is Power-Seeking AI an Existential Risk?" (2021/2022), estimates >10% probability of full disempowerment by 2070 — framed as full disempowerment over a much longer window than 2035. Toby Ord estimates ≈10% existential risk from unaligned AI this century (also framed existentially / century-scale).1
  • AI labor-market impact. Goldman Sachs (2023) projects +7% global GDP from AI productivity over a decade; Acemoglu (2024) estimates +1.1% TFP over a decade. The published literature is overwhelmingly about productivity gains, not GDP destruction; no published source places a probability on net negative GDP impact from labor displacement on the 10%-by-2035 framing.

Because none of these estimates use the same threshold or time horizon as the headline cyber estimate, this page does not attempt a side-by-side numerical comparison. The qualitative picture is that cyber is among the more near-term, observed-and-measurable AI risks, while bioweapons and alignment failure are framed in the literature as longer-horizon catastrophic / existential outcomes. Reframing those estimates onto a 2035 / 10%-GDP basis would be a separate analysis, not a citation.

Limitations

  • Probability estimates are illustrative, not actuarial. The 5-20% headline is an attempt at calibrated reasoning, not a model output. Treat ranges as wide.
  • The 10% GDP threshold is arbitrary. Different policy thresholds (1%, 3%, 5%) produce materially different probability estimates. The 10% threshold corresponds roughly to "events large enough to require sovereign-scale response" rather than market-clearing recovery.
  • No published source uses the same threshold + horizon framing. Numerical comparisons with bio, alignment, or labor-displacement risk would require reframing those estimates onto this page's basis (10% GDP / 2035) — work the cited sources do not perform. The "Comparison with adjacent published estimates" section deliberately avoids a side-by-side numerical table for this reason.
  • The synthesis treats the four prior pages as authoritative. Errors in damage methodology, insurance market interpretation, or tail-risk cataloging propagate. Cross-checks recommended.
  • AI offense / defense balance assumes current trajectory. Discontinuities (AI breakthroughs, defensive AI breakthroughs, international agreements, large-scale war) could move estimates by 2-3x in either direction.
  • No discount rate applied. Damage in 2040 is treated as comparable to damage in 2027.

Conclusions for downstream pages

The headline estimate (5-20% probability of >10% GDP cyber damage in any year through 2035, low-to-medium confidence) implies several positions for the wiki to take:

  1. Cyber damage is a near-term, observed-and-measurable AI risk with a tail dominated by single-event scenarios in payment systems and other systemic chokepoints. It does not require speculation about transformative AI capability to motivate concern.
  2. The market signal does not invalidate the threshold-event scenario — it confirms that such events are uninsurable, which is consistent with their being plausible but rare.
  3. The most actionable interventions are infrastructure-level: payment system resilience, hyperscaler diversification, ICS air-gapping, healthcare clearinghouse competition, OS/browser diversity. These are the cascade-prevention levers that bound the catastrophic single-event probability.
  4. Defensive AI investment is a high-leverage policy lever — the 3-10x underfunding gap (per E88) means modest increases in defensive AI investment plausibly halve the headline estimate.

For continuing analysis, see the QUA-715 epic and its child issues. The wiki page Cyber Offense (E82) is the entry point for readers exploring this domain.

Sources

This page is a synthesis; primary sources are cited on the four prior pages:

  • AI Cyber Damage Estimates
  • Cyber Insurance Market Signals
  • Catastrophic Cyber Tail Risk
  • Cyberweapons (E86), E87, E88

Plus the cyber incident event entities (NotPetya / WannaCry / SolarWinds / Colonial / CDK / Change Healthcare / Anthropic-disclosed) for empirical anchoring.

Footnotes

  1. Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Bloomsbury, 2020. See Chapter 5 estimates.

References

  1. Anthropic, disclosure of the September 2025 espionage campaign. Anthropic reports detecting a sophisticated September 2025 espionage campaign in which a suspected Chinese state-sponsored group weaponized Claude Code as an autonomous agent to attack roughly thirty global targets, including tech companies, financial institutions, and government agencies. This is described as the first documented large-scale cyberattack executed without substantial human intervention, leveraging AI capabilities in intelligence, agency, and tool use. Anthropic responded by banning accounts, notifying victims, coordinating with authorities, and expanding detection capabilities.

  2. RAND Corporation study. RAND Corporation, 2024. This research report examines the risk of AI systems providing meaningful uplift to actors seeking to develop biological weapons, focusing on how to assess capability thresholds and decompose the problem for evaluation purposes. It likely provides a framework for analyzing when AI crosses dangerous capability boundaries in the bioweapons domain and how to structure risk assessments accordingly.

  3. Carlsmith, Joseph. "Is Power-Seeking AI an Existential Risk?" arXiv, 2022. This report examines the core argument for existential risk from misaligned AI by presenting two main components: first, a backdrop picture establishing that intelligent agency is an extremely powerful force and that creating superintelligent agents poses significant risks, particularly because misaligned agents would have instrumental incentives to seek power over humans; second, a detailed six-premise argument evaluating whether creating such agents would lead to existential catastrophe by 2070. The work provides a structured analysis of why power-seeking behavior in advanced AI systems represents a fundamental existential concern.
