Longterm Wiki
Updated 2026-03-13
AI-Powered Fraud

Risk

Comprehensive reference on AI-enabled fraud covering technical pipelines, case studies, and countermeasures, anchored by FBI IC3 2024 data ($16.6B total reported losses, +33% YoY). The AI-specific share of losses is not disaggregated in official statistics, and unverified projections have been removed. Detection technology shows severe generalization gaps (~50% AUC drop on real-world deepfakes vs. benchmarks), and human detection is barely above chance (~55–60%).

Category: Misuse Risk
Severity: High
Likelihood: Very High
Timeframe: 2025
Maturity: Growing
Status: Rapidly growing
Key Risk: Scale and personalization
Related Risks: Deepfakes · AI Disinformation
4.5k words · 6 backlinks

Overview

AI tools have reduced the cost and skill required to conduct certain categories of fraud, particularly impersonation-based attacks. Traditional fraud required manual effort for each target; current AI tools allow personalized attacks to be constructed at much larger scale. Voice cloning services can generate convincing speech from short audio samples, large language models can generate tailored phishing messages, and video synthesis tools enable real-time impersonation in virtual meetings.

The FBI's Internet Crime Complaint Center (IC3) recorded $16.6 billion in total reported fraud and cybercrime losses in 2024, a 33% increase from 2023, with cyber-enabled fraud accounting for approximately 83% of all losses.1 Investment fraud was the costliest single category at $6.57 billion, followed by Business Email Compromise (BEC) at $2.77 billion.1 These figures reflect only reported incidents; the actual volume of losses is likely higher due to under-reporting.
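As a consistency check, the +33% growth rate implies a 2023 base of roughly $12.5 billion (16.6 / 1.33), consistent with the total the IC3 reported for 2023. A trivial sketch of that back-calculation (the function name is illustrative):

```python
def implied_prior_year(total_current: float, yoy_growth: float) -> float:
    """Back out the prior-year base implied by a current total and its
    year-over-year growth rate: base * (1 + growth) = current."""
    return total_current / (1 + yoy_growth)

# $16.6B at +33% YoY implies a prior-year base of about $12.5B.
base_2023 = implied_prior_year(16.6, 0.33)
```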

A notable documented case is the February 2024 Arup incident, in which an employee at the firm's Hong Kong office was deceived into transferring $25.6 million after a video call in which every participant — including a synthesized version of the company's CFO — was AI-generated.2 This case, described by Arup's CIO as "technology-enhanced social engineering," illustrates how deepfake tools can be used to defeat informal trust cues that organizations have historically relied upon for internal authentication.3

Important caveats on the data: Fraud statistics carry significant methodological limitations. IC3 figures reflect only reported incidents; regulatory bodies estimate actual losses are substantially higher, though precise multipliers are uncertain. Definitions of "AI-enabled" fraud vary across studies, and some cited loss figures conflate AI-assisted and traditional fraud. Growth rates should be interpreted with caution.

How AI Changes Fraud: Technical Pipeline

Understanding how AI-assisted fraud works helps clarify both the threat and the countermeasures. A typical AI-assisted impersonation attack involves several stages:

1. Target selection and reconnaissance. AI tools can process social media profiles, corporate websites, LinkedIn, and public records to identify high-value targets and build detailed profiles of their relationships, communication patterns, and financial authority. The Alan Turing Institute's Centre for Emerging Technology and Security (CETaS) documents how AI-powered systems can automate this profiling — a process that previously required extensive manual research — enabling scammers to identify and prioritize vulnerable targets at scale.4

2. Content harvesting. Voice cloning requires audio samples of the target or the person being impersonated. These can be drawn from public speeches, earnings calls, YouTube videos, podcasts, or voicemail greetings. Commercial voice cloning platforms such as ElevenLabs can generate convincing output from minimal input; the fake Biden robocall documented in 2024 reportedly cost approximately $1 and took 20 minutes to produce.5

3. Synthetic media generation. Depending on the attack, this may involve generating audio (voice cloning), text (LLM-written phishing emails or chat messages), or video (face-swap or full-synthesis deepfakes). Each of these technologies has matured substantially since 2020, and consumer-grade hardware is sufficient for basic attacks.

4. Attack execution. Delivery methods include phone calls with real-time voice conversion, email or messaging platforms with LLM-generated content, and video conference calls using live deepfake technology. The Arup case used the latter: all participants in a video call appeared to be real executives, and the employee's initial doubts were overcome by the apparent realism of the interaction.3

5. Money movement. Successful attacks typically involve rapid fund transfers to accounts that quickly move funds through cryptocurrency, wire transfers, or money mule networks, reducing recovery prospects. The FBI's Recovery Asset Team froze $561.6 million in 2024, a fraction of reported losses.1
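For tabletop threat-modeling, the five stages above can be paired with the defensive controls discussed later on this page. A minimal Python sketch (the stage-to-control mapping is illustrative, not an established taxonomy):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Stage:
    name: str
    controls: tuple  # candidate defenses at this stage (illustrative)


# The five-stage pipeline described above, paired with where defenses can bite.
PIPELINE = (
    Stage("target selection and reconnaissance",
          ("limit public exposure of org charts", "monitor data-broker listings")),
    Stage("content harvesting",
          ("restrict public audio/video of executives",)),
    Stage("synthetic media generation",
          ("provenance watermarking", "platform-side abuse detection")),
    Stage("attack execution",
          ("out-of-band verification", "code words", "deepfake awareness training")),
    Stage("money movement",
          ("dual authorization", "time delays", "rapid freeze requests")),
)


def controls_for(stage_name: str) -> tuple:
    """Return the candidate controls for a named stage, or an empty tuple."""
    for stage in PIPELINE:
        if stage.name == stage_name:
            return stage.controls
    return ()
```

A useful property of laying the pipeline out this way is that it makes clear the later stages (execution, money movement) are where organizations have the most direct control.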

Dual-use context: The same underlying technologies — voice synthesis, video manipulation, LLM text generation — have legitimate applications in accessibility tools, virtual assistants, film production, and customer service. This dual-use character complicates regulatory responses and means that restricting these tools involves tradeoffs with legitimate uses.

Technical Capabilities and Attack Vectors

Voice Cloning Technology

Voice cloning tools have become widely accessible. Commercial platforms advertise cloning from short audio samples, and the resulting audio can be used in pre-recorded messages or, with additional processing, in real-time phone conversations.

Detection limitations: A peer-reviewed study published in Nature Scientific Reports (2025) found that human participants correctly identified a voice as AI-generated only about 60% of the time — barely above chance — while perceiving AI-generated voices as matching their real counterparts approximately 80% of the time.5 These results held even with commercially available tools. A meta-analysis of 56 academic papers published in Computers in Human Behavior Reports (November 2024) found that overall human deepfake detection accuracy was 55.54% (95% CI [48.87, 62.10]), not significantly above chance.6

Automated detection systems face a different but related problem: generalization. The ASVspoof challenge — the largest international competition in spoofed speech detection — found that state-of-the-art models achieving under 1% error rates on controlled benchmark datasets showed error rates above 13% on out-of-domain data.7 A 2025 benchmark study (Deepfake-Eval-2024) found that open-source detection models experienced an average drop in AUC of approximately 48% for audio when evaluated on real-world deepfakes collected in 2024, compared to performance on academic benchmarks.8
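The metrics behind these findings (equal error rate and AUC) can be computed directly from detector scores. A minimal numpy sketch, with the relative-drop calculation shown as one plausible reading of the reported "~50% AUC drop" (all function names and data here are illustrative, not from the cited benchmarks):

```python
import numpy as np


def equal_error_rate(bonafide: np.ndarray, spoof: np.ndarray) -> float:
    """EER: the operating point where false-acceptance rate (spoof scored
    as real) equals false-rejection rate (real scored as spoof)."""
    thresholds = np.sort(np.concatenate([bonafide, spoof]))
    far = np.array([(spoof >= t).mean() for t in thresholds])
    frr = np.array([(bonafide < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return float((far[i] + frr[i]) / 2)


def auc(bonafide: np.ndarray, spoof: np.ndarray) -> float:
    """AUC via the Mann-Whitney statistic: the probability that a random
    bona fide sample scores higher than a random spoof sample."""
    wins = sum((b > s) + 0.5 * (b == s) for b in bonafide for s in spoof)
    return float(wins / (len(bonafide) * len(spoof)))


def relative_auc_drop(auc_benchmark: float, auc_wild: float) -> float:
    """Relative degradation between benchmark and in-the-wild performance."""
    return (auc_benchmark - auc_wild) / auc_benchmark
```

For example, a model at 0.98 AUC on an academic benchmark that falls to 0.49 on in-the-wild content has a 50% relative drop, i.e. it has degraded to roughly coin-flip discrimination.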

Key developments:

  • Commercial platforms enable high-quality voice cloning with minimal audio input
  • Real-time voice conversion allows live phone conversations in another person's voice
  • Multi-language support enables attacks across linguistic and geographic boundaries

Deepfake Video Capabilities

Video deepfake technology enables real-time manipulation in video call contexts, though producing high-quality, real-time deepfakes remains more technically demanding than voice cloning alone.

  • Live video calls: Impersonate executives during virtual meetings
  • Multi-person synthesis: Create entire fake meeting environments, as documented in the Arup case2
  • Generalization gap: A 2025 benchmark found video detection models experienced approximately 50% drops in AUC when evaluated on real-world 2024 deepfakes versus academic datasets8
  • Research published in Nature Communications (December 2024) by MIT Media Lab researchers found that deepfakes using state-of-the-art text-to-speech audio were harder for participants to detect than deepfakes using voice actor audio, and that audio-visual information enabled more accurate detection than text alone — suggesting humans rely on how something is said rather than what is said.9

Personalized Phishing at Scale

| Technology | Capability | Scale Potential | Detection Rate |
|---|---|---|---|
| LLM-generated emails | Contextual emails | Millions/day | 15–25% by filters |
| Social scraping | Personal details | Automated | Limited |
| Template variation | Unique messages | High | Very low |
| Multi-language | Global targeting | 100+ languages | Varies |

LLMs remove traditional warning signs of phishing (poor grammar, generic content, contradictory backstories) and allow operators to manage more targets simultaneously. However, no published peer-reviewed study has directly quantified the share of phishing attacks that are AI-generated versus human-composed, making precise comparisons difficult.10

Synthetic Identity Fraud

Distinct from impersonation attacks (which target real people), synthetic identity fraud involves creating entirely fictional identities by combining real and fabricated personal data. AI tools, particularly generative models, have expanded the capabilities available for this vector.

Synthetic identities — constructed from a mix of legitimate personal data and fabricated details — can be used to open credit accounts, apply for government benefits, intercept tax returns, or bypass Know Your Customer (KYC) verification processes.11 INTERPOL's Innovation Centre notes that AI-generated synthetic IDs can be used to bypass online liveness checks, compromising the reliability of identity verification procedures that financial institutions rely upon.11

Unlike impersonation fraud, synthetic identity fraud does not require an existing target's voice or face; the fraudulent persona is entirely constructed. This makes it resistant to defenses that focus on verifying whether a specific person's biometrics match — there is no "real" version to compare against.

Data limitations: Official statistics do not yet consistently distinguish synthetic identity fraud from other identity fraud categories, making loss estimates for this specific subcategory unreliable.

Major Case Studies and Attack Patterns

High-Value Business Attacks

| Case | Amount | Method | Outcome | Key Learning |
|---|---|---|---|---|
| Arup Engineering (Feb 2024) | $25.6M | Deepfake video meeting — all participants synthesized | Funds transferred | No systems compromised; only people were deceived3 |
| Ferrari | Attempted | Voice cloning + WhatsApp | Thwarted | Personal questions the AI could not answer defeated the attack |
| WPP | Attempted | Teams meeting + voice clone + YouTube footage | Thwarted | Employee suspicion triggered verification12 |
| Hong Kong Bank (2020) | $35M | Voice cloning | Funds transferred | Early documented case of audio-only impersonation |

The PRMIA risk case study of the Arup incident identifies it as one of the first publicly confirmed cases of real-time deepfake impersonation driving a major corporate fraud, and characterizes the attack as demonstrating that "trust itself becomes an attack surface."13 The University of Nebraska Omaha's National Counterterrorism Innovation, Technology, and Education Center (NCITE) similarly documents it as illustrating how deepfake technology disrupts organizational trust and authority structures.12

Selection bias note: The cases documented above reflect high-profile, high-value attacks that became public. No systematic data exists on the base rate of AI-assisted fraud attempts, how many fail early in the process, or the ratio of successful to unsuccessful attacks. High-profile cases may not be representative of the broader distribution.

Attack Pattern Analysis

Business Email Compromise Evolution:

  • Traditional BEC: Template emails, basic impersonation
  • AI-enhanced BEC: Personalized content, contextual awareness, grammatically correct multilingual messages
  • FBI data: BEC losses totaled $2.77 billion in 20241

Voice Phishing Sophistication:

  • Phase 1 (2019–2021): Basic voice cloning, pre-recorded messages
  • Phase 2 (2022–2023): Real-time generation, conversational AI
  • Phase 3 (2024+): Multi-modal attacks combining voice, video, and text

Romance and Investment Fraud: The FTC reported over 64,000 romance scam complaints in 2023 with losses totaling $1.1 billion; the FBI received 17,823 romance scam complaints costing victims nearly $653 million in the same year.14 Investment fraud (including "pig butchering" schemes that merge romance and investment fraud) reached $6.57 billion in losses in 2024 per FBI data, the highest of any fraud category.1 The Alan Turing Institute's CETaS documents how LLM-generated text facilitates scalable engagement, while deepfake media adds apparent authenticity.4

Critically, no published peer-reviewed study directly quantifies the share of romance or investment scams that are AI-enabled versus human-operated.10 Official statistics from FBI and FTC do not yet disaggregate AI-assisted from traditional methods in these categories.

Financial Impact and Documented Losses

2024 FBI IC3 Data (Authoritative)

The FBI IC3 2024 Annual Report, released April 24, 2025, provides the most authoritative available data on reported cybercrime losses in the United States:1

| Fraud Category | 2024 Reported Losses | Year-over-Year Change |
|---|---|---|
| Investment fraud (incl. crypto) | $6.57 billion | +47% (crypto component) |
| Business Email Compromise | $2.77 billion | Increase from 2023 |
| Tech support fraud | $1.46 billion | |
| Elder fraud (60+) | $4.8–4.9 billion | +43% |
| All crypto-related losses | $9.32 billion | +66% |
| Total reported losses | $16.6 billion | +33% |

Total complaints: 859,532. The FBI's Recovery Asset Team froze $561.6 million, representing a small fraction of reported losses.

Important methodological notes:

  • These are reported losses only; actual losses are likely higher due to under-reporting, stigma, and victims not recognizing fraud
  • The IC3 data covers all cybercrime, not only AI-assisted fraud; the AI-assisted share is not disaggregated in official reporting
  • Figures reflect US complaints; global loss estimates from industry sources vary in methodology and should be treated with greater uncertainty

Regional Data

| Region | Available Data | Notes |
|---|---|---|
| United States | $16.6B total reported (2024) | FBI IC3 — authoritative1 |
| Global deepfake fraud | Fourfold increase in detected fraud deepfakes, 2023–2024 | Per Deepfake-Eval-2024 benchmark8 |
| Korea/China voice phishing | USD 1.1B attributed to one syndicate (INTERPOL Operation HAECHI V) | One operation, July–Nov 202415 |
| Europe | No authoritative aggregate figure found | Estimates from industry sources vary |

Note on figures removed: The original version of this page cited a "$25B global voice-based fraud" figure and a "€5.1B Europe estimate" without sources. These figures could not be verified against named primary sources and have been removed pending sourcing. The "233% growth projection" and "$40B by 2027" figures similarly lacked traceable primary sources and are not reproduced here as forward projections.

Countermeasures and Defense Strategies

Technical Defenses

Detection technology limitations: A Springer Nature Artificial Intelligence Review paper (May 2024) identifies three categories of challenges for current deepfake detection: data challenges (unbalanced datasets), training challenges (computational requirements), and reliability challenges (overconfidence in detection methods). It concludes that current detection approaches are insufficient for real-world scenarios.16 The Deepfake-Eval-2024 benchmark found that commercial detection models outperform off-the-shelf open-source models, but none yet reach the accuracy of forensic analysts on real-world content.8

Real-time detection presents an additional challenge: state-of-the-art detection models (Transformers, hybrid architectures) are computationally expensive, making synchronous detection during an audio or video call technically difficult.17

| Approach | Notes on Effectiveness | Implementation Cost | Known Limitations |
|---|---|---|---|
| AI Detection | Performs well on benchmark datasets; performance degrades substantially on real-world content8 | High | Generalization gap; arms race dynamic |
| Multi-factor Auth (FIDO2) | Resistant to phishing and Adversary-in-the-Middle attacks by design18 | Medium | Overlay attacks can deceive users in ≈95% of cross-service attempts in lab conditions19 |
| Behavioral Analysis | Limited published real-world effectiveness data | High | False positive costs can be significant |
| Code Words / Out-of-band Verification | Low-cost; effective when followed consistently | Low | Depends on human compliance; no large-scale empirical studies of real-world effectiveness |
| PKI Digital Signing | Provides authentication, integrity, and non-repudiation for documents | Medium-High | Addresses document/email authenticity; does not address live audio/video impersonation |

On FIDO2 as a countermeasure: FIDO2 (WebAuthn) combines local authentication (biometrics or PIN) with public-key cryptography, storing only public keys on servers. Research published at ACM CCS 2023 provides empirical evaluation of real-world FIDO2 deployments and finds that compromised clients present practical threats when configurations are weak.18 Separately, research at ACM CCS 2024 identified a "FIDOLA" attack in which screen overlays can deceive users into approving fraudulent authentication requests; a user study found approximately 95.55% of cross-service attacks were approved when a screen overlay was presented.19 These findings illustrate that even strong cryptographic authentication has implementation-dependent weaknesses.
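The property that makes FIDO2 resistant to phishing is origin binding: the authenticator only answers challenges for the origin it actually sees, so a response captured on a look-alike site fails at the real server. A simplified, dependency-free sketch of that idea (real WebAuthn uses asymmetric signatures and attested credentials; the HMAC below is only a stand-in to keep the example runnable):

```python
import hashlib
import hmac
import secrets


class RelyingParty:
    """Toy server side of a challenge-response login. A simplified stand-in
    for a WebAuthn relying party, not a real FIDO2 implementation."""

    def __init__(self, origin: str):
        self.origin = origin
        self.registered = {}  # username -> key material shared at enrollment

    def register(self, username: str, key: bytes) -> None:
        self.registered[username] = key

    def new_challenge(self) -> bytes:
        # A fresh random challenge per login attempt prevents replay.
        return secrets.token_bytes(32)

    def verify(self, username: str, challenge: bytes, response: bytes) -> bool:
        key = self.registered.get(username)
        if key is None:
            return False
        # The server binds its OWN origin into the expected response, so a
        # response produced for a look-alike origin never verifies here.
        expected = hmac.new(key, challenge + self.origin.encode(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


def authenticator_sign(key: bytes, challenge: bytes, origin_seen: str) -> bytes:
    """Toy authenticator: it only answers for the origin it actually sees,
    which is what defeats classic phishing relays."""
    return hmac.new(key, challenge + origin_seen.encode(),
                    hashlib.sha256).digest()
```

A response generated while the user is on a phishing domain fails verification at the genuine origin even if relayed instantly, which is exactly the property overlay attacks such as FIDOLA try to work around at the UI layer rather than the cryptographic layer.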

On detection vendor claims: Published peer-reviewed benchmarks and vendor marketing claims for deepfake detection accuracy differ substantially. The analysis above draws on academic benchmarks rather than vendor-reported figures.

Organizational Protocols

Financial Controls:

  • Mandatory dual authorization for transfers above defined thresholds
  • Out-of-band verification for unusual requests (callback to a known number, not one provided in the request)
  • Time delays for large transactions
  • The PRMIA Arup case study recommends shifting organizational culture toward "verify first, then act" rather than automatic compliance with apparent authority13
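The financial controls above compose naturally into a single policy check. A minimal sketch, assuming illustrative thresholds (the dollar amounts, hold period, and names below are assumptions for the example, not from any cited guidance):

```python
from dataclasses import dataclass

DUAL_AUTH_THRESHOLD = 10_000   # illustrative: require two approvers above this
DELAY_THRESHOLD = 100_000      # illustrative: hold very large transfers
HOLD_HOURS = 24                # illustrative time delay


@dataclass
class TransferRequest:
    amount: float
    approvers: tuple = ()               # distinct employees who signed off
    verified_out_of_band: bool = False  # callback to a known-good number


def evaluate(req: TransferRequest):
    """Return (decision, reasons); decision is 'approve', 'hold', or 'reject'."""
    reasons = []
    if not req.verified_out_of_band:
        reasons.append("no out-of-band verification")
    if req.amount > DUAL_AUTH_THRESHOLD and len(set(req.approvers)) < 2:
        reasons.append("dual authorization required")
    if reasons:
        return "reject", reasons
    if req.amount > DELAY_THRESHOLD:
        return "hold", [f"time delay of {HOLD_HOURS}h before release"]
    return "approve", []
```

Under a policy like this, the Arup-scale transfer would have been rejected outright absent callback verification, and held for a day even with it, regardless of how convincing the video call appeared.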

Training and Awareness:

  • Regular deepfake awareness sessions, including simulated deepfake exercises
  • Incident reporting systems that remove stigma from employees who raise concerns
  • Executive protection protocols

Consumer and Individual Defenses

Most published guidance focuses on enterprise defenses. Individual-level defenses are less systematically studied, but practical measures include:

  • Pre-arranged family code words for verifying unexpected urgent requests, particularly relevant for grandparent scams targeting elderly populations
  • Skepticism toward urgency: Fraudulent calls frequently invoke time pressure to prevent verification; any request demanding immediate action should trigger additional verification
  • Independent verification: Hang up and call back using a number independently looked up, not one provided by the caller
  • Awareness of pig-butchering patterns: Investment opportunities introduced through romantic or social relationships, particularly involving cryptocurrency, warrant heightened scrutiny
  • Elderly populations are disproportionately targeted: FBI data shows those 60+ accounted for $4.8–4.9 billion in reported losses in 2024, a 43% year-over-year increase.1

Regulatory and Law Enforcement Responses

EU AI Act

The EU AI Act entered into force in August 2024, with phased implementation. Under Article 50, AI systems that generate synthetic content (including deepfakes) must mark outputs as artificially generated; companies must inform users when AI is used for emotion recognition or biometric categorization.20 Deepfakes fall under the "limited risk" category — they are not banned but are subject to transparency requirements.

Key implementation details:21 22

  • Prohibitions on unacceptable-risk AI systems enforceable from February 2, 2025
  • Penalties applicable from August 2, 2025 (except GPAI obligations, which take effect August 2, 2026)
  • Article 99 penalties: up to €35 million or 7% of worldwide annual turnover for the most serious violations
  • National market surveillance authorities conduct most compliance investigations; the EU does not directly investigate except in limited circumstances
  • As of mid-2025, no publicly documented enforcement cases or fines have been issued directly under the EU AI Act
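For the most serious violations, the Article 99 cap is generally read as whichever of the two figures is higher for large undertakings. A one-line sketch of that reading (a simplification of the Act's tiered penalty structure; the function name is illustrative):

```python
def max_fine_most_serious(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 99 fine for the most serious violations:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)
```

So a firm with EUR 1 billion in turnover faces a cap of EUR 70 million, while a small firm remains capped at the EUR 35 million floor.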

Implementation gaps: Spain has created a dedicated regulatory body; France and Germany had not yet established dedicated AI law enforcement bodies as of 2024.21 A first draft Code of Practice (early 2026) requires deepfakes to be disclosed clearly at the moment of first exposure.23

INTERPOL Operations

INTERPOL's Operation HAECHI V (July–November 2024) targeted seven types of cyber-enabled fraud across 40 countries and territories, resulting in 5,500+ arrests and seizures of over USD 400 million in assets.15 The operation targeted voice phishing, romance scams, online sextortion, investment fraud, illegal online gambling, BEC, and e-commerce fraud.

A Korean/Beijing coordinated action within Operation HAECHI V dismantled a voice phishing syndicate responsible for approximately USD 1.1 billion in losses affecting over 1,900 victims.15

INTERPOL's Innovation Centre also published a 2024 report, Beyond Illusions: Unmasking the Threat of Synthetic Media for Law Enforcement, providing guidance to member countries on synthetic media detection.11 The report notes that "AI is needed to detect AI" — that training models to identify inconsistencies is currently the primary detection approach — and identifies education gaps among agencies unfamiliar with synthetic media forensics.

Other Regulatory Developments

  • The NIST AI Risk Management Framework (AI RMF) addresses synthetic-content and authentication challenges in its guidance
  • California SB 942 (the AI Transparency Act, 2024) requires disclosure of AI-generated content
  • France adopted Article 226-8-1 amending the Penal Code to criminalize non-consensual sexual deepfakes, carrying up to 2 years' imprisonment and €60,000 fine23
  • The EU AI Act represents the most comprehensive regulatory framework to date, though enforcement is in early stages

Current State and Trajectory

Technology Development

The following table describes documented capabilities as of 2024. Forward projections beyond 2025 carry substantial uncertainty and should be treated as scenarios rather than forecasts. Entries for 2026+ reflect directional trends identified in research literature, not specific predictions.

| Period | Voice Cloning | Video Deepfakes | Detection State |
|---|---|---|---|
| 2024 (documented) | Cloning from short samples; real-time conversion available commercially | Real-time video manipulation in call contexts; quality varies | Academic models: severe generalization gap (≈50% AUC drop on real-world content)8; human detection: ~60% accuracy5 |
| Near-term trend | Reducing data requirements; broader language support | Improving realism; reducing computational requirements | Ongoing arms race; commercial tools outperform open-source but below forensic analyst level8 |
| Longer-term uncertainty | Unclear; depends on both generation and detection research trajectories | Unclear | Active research area; no consensus on whether detection can keep pace |

Why the original 2027 projections were removed: The prior version of this page included a table projecting "perfect mimicry," "indistinguishable" deepfakes, and "humanity-scale" attacks by 2027, presented in a factual table format without attribution to any forecaster or scenario analysis. These entries have been removed because they were speculative assertions without traceable sources, presented in a format that implied factual certainty.

Emerging Threat Vectors

Multi-modal attacks combine voice, video, and text for coordinated deception campaigns, as seen in the Arup case. Cross-platform persistence maintains fraudulent relationships across multiple communication channels. AI-generated personas can create synthetic identities with fabricated social media histories.

Fraud-as-a-service: Voice cloning tools, LLM-based phishing kits, and deepfake services are available in criminal markets, reducing the technical skill required for sophisticated attacks. The Alan Turing Institute's CETaS documents how AI-assisted coding tools have reduced the skill required to launch fake investment platforms, enabling mass-production of fraudulent sites.4

Key Uncertainties and Expert Disagreements

Technical Cruxes

Detection Feasibility: A 2025 benchmark study found that state-of-the-art open-source detection models experience roughly 50% AUC degradation when evaluated on real-world 2024 deepfakes versus controlled academic datasets.8 A 2024 Springer review concludes current approaches are "insufficient for real-world scenarios."16 The MIT Media Lab's research on human detection of political speech deepfakes found that state-of-the-art TTS audio is harder to detect than voice actor audio.9 Whether detection technology can keep pace with generation improvements is an open empirical question; the research literature does not support confident optimism in either direction.

Authentication Crisis: Traditional identity verification (voice, appearance) becomes less reliable as synthesis quality improves. FIDO2 and PKI-based cryptographic authentication offer stronger theoretical guarantees, but deployment at scale faces usability, infrastructure, and implementation security challenges.18 19

Economic Impact Debates

Measurement challenges: Romance scam, investment fraud, and AI-assisted fraud statistics face systematic under-reporting (stigma, unrecognized fraud), possible double-counting across categories, and variable definitions of "AI-enabled." The FBI IC3 figures are the most authoritative available for the US, but they explicitly note that reported complaints represent a fraction of actual incidents.

Market adaptation: The pace at which organizations adopt stronger verification protocols is uncertain, and depends on cost, regulatory pressure, and incident experience. Human factors — including organizational culture around deference to authority — may slow adoption regardless of available technology.

Insurance coverage: Cyber insurance policies are increasingly scrutinizing AI-enabled fraud coverage. Debate continues over liability allocation between victims, platforms, and AI providers. No authoritative industry-wide data on AI fraud exclusions was identified for this page.

Policy Disagreements

Regulation vs. innovation tradeoffs: Proposals for mandatory deepfake watermarking, transparency disclosure requirements, and restrictions on voice cloning services involve tradeoffs with legitimate uses of the same underlying technologies. Critics of heavy regulation argue it may hamper legitimate AI research and development without meaningfully constraining criminal actors who operate outside legal jurisdictions.

International coordination: Cross-border fraud requires coordinated response, but jurisdictional challenges persist. INTERPOL's operations represent enforcement-level coordination; policy-level coordination remains fragmented.15

Measurement Limitations and Interpretive Cautions

This section consolidates important caveats that apply across the page:

  1. AI-specific share unknown: Official statistics (FBI IC3, FTC) do not disaggregate AI-assisted from traditional fraud. Claims about the percentage of fraud that is AI-enabled are generally based on industry surveys or extrapolation rather than primary government data.

  2. Under-reporting: Fraud statistics systematically under-count actual incidents due to stigma, unrecognized fraud, and reporting barriers. The IC3 explicitly notes this limitation.

  3. Vendor vs. academic benchmarks: Detection accuracy figures from commercial vendors typically reflect performance on controlled test sets. Academic benchmarks using real-world deepfakes show substantially worse performance.8

  4. Consumer survey methodology: Survey-based statistics (e.g., "1 in 4 adults experienced AI voice scam" from a May 2023 MSI Research consumer survey commissioned by McAfee) reflect self-reported experiences in a general-population survey, not verified incident reports. Survey-based and complaint-based estimates measure different things and should not be treated as equivalent.

  5. Growth rate reliability: Year-over-year fraud growth rates depend on consistent definitions across periods and consistent reporting rates. Changes in public awareness or reporting behavior can affect apparent growth rates independent of actual fraud volume changes.

  6. Projection uncertainty: Industry projections for future fraud losses involve substantial uncertainty and often rely on extrapolation from recent growth rates, which may not continue. No independent evaluation of the accuracy of past fraud projections was found.

Related Risks

This fraud escalation connects to broader patterns of AI-enabled deception and social manipulation:

  • Authentication Collapse — Breakdown of identity verification as synthesis quality increases
  • AI Trust Cascade Failure — Erosion of social trust due to synthetic media
  • Autonomous Weapons — Similar dual-use technology concerns in a different domain
  • Deepfakes — Overlapping synthetic media threats

The acceleration in fraud capabilities raises questions relevant to broader AI Misuse Risk Cruxes and the need for robust AI Governance and Policy responses.

Footnotes

  1. FBI Internet Crime Complaint Center, 2024 Annual Report, April 24, 2025.

  2. National Counterterrorism Innovation, Technology, and Education Center (NCITE), University of Nebraska Omaha, "Deepfakes and Fraud: Real-World Examples of AI Misuse", 2024.

  3. World Economic Forum, "Cybercrime: Lessons learned from a $25m deepfake attack", February 2025. Quotes Arup CIO Rob Greig.

  4. Alan Turing Institute Centre for Emerging Technology and Security (CETaS), "Automating Deception: AI's Evolving Role in Romance Fraud", 2024.

  5. "People are poorly equipped to detect AI-powered voice clones", Nature Scientific Reports, 2025. Participants identified AI-generated voices as AI-generated approximately 60% of the time and perceived AI voices as matching their real counterparts approximately 80% of the time.

  6. Diel et al., "Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers", Computers in Human Behavior Reports, November 2024. Total deepfake detection accuracy: 55.54% (95% CI [48.87, 62.10]); not significantly above chance.

  7. Wang et al., "ASVspoof 5: Design, collection and validation of resources for spoofing, deepfake, and adversarial attack detection", Computer Speech & Language, 2025. WavLM+AASIST showed EER of 13.08% on out-of-domain data versus 0.83% on the ASVspoof 2019 benchmark.

  8. "Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024", 2025. Average AUC drop of 50% for video, 48% for audio, 45% for image models on real-world 2024 deepfakes versus academic benchmarks. Also documents fourfold increase in deepfakes detected in fraud between 2023 and 2024.

  9. Matthew Groh, Aruna Sankaranarayanan, Nikhil Singh, Dong Young Kim, Andrew Lippman, Rosalind Picard (MIT Media Lab), "Human detection of political speech deepfakes across transcripts, audio, and video", Nature Communications, December 2024. Five pre-registered randomized experiments, N=2,215 participants.

  10. Bitdefender, "New Research Shows How AI Is Powering Romance-Baiting Scams", 2024. Notes explicitly that no published peer-reviewed study directly quantifies the share of romance scams that are AI-enabled versus human-operated.

  11. INTERPOL Innovation Centre, "Beyond Illusions: Unmasking the Threat of Synthetic Media for Law Enforcement", 2024.

  12. NCITE, University of Nebraska Omaha, "Deepfakes and Fraud: Real-World Examples of AI Misuse", 2024. Documents both the Arup and WPP cases.

  13. Professional Risk Managers' International Association (PRMIA), "The Arup Deepfake Fraud", 2024.

  14. Malwarebytes, "Romance scams costlier than ever", September 2024. Citing FTC and FBI 2023 data.

  15. INTERPOL, "INTERPOL financial crime operation makes record 5,500 arrests, seizures worth over USD 400 million", 2024. Operation HAECHI V, July–November 2024.

  16. "Deepfake video detection: challenges and opportunities", Springer Nature Artificial Intelligence Review, May 2024.

  17. "Unmasking Digital Deceptions: An Integrative Review of Deepfake Detection, Multimedia Forensics, and Cybersecurity Challenges", PubMed Central, 2025. Notes domain overfitting: CNN trained on DFDC achieves >90% accuracy on its test set but drops to ~60% on WildDeepfake dataset.

  18. "Evaluating the Security Posture of Real-World FIDO2 Deployments", ACM SIGSAC CCS 2023. Finds compromised clients present practical threats due to weak configurations. 2 3

  19. "Breaching Security Keys without Root: FIDO2 Deception Attacks via Overlays", ACM SIGSAC CCS 2024. User study found approximately 95.55% of cross-service attacks were approved when a screen overlay was presented. 2 3

  20. EU AI Act Article 50, 2024. Transparency obligations for providers and deployers of certain AI systems.

  21. Fragale and Grilli, "Deepfake, Deep Trouble: The European AI Act and the Fight Against AI-Generated Misinformation", Columbia Journal of European Law, 2024. 2

  22. Orrick, "The EU AI Act: Oversight and Enforcement", September 2024.

  23. TechPolicy.Press, "What the EU's New AI Code of Practice Means for Labeling Deepfakes", January 2026. 2

References

  6. "Mandatory deepfake watermarking", White House (Government).

  7. KnowBe4, knowbe4.com.

  9. Attestiv, attestiv.com.

  12. AB 2273, leginfo.legislature.ca.gov (Government).

  14. "$16.6 billion in 2024", ic3.gov (Government).

  22. Deepfake-Eval-2024 benchmark, Nuria Alina Chandra et al., arXiv, 2025 (Paper).

Related Pages

Approaches

  • AI Content Authentication
  • Deepfake Detection
  • AI-Era Epistemic Security

Analysis

  • Fraud Sophistication Curve Model
  • Authentication Collapse Timeline Model
  • Trust Erosion Dynamics Model
  • Deepfakes Authentication Crisis Model

Risks

  • AI Disinformation
  • Authentication Collapse
  • Autonomous Weapons
  • AI Trust Cascade Failure
  • AI-Driven Legal Evidence Crisis

Policy

  • NIST AI Risk Management Framework (AI RMF)
  • EU AI Act
  • China AI Regulatory Framework

Key Debates

  • AI Misuse Risk Cruxes
  • AI Governance and Policy

Concepts

  • Large Language Models
  • Misuse Overview
  • Persuasion and Social Manipulation