
Electoral Impact Assessment Model


AI dramatically lowers the cost of creating and distributing disinformation at scale. But does this translate to meaningful impact on election outcomes? This model provides a framework for estimating the marginal effect of AI-generated disinformation on electoral results and democratic processes.

Core Question: By how much can AI disinformation shift election results, and under what conditions?

Understanding AI disinformation’s electoral impact matters because democratic legitimacy depends on elections reflecting genuine voter preferences. If AI disinformation can reliably shift 2-5% of votes in close elections (our central estimate), this represents a fundamental threat to democratic governance.

| Dimension | Assessment | Quantitative Estimate |
|---|---|---|
| Direct electoral impact | Moderate - individual elections rarely flipped, but close races vulnerable | 0.2-5% chance of flipping any given election |
| Cumulative electoral impact | High - across 50+ major elections annually, 1-3 likely flipped | 1-3 elections changed per year globally |
| Democratic trust erosion | Very High - systemic effect may exceed direct vote impacts | Trust declining 2-5% annually, accelerating |
| Close election vulnerability | Critical - races within 3% margin highly susceptible | 20-30% of elections are close enough to flip |
| Expected vote shift from AI | Moderate - 1-3% of electorate potentially shifted | 1.5-4.5 million votes in a US presidential election |

| Factor | Assessment | Confidence |
|---|---|---|
| Direct harm severity | High (threatens democracy) | Medium |
| Tractability of defense | Medium (multiple interventions possible) | Low |
| Neglectedness | Low-Medium (receiving attention, but not calibrated to threat) | Medium |
| Time sensitivity | High (affects 2024-2026 elections) | High |

| Intervention | Investment Needed | Expected Impact | Priority |
|---|---|---|---|
| Platform detection and removal | $100-300 million annually | Reduces AI disinformation reach by 20-40%; declining effectiveness | High (near-term) |
| Provenance mandates for political ads | $20-50 million for implementation | Authenticates 60-80% of legitimate political content | High |
| Election security infrastructure | $200-500 million over 4 years | Rapid response capability; fact-checking coordination | High |
| Voter media literacy campaigns | $50-150 million per election cycle | Increases skepticism by 10-20%; limited reach to vulnerable populations | Medium |
| International coordination on attribution | $30-80 million annually | Enables consequences for state-sponsored interference | Medium |
| Emergency content restrictions (if crisis) | Political cost, not financial | Could prevent immediate crisis but raises free speech concerns | Conditional |

| Crux | If True | If False | Current Assessment |
|---|---|---|---|
| AI disinformation can reliably shift greater than 2% of votes | Fundamental threat to close elections; justifies major intervention | Threat overstated; focus resources elsewhere | 60-70% probability; evidence from micro-targeting suggests plausible |
| Detection can keep pace with generation quality | Platform moderation remains effective defense | Detection fails; alternative defenses needed | 20-30% probability; declining trend suggests failure likely |
| Voters develop resistance to AI manipulation | Natural adaptation reduces threat over time | Vulnerability persists or increases | 40-50% probability; some evidence of growing skepticism |
| Cheap fakes remain more effective than sophisticated AI | AI adds marginal threat; traditional methods dominate | AI becomes primary disinformation vector | 55-65% probability near-term; declining as AI quality improves |
| Systemic trust erosion matters more than individual elections | Prioritize long-term democratic health over election-specific defense | Focus on preventing specific election manipulation | 70-80% probability; trust trends more concerning than documented flips |

Key insight: The marginal impact of AI disinformation is probably smaller than media coverage suggests for individual elections, but systemic effects on democratic trust may matter more than vote margin shifts.

The following table summarizes key model parameters derived from empirical research and expert elicitation.

| Parameter | Best Estimate | Range | Confidence | Source |
|---|---|---|---|---|
| AI content generation cost reduction | 100-1000x | 50-5000x | High | Industry benchmarks |
| Personalized AI persuasion uplift | 1.3-2x | 1.1-3x | Medium | Scientific Reports 2024 |
| AI vs human propaganda persuasiveness | ~Equal | 0.8-1.2x | Medium | PNAS Nexus 2024 |
| Traditional campaign effect on vote | ≈0% | -0.5 to +0.5% | High | American Political Science Review |
| AI dialogue persuasion effect | Larger than video ads | 1.2-2x video ads | Medium | Nature 2025 |
| Platform detection rate (AI content) | 30-60% | 20-80% | Low | Platform disclosures |
| Cheap fakes vs AI ratio in 2024 | 7:1 | 5:1 to 10:1 | High | Knight Columbia |
| Close election threshold | 3% margin | 1-5% | High | Historical analysis |
| P(election flipped by AI) | 0.2-5% | 0.1-10% | Very Low | Model estimate |

Research from MIT Sloan found that false information is roughly 70% more likely to be reshared than true information on social media, with political falsehoods showing particularly rapid diffusion. This suggests AI-generated disinformation may benefit from inherent platform dynamics that favor novel, emotionally engaging content.

Elections are influenced by countless factors:

  • Economic conditions
  • Candidate quality
  • Campaign spending
  • Media coverage
  • Debates and events
  • Ground operations
  • Traditional advertising
  • Disinformation (pre-AI)
  • AI-generated disinformation (new)

Challenge: Isolating the marginal contribution of AI-enhanced disinformation from everything else.

We can decompose the causal pathway from AI capability to electoral impact:

[Diagram: causal pathway from AI capability through content generation, exposure, and belief change to vote choice change and electoral outcomes]

Each step has a probability/magnitude. The overall impact is the product of all steps.
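
To make the multiplicative structure concrete, here is a minimal sketch in Python. Treating each step as an independent (low, high) interval is an assumption for illustration; the page gives ranges per step but does not specify how to combine them beyond simple multiplication.

```python
def mul_ranges(*ranges):
    """Multiply (low, high) intervals endpoint-wise; values assumed positive."""
    lo, hi = 1.0, 1.0
    for a, b in ranges:
        lo *= a
        hi *= b
    return lo, hi

# Step ranges from the sections below (AI vs. traditional disinformation):
exposure = (1.5, 4)  # Step 2: exposure multiplier
belief   = (2, 6)    # Step 3: belief-change multiplier

print(mul_ranges(exposure, belief))  # (3.0, 24.0)
```

Note how much the compounding shrinks relative to raw content volume: exposure (1.5-4x) times belief change (2-6x) yields only a 3-24x per-message persuasion advantage, orders of magnitude below the 150-3000x increase in content output.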

Step 1: AI → Disinformation Volume/Quality


Pre-AI Disinformation Constraints:

  • Human effort required for each piece of content
  • Limited personalization
  • Detectable patterns (template-based)
  • Cost: $1-10 per piece for quality content

AI Enhancement:

  • Automated generation at massive scale
  • Personalized to individual targets
  • High quality, indistinguishable from organic content
  • Cost: $0.001-0.01 per piece

Multiplier Effect:

  • Volume increase: 100-1000x
  • Quality increase: 1.5-3x (more convincing)
  • Personalization increase: 10-100x (targeted messaging)

Overall AI Impact on Content Creation: ~150-3000x increase in effective disinformation output

Confidence: High. Well-documented in 2024 elections.
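
The headline figure is simply the product of the volume and quality endpoints. A two-line check (treating personalization as better targeting rather than an additional output multiplier, which is my reading of the bullets above):

```python
volume  = (100, 1000)  # automated generation at scale
quality = (1.5, 3)     # more convincing content

low, high = volume[0] * quality[0], volume[1] * quality[1]
print(f"effective output multiplier: {low:.0f}-{high:.0f}x")  # 150-3000x
```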

Step 2: Disinformation Volume → Audience Exposure

Not all content reaches audiences. Social media algorithms, platform moderation, and user behavior filter content.

Platform Moderation:

  • Platforms remove ~20-40% of detected disinformation
  • AI-generated content currently detected at ~30-60% rate (falling)
  • Net effect: 50-80% of AI disinformation reaches audiences (vs ~60-90% of human disinformation)

Algorithmic Amplification:

  • Engaging content (often outrage-inducing disinformation) promoted
  • AI-generated content can optimize for engagement
  • Multiplier: 1.2-2x amplification vs. baseline

Audience Reach:

  • Traditional disinformation: reaches 5-15% of target audience
  • AI-personalized disinformation: reaches 10-30% of target audience (better targeting)

Overall Exposure Multiplier (AI vs traditional): 1.5-4x

Confidence: Medium. Platform algorithms are opaque; estimates based on disclosed data.
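
A rough midpoint reconstruction of the 1.5-4x figure (decomposing it into these three ratios is my reading of the bullets above, not a formula the page states):

```python
# Each ratio compares AI-generated to traditional disinformation, at midpoints.
survival_ratio = 0.65 / 0.75  # ~0.87: share reaching audiences (50-80% vs 60-90%)
amplification  = 1.6          # midpoint of the 1.2-2x algorithmic boost
reach_ratio    = 0.20 / 0.10  # 2.0: audience reach (10-30% vs 5-15%)

print(round(survival_ratio * amplification * reach_ratio, 2))  # ~2.77
```

The midpoint product lands near the center of the stated 1.5-4x range.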

Step 3: Exposure → Belief Change

How many people who see disinformation actually believe it?

Baseline Belief Rates (Pre-AI):

  • Aligned with existing beliefs: 30-50% believe
  • Counter to existing beliefs: 5-15% believe
  • No prior opinion: 20-40% believe

AI Enhancement Factors:

Personalization: AI can tailor messaging to individual psychology

  • Estimated increase in persuasiveness: 1.3-2x

Multimodal Content: Deepfakes, voice clones more convincing than text

  • Estimated increase for video/audio: 1.5-2.5x vs text

Repetition at Scale: Multiple exposures via different “sources” (all AI)

  • Estimated increase per additional exposure: 1.2x (up to 3-4 exposures)

Overall Belief Change Multiplier (AI vs traditional): 2-6x depending on content type and targeting. (The factors above overlap, e.g. personalized content is often also multimodal, so the combined range sits below their naive product.)

Confidence: Low-Medium. Limited experimental data. Based on persuasion research and preliminary studies.

Step 4: Belief Change → Vote Choice Change


Not all belief changes translate to vote switching.

Baseline Vote Impact (pre-AI disinformation):

  • Partisans rarely switch: 1-3% affected
  • Swing voters more susceptible: 10-20% affected
  • Low-information voters most susceptible: 15-30% affected

Election Type Matters:

  • Presidential elections: Voters have strong priors, hard to shift
  • Local elections: Lower information, easier to influence
  • Ballot initiatives: Voters often uncertain, highly influenceable

AI Disinformation Vote Impact: Assuming AI increases belief change by 2-6x (Step 3):

  • Partisans: 2-8% affected (low end—beliefs don’t translate to switching)
  • Swing voters: 15-35% affected
  • Low-info voters: 25-50% affected

Weighted Average (typical electorate):

  • ~15% swing voters
  • ~30% low-info voters
  • ~55% strong partisans

Overall Vote Impact: 5-15% of exposed population might shift vote due to AI disinformation

Confidence: Low. Vote switching is multi-causal; attribution difficult.
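
A sketch of the weighted average implied by the segment shares above. Note that a straight weighted average of the AI-enhanced ranges gives roughly 11-25%, above the page's 5-15% headline; the gap presumably reflects a further discount for belief changes that never become vote changes.

```python
# (population share, low affected, high affected) per voter segment,
# taken from the bullets above
segments = {
    "strong_partisan": (0.55, 0.02, 0.08),
    "swing":           (0.15, 0.15, 0.35),
    "low_information": (0.30, 0.25, 0.50),
}

low  = sum(share * lo for share, lo, hi in segments.values())
high = sum(share * hi for share, lo, hi in segments.values())
print(f"naive weighted average: {low:.0%}-{high:.0%}")  # 11%-25%
```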

Step 5: Vote Choice Change → Electoral Outcomes

Finally, how many votes need to shift to change election results?

Close Elections:

  • 2020 U.S. Presidential: Decided by ~44,000 votes across 3 states (~0.03% of total votes)
  • Many congressional races decided by 1-3%
  • Close elections highly vulnerable to small shifts

Landslide Elections:

  • 10+ point margins require massive shifts to overturn
  • AI disinformation unlikely to swing

Quantitative Model:

Assume:

  • Close election (within 3%)
  • AI disinformation reaches 30% of electorate
  • Of those, 10% shift votes
  • Overall vote shift: 3%

Result: Enough to flip a close election.

2024 Elections: The “AI Election” That Wasn’t?


Despite being called the “AI election year,” post-election analysis found limited evidence of decisive AI disinformation impact.

Why the limited impact?

Possible Explanations:

  1. Detection Worked: Platform moderation caught enough AI content to limit spread

    • Evidence: Multiple platforms reported removing AI-generated campaigns
    • Counter-evidence: Much went undetected
  2. Audience Skepticism: Voters increasingly aware of AI manipulation, more skeptical

    • Evidence: Increased media literacy campaigns
    • Counter-evidence: Most voters unaware of specific AI threats
  3. Cheap Fakes More Effective: Simple edited videos outperformed sophisticated AI (7:1 ratio per News Literacy Project)

    • Evidence: Well-documented
    • Implication: Quality may matter less than simplicity
  4. Existing Polarization Dominates: Voters already so polarized that marginal disinformation doesn’t matter

    • Evidence: Historically high partisan loyalty
    • Implication: AI disinformation adds noise, not signal
  5. Measurement Problem: Impact exists but is undetectable amid other factors

    • Evidence: Close races in swing states consistent with small AI impact
    • Problem: Can’t prove counterfactual

Most Likely: Combination of #3, #4, and #5. AI disinformation had some impact but was not decisive in 2024.

Slovakia 2023: Deepfake Audio Days Before the Vote

Event: Audio deepfake of the liberal party leader discussing vote rigging surfaced days before the election
Result: The liberal party suffered an upset loss
Attribution: Unclear whether the deepfake was decisive

Analysis:

  • Timing (just before election) maximized impact, minimized correction time
  • Topic (vote rigging) highly salient and credible to some voters
  • Close race amplified marginal effects

Estimated Impact: Possibly 1-3% vote shift, potentially decisive in close race

Lessons:

  • Timing matters enormously
  • Topic credibility affects impact
  • Close races vulnerable to small effects

Taiwan 2024: Documented AI Influence Campaign


Event: Microsoft documented China-based AI-generated deepfakes targeting the Taiwan election
Result: Unclear impact on the outcome
Characteristics: First confirmed state-actor use of AI in a foreign election

Analysis:

  • Detected and publicized before election (reduced impact)
  • Taiwan electorate somewhat prepared for Chinese interference
  • Content quality varied (some obvious, some convincing)

Estimated Impact: <1% vote shift, not decisive

Lessons:

  • Attribution and publicity can reduce impact
  • Prepared electorates more resilient

The following table synthesizes experimental research on AI persuasion effects relevant to electoral contexts.

| Study | Method | Key Finding | Effect Size | Relevance |
|---|---|---|---|---|
| PNAS Nexus 2024 | Survey experiment comparing GPT-3 vs human propaganda | AI content equally persuasive as human-written | d ≈ 0 (no difference) | Establishes AI can match human quality |
| Scientific Reports 2024 | 7 sub-studies on personalized AI messages (N=1,788) | Personalized AI messages more influential | 1.3-2x uplift | Shows personalization advantage |
| Nature 2025 | Pre-registered experiments in US, Canada, Poland | AI dialogues change candidate preference | Larger than video ads | Most direct electoral evidence |
| APSR 2018 | Meta-analysis of 49 field experiments | Campaign contact has ~zero effect | d ≈ 0 | Baseline for traditional persuasion |
| Stanford 2020 | Facebook/Instagram deactivation (N=35,000) | Platform removal had little effect on views | Minimal | Suggests limited platform-specific impact |

These findings suggest a paradox: while AI can produce highly persuasive content in experimental settings, real-world electoral effects remain difficult to detect. Possible explanations include: (1) experimental conditions differ from actual campaign contexts; (2) effects are real but small and distributed across many elections; (3) countervailing forces (skepticism, platform moderation) offset AI advantages in practice.

P(AI flips election) = P(close race) × P(AI campaign) × P(reaches voters) × P(shifts votes) × P(shift is decisive)
Where:
P(close race) = 0.15-0.30 (varies by election type)
P(AI campaign) = 0.50-0.90 (becoming common)
P(reaches voters) = 0.20-0.50 (platform moderation, virality)
P(shifts votes) = 0.05-0.15 (small persuasion effect)
P(shift is decisive) = 0.10-0.30 (in close race context)

Result: P(AI flips election) = 0.0015 to 0.054 (0.15% to 5.4%)

Interpretation: In any given election, AI disinformation has a ~0.2-5% chance of being decisive.

Over many elections (50+ major races in a year), AI disinformation likely flips 1-3 elections annually (current state).

Confidence: Very low. Enormous uncertainty in each parameter.
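
For readers who want to stress-test the estimate, a minimal Monte Carlo sketch of the same chain. Sampling each factor uniformly and independently is an assumption the page does not make, so the resulting distribution will not exactly reproduce the published range.

```python
import random

# (low, high) ranges from the formula above
params = {
    "close_race":  (0.15, 0.30),
    "ai_campaign": (0.50, 0.90),
    "reaches":     (0.20, 0.50),
    "shifts":      (0.05, 0.15),
    "decisive":    (0.10, 0.30),
}

def sample_p_flip() -> float:
    """One draw of P(AI flips election): product of independently sampled factors."""
    p = 1.0
    for lo, hi in params.values():
        p *= random.uniform(lo, hi)
    return p

draws = sorted(sample_p_flip() for _ in range(100_000))
print("median:", draws[len(draws) // 2])
print("90th percentile:", draws[int(0.9 * len(draws))])
```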

Baseline Assumptions:

  • 100 million voters
  • 50-50 race
  • 30% exposed to AI disinformation
  • 5% of exposed shift votes
  • 1.5 million vote shift (1.5% of total)
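
The arithmetic behind the final bullet, spelled out as a quick check:

```python
voters        = 100_000_000
exposed_share = 0.30  # reached by AI disinformation
shift_share   = 0.05  # of those exposed, switch their vote

shifted = voters * exposed_share * shift_share
print(f"{shifted:,.0f} votes ({shifted / voters:.1%} of the electorate)")
# 1,500,000 votes (1.5% of the electorate)
```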

In close elections (decided by <1%): AI disinformation likely decisive

In moderate elections (3-5% margin): AI disinformation possibly influential but not clearly decisive

In landslide elections (>7% margin): AI disinformation unlikely decisive

Implication: ~20-30% of elections are close enough that AI disinformation could plausibly be decisive.

The following scenarios represent distinct trajectories for AI disinformation’s electoral impact over the 2025-2030 period.

| Scenario | Probability | Impact Level | Key Drivers | Policy Response |
|---|---|---|---|---|
| Detection Keeps Pace | 15-20% | Low (0.5-2% of elections affected) | Platform investment in AI detection; regulatory pressure; content provenance adoption | Maintain current approach; enhance monitoring |
| Stalemate | 30-40% | Moderate (2-5% of elections affected) | Arms race between generation and detection; mixed regulatory success; public adaptation | Strengthen platform accountability; expand media literacy |
| Sophistication Wins | 25-35% | High (5-15% of elections affected) | Detection fails; personalization improves; state actors scale operations | Emergency measures; mandatory provenance; election reforms |
| Saturation Effect | 15-25% | Moderate, then declining (3-5%, then decreasing) | Information overload; voter skepticism universalizes; all content treated as suspect | Focus on trust restoration; institutional resilience |
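
One way to read the table is as a probability-weighted forecast. A sketch using range midpoints (the midpoint weighting is mine, not the page's):

```python
# name: (probability midpoint, impact midpoint as % of elections affected)
scenarios = {
    "detection_keeps_pace": (0.175, 1.25),
    "stalemate":            (0.35,  3.5),
    "sophistication_wins":  (0.30, 10.0),
    "saturation_effect":    (0.20,  4.0),
}

total_p  = sum(p for p, _ in scenarios.values())  # ~1.03 (midpoints overlap slightly)
expected = sum(p * impact for p, impact in scenarios.values()) / total_p
print(f"expected share of elections affected: ~{expected:.1f}%")  # ~5.1%
```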

The most concerning finding from recent research is the Romania 2024 case, where election results were annulled after evidence of AI-powered interference using manipulated videos. This represents the first documented case of AI disinformation being consequential enough to trigger institutional response.

Escalation factors that could increase impact:
  1. Targeting Sophistication: Better micro-targeting increases efficiency
  2. Multimodal Content: Video/audio more persuasive than text
  3. Coordination: Multiple AI campaigns from different sources reinforce messaging
  4. Erosion of Trust: As authentic media becomes suspect, all information becomes equally (un)reliable
  5. Authoritarian Backing: State-sponsored campaigns have more resources and persistence
Mitigating factors that could reduce impact:

  1. Platform Countermeasures: Detection, labeling, removal
  2. Media Literacy: Educated populations more skeptical
  3. Provenance Systems: C2PA and similar make authentic content verifiable
  4. Partisan Polarization: Voters so entrenched that persuasion is difficult
  5. Saturation: So much disinformation that all becomes noise

Current State

Characteristics:

  • AI disinformation common but detectable
  • Platforms implementing countermeasures
  • Electorate beginning to adapt
  • Estimated impact: 1-3% of close elections flipped

Near-Term Trajectory

Characteristics:

  • AI-generated content becomes harder to detect
  • Personalization improves (better targeting)
  • More actors deploy AI campaigns
  • Public awareness increases but so does volume
  • Estimated impact: 3-8% of close elections flipped

Medium-Term Trajectory

Two Possible Paths:

Path A: Saturation (40% probability)

  • So much disinformation that voters tune out
  • All information treated as equally suspect
  • Impact paradoxically decreases as volume increases
  • Estimated impact: 2-5% of elections (impact declines)

Path B: Sophistication Wins (60% probability)

  • Personalized, multimodal AI content highly effective
  • Detection fails to keep pace
  • Provenance systems not widely adopted
  • Estimated impact: 10-20% of close elections flipped

Beyond individual elections, AI disinformation affects democratic health:

Trust Erosion:

  • Even if specific election impacts are small, aggregate trust in media declines
  • “Liar’s dividend” makes all evidence deniable
  • Democratic deliberation requires shared reality—this breaks down

Measured Impact:

  • Trust in media: Declining 3-5% annually (accelerating)
  • Belief in election integrity: Declining 2-4% annually
  • Political polarization: Increasing (AI contribution unclear but likely 10-30%)

These systemic effects may matter more than vote margins in individual elections.
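
To put the annual percentages on a longer horizon, a quick compounding sketch (assuming the decline compounds at a constant rate, which the page does not claim):

```python
import math

def half_life(annual_decline: float) -> float:
    """Years for trust to halve at a constant proportional annual decline."""
    return math.log(0.5) / math.log(1 - annual_decline)

print(f"media trust halves in ~{half_life(0.05):.0f}-{half_life(0.03):.0f} years")
# media trust halves in ~14-23 years
```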

If Impact is Currently Low (<2% of elections)


Interpretation: Current countermeasures working; worry may be overblown

Recommended Actions:

  • Maintain current platform policies
  • Monitor for increasing impact
  • Continue media literacy efforts
  • Avoid over-regulation that might harm free speech

If Impact is Moderate

Interpretation: Significant threat but manageable with effort

Recommended Actions:

  • Strengthen platform detection and removal
  • Mandate provenance systems (C2PA)
  • Increase funding for election security
  • International cooperation on attribution and consequences

If Impact is High

Interpretation: Crisis-level threat to democratic integrity

Recommended Actions:

  • Emergency measures: possible temporary restrictions on AI-generated political content
  • Mandatory authentication for all political advertising
  • Dramatic increase in election security budgets
  • Consider election reforms (longer voting periods to allow fact-checking)

This model faces fundamental measurement challenges that limit confidence in its estimates.

Counterfactual Problem. The core limitation is that we cannot observe what would have happened without AI disinformation in any given election. Romania 2024 provides suggestive evidence, but even there, the annulment was based on evidence of interference, not proof of decisive impact. Every estimate in this model involves a counterfactual comparison that cannot be directly observed.

Multi-Causality and Attribution. Elections are influenced by dozens of factors: economic conditions, candidate quality, campaign spending, media coverage, debates, and ground operations. Isolating the marginal contribution of AI disinformation from this complex system is methodologically challenging. The meta-analysis of 49 field experiments finding zero average effect from campaign contact illustrates how difficult persuasion measurement is even for well-controlled interventions.

Detection Bias. We can only measure detected AI campaigns. The most sophisticated operations may go entirely unnoticed, meaning our estimates potentially undercount the most impactful instances. Conversely, the Knight Columbia analysis of 78 election deepfakes found that 39 had no deceptive intent, suggesting overcount in some datasets.

Heterogeneity. Impact varies dramatically by context: election type (presidential vs. local), electorate characteristics (polarization level, media literacy), and institutional environment (platform policies, legal frameworks). Parameter estimates that work for U.S. presidential elections may be inappropriate for local ballot initiatives or elections in developing democracies.

Rapid Technological Change. Both AI generation capabilities and detection methods are improving rapidly. Model parameters derived from 2024 data may be obsolete by 2026. The finding that “cheap fakes” outperformed AI 7:1 in 2024 may not hold as AI quality improves and costs fall further.

Did AI “Break” 2024 Elections? Research suggests no, but measurement problems make this uncertain. Absence of evidence is not evidence of absence.

What Matters More: Individual Elections or Systemic Trust? Even if AI doesn’t flip many elections, erosion of epistemic commons might be the bigger harm.

Can Democracy Survive in an Era of Undetectable Disinformation? Pessimists say no; optimists argue humans have adapted to information threats before.

  • Disinformation Detection Arms Race - Can we detect it at all?
  • Deepfakes Authentication Crisis - Visual media authenticity