Longterm Wiki

Generative Artificial Intelligence Threats to Information Integrity and Potential Policy Responses


Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: RAND Corporation

This RAND policy brief examines how generative AI threatens information integrity and democracy, proposing whole-of-government responses to mitigate disinformation risks—directly relevant to AI safety governance and societal harms from deployed AI systems.

Metadata

Importance: 58/100 · Tags: policy brief, analysis

Summary

This RAND paper analyzes how generative AI technologies—including large language models and AI-generated media—accelerate existing information integrity threats to democratic systems. It surveys the ecosystem of harms and proposes a range of policy responses, from social media reforms to federal standards for AI-generated content. The authors offer whole-of-government and societal solutions to mitigate these risks at scale.

Key Points

  • Generative AI accelerates existing harms in the information environment, including disinformation and manipulation of public discourse.
  • AI-generated images, audio, and text from LLMs pose compounding threats to democratic institutions and information integrity.
  • Policy responses range from social media platform reforms to federal agency standards for AI-generated content labeling.
  • A whole-of-government approach is recommended, combining regulatory, technical, and societal interventions.
  • The paper provides detailed, actionable policy options rather than only high-level recommendations.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 6 KB
Generative Artificial Intelligence Threats to Information Integrity and Potential Policy Responses | RAND
The Wayback Machine - http://web.archive.org/web/20251115121943/https://www.rand.org/pubs/perspectives/PEA3089-1.html


Generative Artificial Intelligence Threats to Information Integrity and Potential Policy Responses

Todd C. Helmus, Bilva Chandra

Expert Insights · Published Apr 16, 2024

This paper highlights the ecosystem of generative artificial intelligence (AI) threats to information integrity and democracy and the potential policy responses to mitigate the nexus of those evolving threats. The authors focus on the information environment and how generative AI—such as large language models or AI-generated images and audio—is able to accelerate existing harms on the internet and beyond. The policy options that could address these complex problems are vast, varying from much-needed social media reforms to using federal agencies to create sweeping standards for AI-generated content. The authors provide an overview of the risks that generative AI presents to democra

... (truncated, 6 KB total)
Resource ID: 7aa708673ec02ac1 | Stable ID: sid_fdfyONM8mC