Longterm Wiki

The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election (HKS Misinformation Review)


Empirical study from Harvard's Misinformation Review examining how public perceptions of AI election risks are shaped more by media consumption than by direct AI experience, relevant to AI governance and public communication strategies.

Metadata

Importance: 42/100 · organizational report · analysis

Summary

A survey of 1,000 U.S. adults found that 83.4% expressed concern about AI being used to spread misinformation in the 2024 presidential election. Crucially, direct experience with AI tools like ChatGPT was not correlated with these concerns, while television news consumption was a stronger predictor, suggesting media framing rather than AI literacy drives public fear.

Key Points

  • 83.4% of surveyed Americans expressed concern about AI-driven misinformation in the 2024 election, consistent across demographic groups.
  • Direct experience with generative AI tools (ChatGPT, DALL-E) showed little association with reduced concern, regardless of education or STEM background.
  • Television news consumption was more strongly linked to heightened concerns, especially among older adults.
  • Findings suggest public fear may be driven by sensationalized media coverage rather than informed understanding of AI capabilities.
  • Authors recommend AI literacy campaigns focused on building knowledge rather than amplifying fear.

Cited by 1 page

Page: Electoral Impact Assessment Model · Type: Analysis · Quality: 65.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 32 KB
The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election | HKS Misinformation Review 
 Article Metrics: 9 CrossRef citations · 672 PDF downloads · 12,235 page views

 
 
 We surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation. Our findings suggest that direct interactions with AI tools like ChatGPT and DALL-E were not correlated with these concerns, regardless of education or STEM work experience. Instead, news consumption, particularly through television, appeared more closely linked to heightened concerns. These results point to the potential influence of news media and the importance of exploring AI literacy and balanced reporting.

 

 
 By

 Harry Yaojun Yan, Stanford Social Media Lab, Stanford University, USA

 Garrett Morrow, Department of Political Science, Northeastern University, USA

 Kai-Cheng Yang, Network Science Institute, Northeastern University, USA

 John Wihbey, School of Journalism, Northeastern University, USA
 


 
 Research Questions

 
 1. To what extent are Americans concerned about the applications of AI for spreading misinformation during the 2024 U.S. presidential election?

 2. Do knowledge of and direct interactions with generative AI (GAI) tools contribute to the concerns?

 3. Does consuming AI-related news or information contribute to the concerns?

 4. What information sources contribute to people’s concerns?

 

 Research Note Summary

 
 In August 2023, we surveyed U.S. public opinion on new AI technologies and the upcoming U.S. presidential election via Dynata. In asking about AI’s potential role in spreading misinformation, we found that roughly four out of five Americans (83.4%) expressed some level of concern regarding AI being used to spread misinformation in the election at the time. The high prevalence of concern was consistent across various demographic groups.

 News consumption, particularly through television, was linked to the high prevalence of concern, and the correlation was more notable among older adults. In contrast, knowledge of ChatGPT development and direct experience with GAI tools, such as ChatGPT and DALL-E, appeared to have little association with reduced concerns.
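 The kind of association the study describes (media consumption tracking concern while AI-tool use does not) can be illustrated with a simple correlation check. This is a minimal sketch on small synthetic data, not the authors' actual analysis or dataset; the variable names and values are invented for illustration.

 ```python
 # Illustrative sketch (hypothetical data, not the study's): compare how two
 # candidate predictors correlate with a 1-5 Likert "concern" score.

 def pearson_r(xs, ys):
     """Pearson correlation coefficient between two equal-length sequences."""
     n = len(xs)
     mx, my = sum(xs) / n, sum(ys) / n
     cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     sx = sum((x - mx) ** 2 for x in xs) ** 0.5
     sy = sum((y - my) ** 2 for y in ys) ** 0.5
     return cov / (sx * sy)

 # Hypothetical respondents: weekly hours of TV news, and concern (1-5).
 tv_hours = [0, 2, 5, 8, 10, 12, 15, 20]
 concern  = [1, 2, 2, 3, 4, 4, 5, 5]

 # Hypothetical weekly sessions with GAI tools for the same respondents.
 ai_use = [9, 0, 4, 7, 1, 8, 3, 5]

 print(round(pearson_r(tv_hours, concern), 2))  # → 0.96 (strong positive)
 print(round(pearson_r(ai_use, concern), 2))    # → -0.18 (near zero)
 ```

 In this toy setup, TV-news hours correlate strongly with concern while AI-tool use barely does, mirroring the pattern the authors report; the real study would control for demographics rather than rely on a raw bivariate correlation.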

 The high prevalence of concerns about AI being used to spread misinformation in the 2024 U.S. presidential election was likely due to a combination of worries about election integrity in general, fear of the disruptive potential of AI technology, and its sensationalized news coverage. Although it remains uncert

... (truncated, 32 KB total)
Resource ID: 742a2119cf8d25da | Stable ID: sid_v9ryxoVm9T