Longterm Wiki

Journalistic interventions matter: Understanding how Americans perceive fact-checking labels | HKS Misinformation Review

web

Relevant to AI safety governance discussions around automated vs. human-led content moderation; highlights that algorithmic labeling of misinformation may be perceived as less legitimate than professional human fact-checking, informing deployment decisions for AI content moderation tools.

Metadata

Importance: 28/100 · journal article · primary source

Summary

A national survey (N=1,003) of U.S. adults examined how people perceive the efficacy of fact-checking labels created by algorithms, social media users, third-party fact checkers, and news media. Professional fact-checkers' labels were rated most effective, followed by news media, while user and algorithmic labels were rated similarly and lowest. Partisanship significantly moderated perceptions, with Republicans rating all label types as less effective than Democrats.

Key Points

  • Professional fact-checker labels are perceived as most effective; algorithmic and user-generated labels are rated similarly low in perceived efficacy.
  • News media labels rank second in perceived effectiveness but are not statistically distinguishable from fact-checker or algorithmic labels.
  • Republicans consistently rate all fact-checking label types lower than Democrats, highlighting partisan trust gaps.
  • Trust in news media and positive attitudes toward social media platforms correlate with higher perceived effectiveness of all label types.
  • Prior exposure to fact-checking labels moderates the relationship between media trust and label efficacy perceptions.

Cited by 1 page

Page: AI-Era Epistemic Infrastructure · Type: Approach · Quality: 59.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 34 KB
Article Metrics: 4 CrossRef citations · 370 PDF downloads · 5,660 page views

 
 
 While algorithms and crowdsourcing have been increasingly used to debunk or label misinformation on social media, such tasks might be most effective when performed by professional fact checkers or journalists. Drawing on a national survey (N = 1,003), we found that U.S. adults evaluated fact-checking labels created by professional fact checkers as more effective than labels by algorithms and other users. News media labels were perceived as more effective than user labels but not statistically different from labels by fact checkers and algorithms. There was no significant difference between labels created by users and algorithms. These findings have implications for platforms and fact-checking practitioners, underscoring the importance of journalistic professionalism in fact-checking. 

 

 
 By

 
 Chenyan Jia College of Arts, Media and Design, Northeastern University, USA

 Taeyoung Lee Jack J. Valenti School of Communication, University of Houston, USA

 
 


 
 Research Questions

 
 How do people perceive the efficacy of fact-checking labels created by different sources (algorithms, social media users, third-party fact checkers, and news media)?

 Will partisanship, trust in news media, attitudes toward social media, reliance on algorithmic news, and prior exposure to fact-checking labels be associated with people’s perceived efficacy of different fact-checking labels?

 Will people’s prior exposure to fact-checking labels moderate the relationships between people’s trust in news media or attitudes toward social media platforms and label efficacy?

 

 Essay Summary

 
 To examine how people perceive the efficacy of different types of fact-checking labels, we conducted a national survey of U.S. adults (N = 1,003) in March 2022. The sample demographics are comparable to the U.S. internet population in terms of gender, age, race/ethnicity, education, and income.

 We found that third-party fact-checker labels had the highest perceived efficacy, significantly higher than that of both algorithmic labels and labels by other users. News media labels were perceived as second most effective, but the only statistically significant difference was with user labels; their perceived efficacy did not differ significantly from that of fact-checker or algorithmic labels. There was no significant difference between labels created by users and by algorithms.

 We also found that political and media-related variables are associated with the perceptions of fact-chec

... (truncated, 34 KB total)
Resource ID: 45e2f7288f950f01 | Stable ID: sid_iv4ePumak7