Longterm Wiki

Generative AI Is Already Helping Fact-Checkers, But Proving Less Useful for Small Languages

web

Relevant to AI deployment equity and governance discussions; illustrates real-world limitations of LLMs in public-interest applications, particularly the risk of capability gaps across languages in high-stakes information verification contexts.

Metadata

Importance: 42/100 | news article | news

Summary

A Reuters Institute report examines how fact-checkers are adopting generative AI tools to improve efficiency, while highlighting a significant disparity: these tools perform substantially worse for low-resource languages, creating equity concerns in global misinformation detection. The piece explores both the practical benefits and structural limitations of AI-assisted fact-checking at scale.

Key Points

  • Generative AI tools are being actively adopted by fact-checkers to speed up research, claim identification, and evidence synthesis.
  • Performance drops significantly for languages with less training data, disadvantaging fact-checkers in non-English-speaking regions.
  • The language gap raises concerns about equitable access to AI-assisted misinformation detection globally.
  • Fact-checkers report AI is most useful for drafting and summarization, less so for nuanced verification judgment.
  • The findings highlight deployment risks when AI tools designed for high-resource languages are applied in diverse linguistic contexts.

Cited by 1 page

Page | Type | Quality
AI-Era Epistemic Infrastructure | Approach | 59.0

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 18 KB
Generative AI is already helping fact-checkers. But it’s proving less useful in small languages and outside the West | Reuters Institute for the Study of Journalism
Experts from Norway, Georgia and Ghana discuss the limitations of this technology. Will AI platforms improve things in the years to come?
A screengrab of Faktisk Verifiserbar’s AI-assisted map, which verifies images and videos showing attacks on hospitals and schools in the Middle East.

Gretel Kahn
29 April 2024
In a year when more than 50 countries are holding elections, bad actors are ramping up their disinformation campaigns with fake images, fake videos and fake audio created with generative artificial intelligence. But just as AI is revolutionising the spread of falsehoods, could it also help debunk them?

To answer that question, I spoke to three fact-checkers from Norway, Georgia and Ghana who are applying artificial intelligence to their work. But our conversations soon turned to the limitations of this new technology in diverse geographical contexts, for example in non-Western countries or in countries whose languages are underrepresented in training data.

 How AI can help 

The advent of generative AI has led many fact-checking organisations to embrace the technology in different ways, drawn by its promise of accuracy and speed.

Faktisk Verifiserbar is a Norwegian cooperative fact-checking organisation that focuses mostly on fact-checking conflict areas using OSINT techniques. It has been experimenting with different aspects of AI to facilitate and automate its work. Henrik Brattli Vold, a senior adviser for the organisation, explained that they have been using an AI photo geolocation platform called GeoSpy, which extracts “unique features from photos and matches those features against geographic regions, countries and cities.”

They have also worked with AI researchers from the University of Bergen to develop tools that make their work more efficient. One is Tank Classifier, which identifies and classifies tanks and artillery vehicles in pictures users submit. Another is a language detection tool that checks which language is being spoken in a given video or audio file.

 Faktisk Verifiserbar has used ChatGPT to help them visualise their OSINT investigations. Here’s how Brattli Vold explains it: “We used our database, which is a huge Google Sheet, and we structured it with ChatGPT in a way that we could write a precise prompt that would give us an embeddable map output every time we ask. We worked quite a bit with structured prompting, with the data interpreter function in ChatGPT just to be able to do that.” The metho

... (truncated, 18 KB total)
Resource ID: a08cfcaf32c36175 | Stable ID: sid_kf9EOumE1k