
Reducing Hallucinations in AI-Generated Wiki Content - Footnote 3

Verdict: partial (85%)
1 check · 4/3/2026

The claim states "GPT-4 shows a hallucination rate of approximately 3% according to recent benchmarks", but the source only mentions that "OpenAI notes that GPT-4 is 40 percent more likely to produce factual responses than its predecessor." The claim also states "general chatbots exhibit rates between 3-27% when summarizing documents", which the source supports: "One estimate from Vectara, an AI startup, suggests chatbots hallucinate anywhere between 3 percent and 27 percent of the time, according to Vectara's public hallucination leaderboard on GitHub, which tracks the frequency of hallucinations among popular chatbots when summarizing documents."

Our claim


No record data available.

Source evidence

1 src · 1 check
partial (85%) · Haiku 4.5 · 4/3/2026


Case № page:reducing-hallucinations:fn3 · Filed 4/3/2026 · Confidence 85%