Reducing Hallucinations in AI-Generated Wiki Content - Footnote 3
The claim states "GPT-4 shows a hallucination rate of approximately 3% according to recent benchmarks", but the source only mentions that "OpenAI notes that GPT-4 is 40 percent more likely to produce factual responses than its predecessor."

The claim also states "general chatbots exhibit rates between 3-27% when summarizing documents"; the source says "One estimate from Vectara, an AI startup, suggests chatbots hallucinate anywhere between 3 percent and 27 percent of the time, according to Vectara's public hallucination leaderboard on GitHub, which tracks the frequency of hallucinations among popular chatbots when summarizing documents."