Longterm Wiki

Reducing Hallucinations in AI-Generated Wiki Content - Footnote 25

Verdict: partial · 85%
1 check · 4/3/2026

The claim cites the 2025 AI Index Report, but the source mentions neither that report nor RLAIF (Reinforcement Learning from AI Feedback) nor DPO (Direct Preference Optimization). The source does, however, mention OpenAI's GPT-4 seeing a 40% reduction in factual errors after undergoing RLHF training.

Our claim

Entire record: no record data available.

Source evidence

1 src · 1 check
partial · 85% · Haiku 4.5 · 4/3/2026

Note: The claim cites the 2025 AI Index Report, but the source mentions neither that report nor RLAIF (Reinforcement Learning from AI Feedback) nor DPO (Direct Preference Optimization). The source does, however, mention OpenAI's GPT-4 seeing a 40% reduction in factual errors after undergoing RLHF training.

Case № page:reducing-hallucinations:fn25 · Filed 4/3/2026 · Confidence 85%