Reducing Hallucinations in AI-Generated Wiki Content - Footnote 25
The claim cites the 2025 AI Index Report, but the source does not mention this report, RLAIF (Reinforcement Learning from AI Feedback), or DPO (Direct Preference Optimization). The source does, however, support the claim that OpenAI's GPT-4 saw a 40% reduction in factual errors after undergoing RLHF training.
Our claim
No record data available.
Source evidence
1 src · 1 check