Reducing Hallucinations in AI-Generated Wiki Content - Footnote 59
The claim attributes a 58-82% hallucination rate to Stanford research on legal AI tools, but the source says that figure came from a previous study of general-purpose chatbots, not the current study on legal tools. The claim also asserts that the Stanford research contradicts vendor marketing claims, whereas the source says the research shows the tools do reduce errors compared to general-purpose AI models.
Our claim
No record data available.
Source evidence
1 source · 1 check