Human performance in detecting deepfakes: A systematic review and meta-analysis
Web Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: ScienceDirect
Data Status
Not fetched
Cited by 6 pages
| Page | Type | Quality |
|---|---|---|
| AI Content Authentication | Approach | 58.0 |
| Deepfake Detection | Approach | 91.0 |
| AI-Era Epistemic Security | Approach | 63.0 |
| Authentication Collapse | Risk | 57.0 |
| Epistemic Collapse | Risk | 49.0 |
| AI-Powered Fraud | Risk | 69.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 27, 2026 · 4 KB
## [Computers in Human Behavior Reports](https://www.sciencedirect.com/journal/computers-in-human-behavior-reports "Go to Computers in Human Behavior Reports on ScienceDirect")
[Volume 16](https://www.sciencedirect.com/journal/computers-in-human-behavior-reports/vol/16/suppl/C "Go to table of contents for this volume/issue"), December 2024, 100538
# Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers
Alexander Diel, Tania Lalgi, Isabel Carolin Schröter, Karl F. MacDorman, Martin Teufel, Alexander Bäuerle
[https://doi.org/10.1016/j.chbr.2024.100538](https://doi.org/10.1016/j.chbr.2024.100538 "Persistent link using digital object identifier")
Under a Creative Commons [license](http://creativecommons.org/licenses/by-nc-nd/4.0/)
Open access
## Highlights
- Synthesized human deepfake detection is at chance across modalities.
- Synthesized human deepfake detection is worse than detection of real stimuli.
- Strategies aimed at improving deepfake detection successfully increase performance.
## Abstract
_Deepfakes_ are AI-generated media designed to look real, often with the intent to deceive. Deepfakes threaten public and personal safety by facilitating disinformation, propaganda, and identity theft. Though research has been conducted on human performance in deepfake detection, the results have not yet been synthesized. This systematic review and meta-analysis investigates human deepfake detection accuracy. Searches in PubMed, ScienceGov, JSTOR, Google Scholar, and paper references, conducted in June and October 2024, identified empirical studies measuring human detection of high-quality deepfakes. After pooling accuracy, odds-ratio, and sensitivity (_d'_) effect sizes (_k_ = 137 effects) from 56 papers involving 86,155 participants, we analyzed 1) overall deepfake detection performance, 2) performance across stimulus types (audio, image, text, and video), and 3) the effects of detection-improvement strategies. Overall deepfake dete
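The abstract pools sensitivity (_d'_) effect sizes across studies. As a quick illustration of what _d'_ measures in a detection task (this is standard signal-detection theory, not the paper's own code; the example rates are hypothetical), a minimal sketch:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A participant who correctly flags 70% of deepfakes but also flags
# 30% of real stimuli as fake has modest sensitivity (~1.05).
print(round(d_prime(0.70, 0.30), 3))

# Chance-level performance (hit rate equals false-alarm rate) gives d' = 0,
# which is the "at chance" result the review's highlights report.
print(d_prime(0.50, 0.50))
```

A _d'_ of 0 corresponds to guessing, which is why a pooled detection rate at chance implies near-zero sensitivity regardless of response bias.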
... (truncated, 4 KB total)
Resource ID: 5c1ad27ec9acc6f4 | Stable ID: NDFlYTRkZj