MIT Media Lab: Detecting Deepfakes
media.mit.edu/projects/detect-fakes/overview/
Relevant to AI safety discussions around misuse of generative models; this project represents a public-facing, human-centered approach to mitigating deepfake harms rather than a purely technical solution.
Metadata
Importance: 42/100 · tool page · educational
Summary
MIT Media Lab's Detect Fakes project investigates how people can identify AI-generated media, particularly synthetic video and audio. The project uses an experimental website to test and train public ability to spot deepfakes through critical observation techniques. It aims to raise awareness and build human-level media literacy as a defense against AI-generated disinformation.
Key Points
- Develops interactive tools to help general audiences distinguish real media from AI-generated deepfakes through hands-on experience.
- Focuses on human perceptual skills and critical observation as a complement to automated detection methods.
- Addresses the societal risks of synthetic media by building public awareness and media literacy.
- Part of broader MIT Media Lab research into the trustworthiness and authenticity of digital content.
- Highlights the growing challenge of content verification as generative AI capabilities advance rapidly.
Review
The Detect Fakes project by MIT Media Lab addresses the growing challenge of AI-generated media manipulation by developing strategies to help ordinary people critically evaluate digital content. By creating an interactive website and publishing detailed guidelines, the researchers aim to improve public understanding of deepfake technologies and their risks. The project's methodology exposes users to curated pairs of deepfake and authentic videos, teaching them to recognize subtle computational manipulations through eight key observation points, including facial features, skin texture, eye movements, lighting, and lip synchronization. The approach deliberately relies on human perception and critical thinking rather than automated detection algorithms, making it an important complement to technical deepfake detection methods in combating misinformation.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Authentication Collapse | Risk | 57.0 |
| AI-Driven Legal Evidence Crisis | Risk | 43.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 6 KB
Overview ‹ Detect DeepFakes: How to counteract misinformation created by AI — MIT Media Lab
Project
Detect DeepFakes: How to counteract misinformation created by AI
Creative Commons
Attribution 4.0 International
Project Contact:
Matt Groh
matthew.groh@kellogg.northwestern.edu
Project Website
Other Press Inquiries
Groups
Detect DeepFakes: How to counteract misinformation created by AI was active from April 2020 to January 2025
See for yourself how accurately you can identify AI-generated images at the DetectFakes Experiment. If you want to learn to spot deepfakes, check out our recent paper, How to Distinguish AI-Generated Images from Authentic Photographs.
You can find more of our work in publications in PNAS, a workshop at IJCAI, and a pre-print on arXiv.
Check out a video from the Election Misinformation Symposium: Fighting Misinfo Through Fact-checking and Deepfake Detection
Find our deepfake research discussed in the news: Science, Scientific American, BBC, WSJ, NYT, and NPR.
Now, here's an excerpt from a couple of years ago:
How do you spot a DeepFake? How good are DeepFake videos? How well can ordinary people tell the difference between a video manipulated by AI and a normal, non-altered video? Rather than try to explain in words, we built the Detect Fakes website so you can see the answer for yourself. Detect Fakes is a research project designed to answer these questions and identify techniques to counteract AI-generated misinformation. It turns out there are many subtle signs that a video has been algorithmically manipulated. Some subtleties are explained in detail below.
Read the latest paper on this research
The airbrush effect on the left is an example of an artificial intelligence manipulation
Credit: Matt
We already know DeepFakes can be quite believable, but just how believa
... (truncated, 6 KB total)
Resource ID: a26a9dd48ceec146 | Stable ID: sid_2bc8sS59Dy