BBC: Deepfakes in Court
Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: BBC
Relevant to AI safety discussions around misuse and governance, specifically how synthetic media generation capabilities create real-world harms in legal and institutional contexts.
Metadata
Importance: 42/100 · Type: news article
Summary
This BBC news article examines the growing legal challenges posed by deepfake technology in courtroom settings, exploring how AI-generated fake videos and audio threaten the integrity of digital evidence. It highlights concerns from legal experts about authentication difficulties and the potential for deepfakes to undermine judicial proceedings.
Key Points
- Deepfake technology creates convincing synthetic media that is increasingly difficult to distinguish from authentic recordings, posing major challenges for legal evidence.
- Courts and legal systems lack robust, standardized methods for authenticating digital evidence against AI-generated forgeries.
- Legal experts warn that deepfakes could be weaponized to introduce false evidence or to discredit legitimate evidence in criminal and civil cases.
- The rise of accessible deepfake tools lowers the barrier for bad actors to fabricate convincing video or audio recordings.
- Forensic detection tools are emerging but lag behind the rapid advancement of deepfake generation capabilities.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Driven Legal Evidence Crisis | Risk | 43.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 8 KB
Intel's deepfake detector tested on real and fake videos
22 July 2023 · James Clayton, North America technology reporter

In March last year a video appeared to show President Volodymyr Zelensky telling the people of Ukraine to lay down their arms and surrender to Russia.
It was a pretty obvious deepfake - a type of fake video that uses artificial intelligence to swap faces or create a digital version of someone.
But as AI developments make deepfakes easier to produce, detecting them quickly has become all the more important.
Intel believes it has a solution, and it is all about blood in your face.
The company has named the system "FakeCatcher".
In Intel's plush, and mostly empty, offices in Silicon Valley we meet Ilke Demir, research scientist at Intel Labs, who explains how it works.
"We ask what is real about authentic videos? What is real about us? What is the watermark of being human?" she says.
Central to the system is a technique called Photoplethysmography (PPG), which detects changes in blood flow.
Faces created by deepfakes don't give out these signals, she says.
The system also analyses eye movement to check for authenticity.
"So normally, when humans look at a point, when I look at you, it's as if I'm shooting rays from my eyes, to you. But for deepfakes, it's like googly eyes, they are divergent," she says.
By looking at both these traits, Intel believes it can work out the difference between a real video and a fake within seconds.
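The blood-flow idea described above can be sketched as a toy frequency-domain check: skin color varies slightly with blood flow at the heart rate, so averaging a face-region color channel per frame and looking for a spectral peak in the plausible heart-rate band (~0.7–4 Hz) gives a crude liveness signal. This is an illustrative assumption for exposition only, not Intel's actual FakeCatcher pipeline; all function names and thresholds here are invented.

```python
import numpy as np

def has_pulse_signal(roi_means, fps, band=(0.7, 4.0), snr_threshold=4.0):
    """Return True if per-frame skin-region means show a heart-rate-band peak.

    roi_means: 1-D array of mean skin-pixel intensity per video frame.
    fps: frames per second of the video.
    (Names and thresholds are illustrative, not from FakeCatcher.)
    """
    x = np.asarray(roi_means, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    out_band = ~in_band & (freqs > 0)
    if not in_band.any() or not out_band.any():
        return False
    # Compare the strongest in-band peak to the average out-of-band energy.
    snr = spectrum[in_band].max() / (spectrum[out_band].mean() + 1e-9)
    return bool(snr > snr_threshold)

# Synthetic demo: a 1.2 Hz "pulse" (72 bpm) plus noise vs. pure noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)
real = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
fake = 0.1 * rng.standard_normal(t.size)   # no periodic blood-flow signal
print(has_pulse_signal(real, fps))
print(has_pulse_signal(fake, fps))
```

A real detector would first need robust face tracking and skin segmentation, and would have to cope with compression artifacts and lighting changes; the point of the sketch is only that a genuine face carries a periodic physiological signal a synthesized one typically lacks.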
The compa
... (truncated, 8 KB total)

Resource ID: cf7d4c226d33b313 | Stable ID: sid_urEfOGNPcA