Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Microsoft

Relevant to AI safety discussions around misuse of generative AI; illustrates industry-led technical countermeasures to deepfake disinformation, with implications for AI deployment norms and content provenance standards.

Metadata

Importance: 42/100
Tags: press release, news

Summary

Microsoft introduces Video Authenticator, a tool that analyzes images and videos to detect AI-generated manipulations (deepfakes) by identifying subtle blending boundaries and grayscale elements invisible to the human eye. The tool is part of a broader effort to combat disinformation ahead of the 2020 U.S. election, which also includes a partnership with NewsGuard and media literacy campaigns. Microsoft additionally introduced a content provenance system to help publishers and journalists signal content authenticity.

Key Points

  • Video Authenticator provides real-time confidence scores on whether media has been artificially manipulated, detecting subtle deepfake artifacts.
  • Microsoft is making the tool available to news organizations and political campaigns through partners such as the AI Foundation, alongside a separate NewsGuard partnership, to counter election disinformation.
  • A content provenance technology was introduced to cryptographically certify the origin and history of media content (see the hash-and-signature sketch after this list).
  • The tool is part of Microsoft's Defending Democracy Program and its broader responsible AI work; Microsoft acknowledges that deepfake generation will keep improving, so detection must evolve alongside it.
  • Microsoft emphasized media literacy alongside technical tools, recognizing that technology alone cannot solve the disinformation problem.
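
The provenance point above is essentially a hash-and-sign pattern: the publisher fingerprints the content and signs the fingerprint, and any reader can recheck both. Below is a rough, minimal sketch of that pattern, assuming an Ed25519 publisher keypair and the `cryptography` package; the function names are ours, and Microsoft's actual certificate format was not published in this post.

```python
# Minimal hash-and-signature provenance sketch. Assumptions: an Ed25519
# publisher keypair and the `cryptography` package; this illustrates the
# general pattern, not Microsoft's actual provenance format.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def certify(media: bytes, key: Ed25519PrivateKey) -> bytes:
    """Publisher side: fingerprint the media and sign the digest."""
    return key.sign(hashlib.sha256(media).digest())


def verify(media: bytes, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Reader side: recompute the digest and check the signature."""
    try:
        pub.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False


# Usage: certify once at publication; verify anywhere downstream.
key = Ed25519PrivateKey.generate()
original = b"raw media bytes"
sig = certify(original, key)
assert verify(original, sig, key.public_key())             # authentic copy
assert not verify(original + b"!", sig, key.public_key())  # tampered copy
```

Any edit to the media changes the digest and breaks the signature check, which is what lets readers far downstream of the publisher detect tampering.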

Review

Microsoft's approach to addressing disinformation represents a multi-faceted strategy combining technological innovation and educational initiatives. The Video Authenticator, developed by Microsoft Research and the Responsible AI team, provides a real-time confidence score for detecting artificially manipulated media by analyzing subtle visual cues that might escape human perception. The technology acknowledges its own limitations, recognizing that AI detection methods are not infallible and will need continuous evolution. Microsoft's comprehensive strategy extends beyond technical solutions, including partnerships with media organizations, academic institutions, and initiatives like Project Origin and media literacy programs. By collaborating with entities like the AI Foundation, BBC, and University of Washington, Microsoft aims to create a holistic approach to combating synthetic media and disinformation, emphasizing both technological detection and public education.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 11 KB
New steps to combat disinformation - Microsoft On the Issues
 Today, we’re announcing two new technologies to combat disinformation, new work to help educate the public about the problem, and partnerships to help advance these technologies and educational efforts quickly.

 There is no question that disinformation is widespread. Research we supported from Professor Jacob Shapiro at Princeton, updated this month, cataloged 96 separate foreign influence campaigns targeting 30 countries between 2013 and 2019. These campaigns, carried out on social media, sought to defame notable people, persuade the public or polarize debates. While 26% of these campaigns targeted the U.S., other countries targeted include Armenia, Australia, Brazil, Canada, France, Germany, the Netherlands, Poland, Saudi Arabia, South Africa, Taiwan, Ukraine, the United Kingdom and Yemen. Some 93% of these campaigns included the creation of original content, 86% amplified pre-existing content and 74% distorted objectively verifiable facts. Recent reports also show that disinformation has been distributed about the COVID-19 pandemic, leading to deaths and hospitalizations of people seeking supposed cures that are actually dangerous.

 What we’re announcing today is an important part of Microsoft’s Defending Democracy Program, which, in addition to fighting disinformation, helps to protect voting through ElectionGuard and helps secure campaigns and others involved in the democratic process through AccountGuard, Microsoft 365 for Campaigns and Election Security Advisors. It’s also part of a broader focus on protecting and promoting journalism, as Brad Smith and Carol Ann Browne discussed in their Top Ten Tech Policy Issues for the 2020s.

 New Technologies 

 Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. At Microsoft, we’ve been working on two separate technologies to address different aspects of the problem.

 One major issue is deepfakes, or synthetic media, which are photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways. They could appear to make people say things they didn’t or to be places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.

 Today, we’re announcing Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it 

... (truncated, 11 KB total)
Resource ID: 97907cd3e6b9f226 | Stable ID: sid_tqwfzqsMf9