Microsoft Video Authenticator
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Microsoft
Relevant to AI safety discussions around misuse of generative AI; illustrates industry-led technical countermeasures to deepfake disinformation, with implications for AI deployment norms and content provenance standards.
Metadata
Summary
Microsoft introduces Video Authenticator, a tool that analyzes images and videos to detect AI-generated manipulations (deepfakes) by identifying subtle blending boundaries and grayscale elements invisible to the human eye. The initiative is part of a broader effort including partnerships with NewsGuard and media literacy campaigns to combat disinformation ahead of the 2020 U.S. election. Microsoft also introduced a content provenance system to help publishers and journalists signal content authenticity.
Key Points
- Video Authenticator provides real-time confidence scores on whether media has been artificially manipulated, detecting subtle deepfake artifacts.
- Microsoft partnered with NewsGuard and other organizations to distribute the tool to news organizations and campaigns to counter election disinformation.
- A content provenance technology was introduced to cryptographically certify the origin and history of media content.
- The tool is part of Microsoft's Defending Democracy Program and broader responsible AI deployment, acknowledging that deepfakes will improve and detection must evolve.
- Microsoft emphasized media literacy alongside technical tools, recognizing that technology alone cannot solve the disinformation problem.
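The provenance idea in the points above can be sketched in outline: a publisher computes a cryptographic digest of the media bytes and attaches a signed manifest, so any later edit to the content breaks verification. This is a minimal illustration, not Microsoft's actual system; it uses an HMAC with a shared demo key as a stand-in for the public-key certificates a production provenance scheme would use, and all names here are hypothetical.

```python
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret"  # hypothetical stand-in for a publisher signing key

def certify(media_bytes: bytes, key: bytes = PUBLISHER_KEY) -> dict:
    """Produce a toy provenance manifest: content digest plus a signature over it."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify(media_bytes: bytes, manifest: dict, key: bytes = PUBLISHER_KEY) -> bool:
    """Check that the content matches the digest and the digest was validly signed."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(expected, manifest["signature"])

video = b"original frames"
manifest = certify(video)
print(verify(video, manifest))               # True: untouched content verifies
print(verify(b"tampered frames", manifest))  # False: any edit breaks the digest
```

A real deployment would sign with an asymmetric key so consumers can verify without holding the publisher's secret, which is the property the announced technology relies on.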
Review
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Accelerated Reality Fragmentation | Risk | 28.0 |
Cached Content Preview
New steps to combat disinformation - Microsoft On the Issues
Today, we’re announcing two new technologies to combat disinformation, new work to help educate the public about the problem, and partnerships to help advance these technologies and educational efforts quickly.
There is no question that disinformation is widespread. Research we supported from Professor Jacob Shapiro at Princeton, updated this month, cataloged 96 separate foreign influence campaigns targeting 30 countries between 2013 and 2019. These campaigns, carried out on social media, sought to defame notable people, persuade the public or polarize debates. While 26% of these campaigns targeted the U.S., other countries targeted include Armenia, Australia, Brazil, Canada, France, Germany, the Netherlands, Poland, Saudi Arabia, South Africa, Taiwan, Ukraine, the United Kingdom and Yemen. Some 93% of these campaigns included the creation of original content, 86% amplified pre-existing content and 74% distorted objectively verifiable facts. Recent reports also show that disinformation has been distributed about the COVID-19 pandemic, leading to deaths and hospitalizations of people seeking supposed cures that are actually dangerous.
What we’re announcing today is an important part of Microsoft’s Defending Democracy Program, which, in addition to fighting disinformation, helps to protect voting through ElectionGuard and helps secure campaigns and others involved in the democratic process through AccountGuard, Microsoft 365 for Campaigns and Election Security Advisors. It’s also part of a broader focus on protecting and promoting journalism, as Brad Smith and Carol Ann Browne discussed in their Top Ten Tech Policy Issues for the 2020s.
New Technologies
Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. At Microsoft, we’ve been working on two separate technologies to address different aspects of the problem.
One major issue is deepfakes, or synthetic media: photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways. They could appear to make people say things they didn’t or to be in places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.
Today, we’re announcing Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it
... (truncated, 11 KB total)