Longterm Wiki

White House AI commitments

government

A primary government document marking an early formal U.S. effort to establish AI safety norms; useful for tracking the evolution of AI governance and the limitations of voluntary industry commitments prior to the October 2023 Executive Order on AI.

Metadata

Importance: 55/100 · press release · primary source

Summary

The Biden-Harris Administration secured voluntary commitments from seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI) around three pillars: safety testing before release, security protections for model weights, and trust mechanisms like watermarking. This represented an interim governance step ahead of a forthcoming executive order and legislative efforts, establishing a public accountability framework for industry self-regulation.

Key Points

  • Seven leading AI companies committed to pre-release internal and external safety testing and sharing risk information with governments and civil society.
  • Security commitments include cybersecurity protections for model weights and mechanisms for reporting AI-related vulnerabilities.
  • Trust commitments include watermarking AI-generated content and publishing transparency reports on AI capabilities and limitations.
  • Framed explicitly as voluntary and interim, with the administration simultaneously pursuing an executive order and bipartisan legislation for binding governance.
  • Represents an early high-profile attempt at government-industry coordination on AI safety norms in the U.S.

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 8 KB
FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI | The White House

This is historical material “frozen in time”. The website is no longer updated and links to external websites and some internal pages may not work.

Voluntary commitments – underscoring safety, security, and trust – mark a critical step toward developing responsible AI

 Biden-Harris Administration will continue to take decisive action by developing an Executive Order and pursuing bipartisan legislation to keep Americans safe 

 Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to seize the tremendous promise and manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety. As part of this commitment, President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.   

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe. To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.

These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.

There is much more work underway. The Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.

Today, these seven leading AI companies are committing to:

Ensuring Products are Safe Before Introducing Them to the Public

The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.

The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, informatio

... (truncated, 8 KB total)
Resource ID: a9468089fafed8cd | Stable ID: sid_m7wPXzsffC