Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: European Union

This is an official G7/EU policy document from October 2023, representing a major international voluntary governance initiative for advanced AI; relevant to understanding global AI governance coordination efforts and soft-law approaches to AI safety.

Metadata

Importance: 62/100 · policy brief · primary source

Summary

The G7's Hiroshima AI Process produced a voluntary International Code of Conduct for organizations developing advanced AI systems, including foundation models and generative AI. Building on the OECD AI Principles, it provides a non-exhaustive, risk-based framework covering the full AI lifecycle: design, development, deployment, and use. The document is intended as a living framework, updated through inclusive multistakeholder consultations.

Key Points

  • Voluntary code providing guidance for organizations developing advanced AI, including foundation models and generative AI systems.
  • Adopts a risk-based approach applicable across the full AI lifecycle: design, development, deployment, and use.
  • Builds on existing OECD AI Principles, extended to address recent developments in advanced AI systems.
  • Open to endorsement by academia, civil society, private sector, and public sector organizations.
  • Designed as a living document, to be updated through inclusive multistakeholder consultations as technology evolves.

Cited by 1 page

Page                            | Type     | Quality
Failed and Stalled AI Proposals | Analysis | 63.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 2 KB
Hiroshima Process International Code of Conduct for Advanced AI Systems | Shaping Europe’s digital future

POLICY AND LEGISLATION
Publication 30 October 2023

The International Code of Conduct for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.

Organizations should follow these actions in line with a risk-based approach.

Organizations that may endorse this Code of Conduct may include, among others, entities from academia, civil society, the private sector, and/or the public sector.

This non-exhaustive list of actions is discussed and elaborated as a living document to build on the existing OECD AI Principles in response to the recent developments in advanced AI systems and is meant to help seize the benefits and address the risks and challenges brought by these technologies. Organizations should apply these actions to all stages of the lifecycle to cover, when and as applicable, the design, development, deployment and use of advanced AI systems.

This document will be reviewed and updated as necessary, including through ongoing inclusive multistakeholder consultations, in order to ensure it remains fit for purpose and responsive to this rapidly evolving technology.

Different jurisdictions may take their own unique approaches to implementing these actions in different ways.

Downloads

Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (Download)

Related topics

International relations
Artificial intelligence

Related content

Press release
30 October 2023
Commission welcomes G7 leaders' agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence
The Commission welcomes the agreement by G7 leaders on International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers under the Hiroshima AI process.

Last update: 30 October 2023

Resource ID: 58462eaba21b0729 | Stable ID: sid_hLsalFtKv3