Longterm Wiki

OECD AI Policy Observatory: National Policies

web

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OECD

This OECD database is a key reference for tracking how governments globally are regulating and governing AI, relevant to anyone studying international AI policy coordination and the institutional landscape surrounding AI safety governance.

Metadata

Importance: 55/100 · organizational report · reference

Summary

The OECD AI Policy Observatory tracks and catalogues national AI strategies, policies, and regulations from countries worldwide. It serves as a comprehensive reference hub for understanding how different governments are approaching AI governance, including safety, ethics, and deployment frameworks. The resource enables cross-country comparison of policy approaches to AI development and oversight.

Key Points

  • Aggregates national AI strategies and policy documents from OECD member and partner countries in one searchable database
  • Allows comparison of how different nations address AI safety, ethics, accountability, and governance in their national frameworks
  • Tracks evolution of AI policy over time, reflecting growing governmental attention to AI risks and opportunities
  • Connects to the OECD AI Principles, providing normative context for evaluating national policy approaches
  • Useful for researchers and policymakers seeking to understand international coordination and divergence in AI governance

Review

The OECD report provides a comprehensive overview of how countries are approaching AI governance through national strategies and policy frameworks. It highlights a significant global shift towards structured AI policy-making, with over 50 national strategic initiatives and 930 policy efforts documented by May 2023. Countries are adopting diverse approaches, ranging from creating dedicated AI governance bodies to establishing multi-stakeholder advisory groups and developing regulatory sandboxes. The analysis reveals key implementation strategies across five core principles: inclusive growth, human-centered values, transparency, robustness, and accountability. While approaches vary, there's a clear trend towards creating ethical frameworks, developing soft and hard laws, and establishing monitoring mechanisms. The report underscores the importance of international cooperation, with initiatives like the G7 Hiroshima AI process demonstrating a collaborative approach to addressing AI's challenges and opportunities.

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 21 KB
How countries are implementing the OECD Principles for Trustworthy AI - OECD.AI 
 
 
 
 
 
 

Intergovernmental

 Lucia Russo , Noah Oder 

October 31, 2023 · 9 min read

 
 
 
 
 
 

 

 Since signing on to the OECD AI Principles in 2019, countries have been using them as guidance to craft policies to tackle AI risks and capitalise on opportunities. The Principles are a global reference and the first intergovernmental standard for trust

... (truncated, 21 KB total)
Resource ID: 158abf058791d842 | Stable ID: sid_2S12MgVk8R