OECD: AI Safety Institutes Challenge
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OECD
Published by OECD in July 2024, this piece provides a comparative overview of national AI Safety Institutes and the systemic challenges they face, useful for understanding the international policy landscape around AI safety governance.
Metadata
Summary
This OECD analysis examines the emerging landscape of national AI Safety Institutes (AISIs) established by the US, UK, Japan, Canada, Singapore, and EU, assessing their roles in evaluating AI capabilities and risks. It identifies key challenges these bodies face, including evaluating inherently unpredictable AI systems, establishing evaluation standards, conducting safety research, and coordinating internationally. The piece argues that while AISIs represent a significant step toward coordinated global AI safety governance, substantial structural and resource challenges remain.
Key Points
- Multiple countries (US, UK, Japan, Canada, Singapore) have established dedicated AI Safety Institutes to evaluate AI capabilities, conduct safety research, and share information.
- AISIs face the challenge of evaluating inherently unpredictable AI systems whose capabilities and risks may evolve faster than current evaluation frameworks.
- Standardized evaluation approaches are lacking; some sensitive evaluations (e.g., national security) can only be conducted by authorized government bodies.
- The EU AI Office's Safety Unit fulfills similar functions to AISIs but also carries primary regulatory responsibilities, creating a distinct hybrid model.
- International coordination among AISIs is critical, as AI risks and capabilities are inherently cross-border and no single country can address them alone.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Safety Institutes (AISIs) | Policy | 69.0 |
Cached Content Preview
AI Safety Institutes: Can countries meet the challenge? - OECD.AI
AI Safety Institutes: Can countries meet the challenge?
Alexandre Variengien , Charles Martinet
July 29, 2024 — 7 min read
Governments need to understand what these models can do to manage the risks and seize the benefits of AI. In recent months, recognising the need to keep up with the unprecedented pace of AI development, the U.S., U.K., Japan, Canada and Singapore have established speciali
... (truncated, 15 KB total) | Stable ID: sid_4FcwJnhp05