Longterm Wiki

AIAAIC Repository – AI Incidents, Controversies & Risks

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OECD

A practical reference for AI safety researchers and policymakers seeking empirical evidence of real-world AI failures and harms; complements theoretical safety work with documented incidents.

Metadata

Importance: 62/100 · tool page · dataset

Summary

The AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies) Repository is an independent public database cataloguing real-world incidents and controversies involving AI and algorithmic systems. It serves as a transparency resource for tracking harms, failures, and risks that have emerged from deployed AI systems. The repository is hosted on the OECD AI catalogue as a tool for researchers, policymakers, and practitioners.

Key Points

  • Maintains a public, searchable database of AI-related incidents, controversies, and systemic risks drawn from real-world deployments.
  • Covers a wide range of harm categories including bias, misinformation, surveillance, safety failures, and misuse of AI systems.
  • Supports AI governance and policy work by providing documented evidence of AI risks and failure modes.
  • Useful for red-teaming, risk assessment, and informing safety standards by grounding concerns in empirical case studies.
  • Independent and freely accessible, making it a key reference for transparency and accountability in AI development.
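The categorized tallying described in these points can be sketched in a few lines of Python, assuming the repository's entries have been exported as structured records. The field names and sample incidents below are illustrative assumptions, not the repository's actual schema:

```python
from collections import Counter

# Hypothetical records mirroring the kind of fields an AIAAIC export
# might contain (field names and entries here are illustrative, not
# the repository's real schema or data).
incidents = [
    {"system": "Welfare fraud scoring tool", "sector": "social welfare", "issue": "bias"},
    {"system": "Exam proctoring software",   "sector": "education",      "issue": "surveillance"},
    {"system": "News-summary chatbot",       "sector": "media",          "issue": "misinformation"},
    {"system": "Exam grading algorithm",     "sector": "education",      "issue": "bias"},
]

def tally_by(records, field):
    """Count how many incidents fall under each value of the given field."""
    return Counter(r[field] for r in records)

by_issue = tally_by(incidents, "issue")    # e.g. bias appears twice in this sample
by_sector = tally_by(incidents, "sector")
print(by_issue.most_common())
```

A researcher doing risk assessment might extend this pattern to cross-tabulate harm categories against sectors, grounding red-teaming priorities in the empirical distribution of documented failures.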

Review

The AIAAIC Repository represents a critical initiative in AI safety by systematically collecting and analyzing incidents related to artificial intelligence, algorithms, and automation. Started in 2019 as a private project, it has evolved into a comprehensive, open-access platform that serves researchers, academics, journalists, and policymakers worldwide in understanding AI's complex risk landscape. By cataloging real-world AI incidents across sectors like social welfare, education, and corporate governance, the repository offers a unique transparency mechanism for identifying potential systemic risks. Its independent nature, coupled with an open-source approach, enables broad collaboration and knowledge sharing. While the tool primarily functions as an educational and awareness-building resource, it significantly contributes to responsible AI development by providing empirical evidence of AI system failures and potential ethical challenges.

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 6 KB
AIAAIC Repository - OECD.AI

 Catalogue of Tools & Metrics for Trustworthy AI: These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe. 

 Overview · Tools · Metrics · About the catalogue · Contribute to the catalogue

 AIAAIC Repository

 Website Slack 

 An independent, open, public interest resource, the AIAAIC Repository details incid

... (truncated, 6 KB total)
Resource ID: f4e336365b5dfda9 | Stable ID: sid_tWDwS3KwHg