Longterm Wiki

AI Incident Database

Web: incidentdatabase.ai/

The AIID is a key empirical reference for AI safety researchers studying real-world deployment failures; it grounds theoretical risk concerns in documented, concrete harms.

Metadata

Importance: 72/100
Type: dataset

Summary

The AI Incident Database is a publicly accessible repository cataloging real-world failures, harms, and unintended consequences caused by deployed AI systems. It serves as an empirical record to help researchers, policymakers, and developers learn from past mistakes and improve AI safety practices. The database enables systematic study of AI failure modes across industries and applications.

Key Points

  • Catalogs more than a thousand documented AI incidents spanning domains such as healthcare, criminal justice, autonomous vehicles, and content moderation.
  • Provides structured data on AI harms to support empirical research into failure patterns and risk factors (see the sketch after this list).
  • Serves as a learning resource for developers and policymakers to anticipate and mitigate similar future failures.
  • Maintained as an open, community-contributed resource to ensure broad coverage of AI system failures globally.
  • Useful for red-teaming and safety evaluation by illustrating real-world consequences of misaligned or miscalibrated AI systems.
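
To give a concrete sense of the empirical research this structured data supports, here is a minimal sketch that tallies incidents by sector from a locally downloaded export. The file name (aiid_incidents.csv) and the column name (sector) are hypothetical placeholders for illustration; the actual schema of the database's published exports may differ.

```python
# Minimal sketch: tally incidents by sector from a hypothetical local CSV
# export of the AI Incident Database. The file name and column name used
# here are illustrative assumptions, not the database's actual schema.
import csv
from collections import Counter


def incidents_by_sector(path: str) -> Counter:
    """Count incidents per sector in a local CSV export."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row.get("sector") or "unknown"] += 1
    return counts


if __name__ == "__main__":
    # Usage against a hypothetical export file.
    for sector, n in incidents_by_sector("aiid_incidents.csv").most_common():
        print(f"{sector}: {n}")
```

A tally like this is the simplest form of the failure-pattern analysis described above; the same export could equally be grouped by date or harmed party to study trends over time.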

Review

The AI Incident Database serves as a critical resource for tracking and analyzing real-world AI system failures, providing transparency and insight into the potential risks associated with emerging artificial intelligence technologies. By documenting incidents across different sectors, including education, healthcare, law enforcement, and social media, the database offers a systematic approach to understanding AI's unintended consequences and potential pitfalls.

The database's methodology of collecting, categorizing, and presenting detailed incident reports represents an important contribution to AI safety research. By creating a publicly accessible repository of AI-related mishaps, the project enables researchers, policymakers, and technology developers to learn from past mistakes, identify recurring patterns, and develop more robust safeguards and ethical guidelines for AI system design and deployment.

Cited by 2 pages

Page | Type | Quality
Persuasion and Social Manipulation | Capability | 63.0
AI Misuse Risk Cruxes | Crux | 65.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 17 KB
Welcome to the AI Incident Database

Incident 1442: Kiro AI Coding Tool Was Reportedly Implicated in 13-Hour AWS Cost Explorer Outage in Mainland China

Latest Incident Report: “Amazon blames human employees for an AI coding agent’s mistake”

theverge.com · 2026-04-05 · Amazon Web Services suffered a 13-hour outage to one system in December as a result of its AI coding assistant Kiro's actions, according to the Financial Times. Numerous unnamed Amazon employees told the FT that AI agent Kiro was responsible for the December incident affecting an AWS service in parts of mainland China. People familiar with the matter said the tool chose to "delete and recreate the environment" it was working on, which caused the outage.

 While Kiro normally requires sign-off from two humans to push changes, the bot had the permissions of its operator, and a human error there allowed more access than expected.

Amazon described the December disruption as an "extremely limited event" that pales in comparison to a major outage in October, which took down online services, like Alexa, Fortnite, ChatGPT, and Amazon for hours. An outage that didn't trap anyone in their smart bed is something of a lucky escape.

 It is not the only time AI coding tools have caused problems for Amazon. A senior AWS employee said the December outage is the second production outage linked to an AI tool in the last few months, with another linked to Amazon's AI chatbot Q Developer. The employee described the outages as "small but entirely foreseeable." Amazon said the second incident did not impact a "customer facing AWS service."

 Amazon blames human error for the problems, not the rogue bot, and said it has "implemented numerous safeguards" like staff training following the incident. The company said it's a "coincidence that AI tools were involved" and insists that "the same issue could occur with any developer tool or manual action." That's true, and though I'm not an engineer, I'd guess one wouldn't deliberately scrap and rebuild something to make a change in all but the most dire of circumstances.

Incident 1444: Hachette Reportedly Canceled Publication of Mia Ballard's Shy Girl After Generative AI Authorship Allegations

“Publisher Pulls ‘Shy Girl’ Horror Novel After AI Allegations”

wsj.com · 2026-04-05 · The Hachette Book Group said Thursday that it has canceled the publication of horror novel "Shy Girl" following an investigation into the origins of the book.

 The novel, by Mia Ballard, was expected to publish May 19 in the U.S. via its Orbit U.S. imprint. Hachette said it

... (truncated, 17 KB total)
Resource ID: baac25fa61cb2244 | Stable ID: NDI3MTNjNT