Incidents

This section documents significant incidents involving AI systems: security breaches, misuse cases, accidents, and other events that provide concrete data points for understanding AI risks.

Incident documentation serves several purposes for AI safety:

  • Concrete evidence of risks that have actually materialized
  • Case studies for understanding attack vectors and failure modes
  • Calibration data for risk assessments and forecasts
  • Lessons learned for improving safety practices

Incidents included here generally meet one or more of these criteria:

  • First documented instance of a particular type of AI misuse or failure
  • Significant scale or impact
  • Novel attack methodology or failure mode
  • Substantial implications for AI safety discourse