Longterm Wiki

Fatal 2018 Uber self-driving car accident in Arizona

web · incidentdatabase.ai/cite/20/

An AI Incident Database entry documenting the first fatal autonomous vehicle pedestrian accident; critical case study for AI deployment safety, regulatory responses, and the limits of real-world robustness in ML systems.

Metadata

Importance: 72/100 · wiki page · primary source

Summary

Documents the March 2018 fatal collision in Tempe, Arizona, where an Uber autonomous test vehicle struck and killed pedestrian Elaine Herzberg. This was the first recorded pedestrian fatality involving a self-driving car, and the incident revealed critical failures in sensor fusion, emergency braking systems, and human safety oversight. It became a landmark case for AI safety, autonomous vehicle regulation, and deployment accountability.

Key Points

  • First pedestrian death caused by an autonomous vehicle; Uber's self-driving system failed to correctly classify the pedestrian and did not activate emergency braking.
  • The safety driver was distracted at the time of impact, highlighting the dangers of over-reliance on human backup oversight in semi-autonomous systems.
  • Post-incident investigations revealed the system had detected the pedestrian but repeatedly misclassified her, demonstrating robustness and generalization failures (see the sketch after this list).
  • Led to significant regulatory scrutiny, temporary suspension of Uber's AV program, and broader industry-wide safety protocol reviews.
  • Illustrates real-world consequences of deploying AI systems before adequate safety validation, serving as a key case study in responsible AI deployment.
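
The misclassification finding is worth unpacking. Per the NTSB investigation, each time the perception system changed the object's label (vehicle, bicycle, other), it discarded the object's tracking history, so the planner never accumulated a consistent trajectory from which to predict a path into the lane. The following minimal Python sketch illustrates that failure mode; the `Track` class, the label sequence, and the three-observation threshold are illustrative assumptions, not Uber's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """Hypothetical tracked object whose motion history is tied to its label."""
    label: str
    history: list = field(default_factory=list)  # positions since last label change

    def update(self, label: str, position: float) -> None:
        # Illustrative failure mode: a label change discards the motion
        # history, so path prediction restarts from a single observation.
        if label != self.label:
            self.label = label
            self.history = []
        self.history.append(position)

    def predicts_path(self) -> bool:
        # Assume the planner needs >= 3 consistent observations to extrapolate.
        return len(self.history) >= 3

# Per-frame labels reported for the same object approaching the lane
# (the flip-flopping mirrors the NTSB's description; values are invented).
frames = ["vehicle", "other", "bicycle", "other", "bicycle", "bicycle"]
track = Track(label="unknown")
for t, label in enumerate(frames):
    track.update(label, position=float(t))
    if track.predicts_path():
        print(f"frame {t}: trajectory available, braking decision possible")
        break
else:
    print("no usable trajectory before impact")  # history kept resetting
```

The point of the sketch is the coupling: tying motion history to a volatile classification means a flickering classifier silently starves the planner of the observations it needs to act.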

Cited by 1 page

Page                      Type   Quality
AI Distributional Shift   Risk   91.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 16 KB
Incident 20: A Collection of Tesla Autopilot-Involved Crashes

Description: Multiple unrelated car accidents resulting in varying levels of harm have occurred while a Tesla's Autopilot was in use.

Entities

Alleged: Tesla developed and deployed an AI system, which harmed Motorists.

Incident Stats

Incident ID: 20
Report Count: 22
Incident Date: 2016-06-30
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Incident Number

The number of the incident in the AI Incident Database.
Value: 20

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred; it asks only whether a special interest intangible harm occurred.
Value: No

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
Format: YYYY
Value: 2016

Date of Incident Month

The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank.
Format: MM
Value: 05

Date of Incident Day

The day on which the incident occurred. If a precise date is unavailable, leave blank.
Format: DD
Value: 07

Estimated Date

"Yes" if the date was estimated, "No" otherwise.
Value: No

CSETv0 Taxonomy Classifications

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
Value: Specification, Robustness

Physical System

Where relevant, indicates wh

... (truncated, 16 KB total)
Resource ID: e3ad4d7f973693b0 | Stable ID: sid_s9Fb1ecYf2