2025 review in AI & Society
Paper
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Springer
Relevant to AI safety researchers and practitioners concerned with human-AI teaming failures; challenges the assumption that XAI tools reliably improve human oversight and decision-making in high-stakes settings.
Summary
This systematic review of 35 studies challenges the view that automation bias stems solely from over-trust, identifying multiple interacting factors including AI literacy, expertise, and cognitive profiles. Notably, it finds that Explainable AI and transparency mechanisms frequently fail to reduce automation bias or improve decision accuracy. The authors argue that designs promoting active user verification are more effective interventions than explanations alone.
Key Points
- Automation bias is driven by multiple interacting factors (AI literacy, professional expertise, cognitive profiles, and trust dynamics), not just over-trust or attention failures.
- XAI and transparency mechanisms improve perceived acceptability but often fail to reduce automation bias or improve actual decision accuracy.
- User engagement and critical, independent verification are identified as more effective interventions than explanations alone.
- The findings have significant implications for high-stakes domains such as healthcare and law, where over-reliance on AI can cause serious harm.
- Recommends adaptive explanation designs that actively prompt users to verify AI recommendations rather than passively present rationales.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| AI-Human Hybrid Systems | Approach | 91.0 |
| Automation Bias (AI Systems) | Risk | 56.0 |
| AI-Driven Institutional Decision Capture | Risk | 73.0 |
Cached Content Preview
# Exploring automation bias in human–AI collaboration: a review and implications for explainable AI

Authors: Giuseppe Romeo, Daniela Conti
Journal: AI & SOCIETY
Published: 2026-01
DOI: 10.1007/s00146-025-02422-7

## Abstract

As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public administration, automation bias (AB)—the tendency to over-rely on automated recommendations—has emerged as a critical challenge in human–AI collaboration. While previous reviews have examined AB in traditional computer-assisted decision-making, research on its implications in modern AI-driven work environments remains limited. To address this gap, this research systematically investigates how AB manifests in these settings and the cognitive mechanisms that influence it. Following PRISMA 2020 guidelines, we reviewed 35 peer-reviewed studies from SCOPUS, ScienceDirect, PubMed, and Google Scholar. The included literature, published between January 2015 and April 2025, spans fields such as cognitive psychology, human factors engineering, human–computer interaction, and neuroscience, providing an interdisciplinary foundation for our analysis. Traditional perspectives attribute AB to over-trust in automation or attentional constraints, resulting in users perceiving AI-generated outputs as reliable. However, our review presents a more nuanced view. While confirming some prior findings, it also sheds light on additional interacting factors such as AI literacy, level of professional expertise, cognitive profile, developmental trust dynamics, task verification demands, and explanation complexity. Notably, although Explainable AI (XAI) and transparency mechanisms are designed to mitigate AB, overly technical, cognitively demanding, or even simplistic explanations may inadvertently reinforce misplaced trust, especially among less experienced professionals with low AI literacy. Taken together, these findings suggest that although explanations may increase perceived system acceptability, they are often insufficient to improve decision accuracy or mitigate AB. Instead, user engagement emerges as the most feasible and impactful point of intervention. As increased verification effort has been shown to reduce complacency toward AI mis-recommendations, we propose explanation design strategies that actively promote critical engagement and independent verification. These conclusions offer both theoretical and practical contributions to bias-aware AI development, underscoring that explanation usability is best supported by features such as understandability and adaptiveness.
a96cbf6f98644f2f | Stable ID: sid_4FTUI8IQYz