Adversarial Machine Learning Review 2025 - Springer
Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Springer
A comprehensive 2025 survey of adversarial machine learning covering attack and defense techniques across healthcare, automotive, energy, and LLM domains, providing critical analysis of AI system vulnerabilities and security challenges.
Paper Details
Citations
0
Year
2025
Methodology
journal article
Categories
Adversarial Machine Learning
Metadata
journal article, analysis
Summary
This survey explores adversarial machine learning in healthcare, automotive, energy systems, and large language models, analyzing attack techniques, defense strategies, and emerging challenges. It provides a cross-domain perspective on AI system vulnerabilities and security.
Key Points
- First comprehensive cross-industry analysis of adversarial machine learning challenges
- Detailed taxonomy of adversarial attacks including evasion, privacy, and poisoning techniques
- Practical recommendations for developing robust and privacy-preserving AI systems
Review
The paper offers a critical examination of adversarial machine learning (AML), addressing the growing security and privacy challenges in AI systems across multiple high-stakes industries. By systematically investigating attack vectors, defense mechanisms, and evaluation tools, the research highlights the complex landscape of AI vulnerabilities, particularly in domains where system failures could have significant consequences.

The methodology is robust, utilizing an extensive literature review across multiple scientific databases and focusing on publications from 2014-2025. The authors make significant contributions by providing a comprehensive taxonomy of adversarial attacks, including evasion, privacy, and poisoning attacks, while also offering practical insights into open-source tools and benchmarking techniques. The cross-domain approach is particularly valuable, as it allows for a holistic understanding of AML challenges that transcend individual industry sectors.
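To make the evasion category of the taxonomy concrete, the following is a minimal, self-contained sketch of a gradient-sign (FGSM-style) evasion attack against a toy logistic regression model in NumPy. The model weights, input, and epsilon value are illustrative assumptions, not taken from the survey itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability of the positive class for input x."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the direction of the sign of the loss gradient.

    For binary cross-entropy loss, the gradient with respect to the
    input is (p - y) * w, where p is the predicted probability.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and an input the model classifies correctly (label y = 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # confident, correct prediction
print(predict(w, b, x_adv))  # confidence drops after the perturbation
```

The same one-step pattern underlies many of the evasion attacks the survey catalogs; stronger variants iterate the gradient step under a norm constraint.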
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 2 KB
# Adversarial machine learning: a review of methods, tools, and critical industry sectors

Authors: Sotiris Pelekis, Thanos Koutroubas, Afroditi Blika, Anastasis Berdelis, Evangelos Karakolis, Christos Ntanos, Evangelos Spiliotis, Dimitris Askounis
Journal: Artificial Intelligence Review
Published: 2025-05-03
DOI: 10.1007/s10462-025-11147-4

## Abstract

The rapid advancement of Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), has produced high-performance models widely used in various applications, ranging from image recognition and chatbots to autonomous driving and smart grid systems. However, security threats arise from the vulnerabilities of ML models to adversarial attacks and data poisoning, posing risks such as system malfunctions and decision errors. Meanwhile, data privacy concerns arise, especially with personal data being used in model training, which can lead to data breaches. This paper surveys the Adversarial Machine Learning (AML) landscape in modern AI systems, while focusing on the dual aspects of robustness and privacy. Initially, we explore adversarial attacks and defenses using comprehensive taxonomies. Subsequently, we investigate robustness benchmarks alongside open-source AML technologies and software tools that ML system stakeholders can use to develop robust AI systems. Lastly, we delve into the landscape of AML in four industry fields –automotive, digital healthcare, electrical power and energy systems (EPES), and Large Language Model (LLM)-based Natural Language Processing (NLP) systems– analyzing attacks, defenses, and evaluation concepts, thereby offering a holistic view of the modern AI-reliant industry and promoting enhanced ML robustness and privacy preservation in the future.