Adversarial Machine Learning Review 2025 - Springer
Springer (peer-reviewed) · link.springer.com/article/10.1007/s10462-025-11147-4
Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Springer
Data Status
Full text fetched Dec 28, 2025
Summary
This survey explores adversarial machine learning in healthcare, automotive, energy systems, and large language models, analyzing attack techniques, defense strategies, and emerging challenges. It provides a cross-domain perspective on AI system vulnerabilities and security.
Key Points
- First comprehensive cross-industry analysis of adversarial machine learning challenges
- Detailed taxonomy of adversarial attacks, including evasion, privacy, and poisoning techniques
- Practical recommendations for developing robust and privacy-preserving AI systems
Review
The paper offers a critical examination of adversarial machine learning (AML), addressing the growing security and privacy challenges in AI systems across multiple high-stakes industries. By systematically investigating attack vectors, defense mechanisms, and evaluation tools, the research highlights the complex landscape of AI vulnerabilities, particularly in domains where system failures could have significant consequences.

The methodology is robust, drawing on an extensive literature review across multiple scientific databases and focusing on publications from 2014-2025. The authors make significant contributions by providing a comprehensive taxonomy of adversarial attacks, including evasion, privacy, and poisoning attacks, while also offering practical insights into open-source tools and benchmarking techniques. The cross-domain approach is particularly valuable, as it allows for a holistic understanding of AML challenges that transcend individual industry sectors.
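To make the evasion category in the taxonomy concrete, the sketch below shows a one-step gradient-sign (FGSM-style) evasion attack on a toy logistic regression model. The weights, input, and epsilon are purely illustrative assumptions, not values from the surveyed paper; the point is only that a small, gradient-aligned perturbation of the input can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a linear model."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """One-step evasion: nudge x along the sign of the loss gradient.

    For binary cross-entropy, the gradient of the loss with respect
    to the input x is (p - y) * w, so the attack moves each feature
    by eps in the direction that increases the loss.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical trained weights and a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])
y = 1.0

p_clean = predict(w, b, x)            # confidently class 1 (> 0.5)
x_adv = fgsm_perturb(w, b, x, y, eps=0.9)
p_adv = predict(w, b, x_adv)          # confidence collapses (< 0.5)
```

The same gradient-sign idea underlies evasion attacks on deep networks; defenses such as adversarial training, also discussed in surveys of this kind, work by folding perturbed examples like `x_adv` back into the training loop.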
Resource ID:
5690b641011b8f9f | Stable ID: M2YwOGJjYT