
Securing Agentic AI Systems - A Multilayer Security Framework

paper

Authors

Sunil Arora·John Hastings

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Research paper addressing security challenges in agentic AI systems, focusing on cyber risks from autonomous decision-making across critical sectors like healthcare and finance, contributing to understanding of AI safety and security frameworks.

Paper Details

Citations: 1 (0 influential)
Year: 2025
Methodology: Design Science Research (DSR)
Categories: AI & Society

Metadata

arXiv preprint · primary source

Abstract

Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by autonomous, decision-making, and adaptive behaviors. Agentic AI systems are increasingly deployed across industries, organizations, and critical sectors such as cybersecurity, finance, and healthcare. However, their autonomy introduces unique security challenges, including unauthorized actions, adversarial manipulation, and dynamic environmental interactions. Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems using the Design Science Research (DSR) methodology. The paper introduces MAAIS, an agentic security framework, and the agentic AI CIAA (Confidentiality, Integrity, Availability, and Accountability) concept. MAAIS integrates multiple defense layers to maintain CIAA across the AI lifecycle. Framework validation is conducted by mapping with the established MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) AI tactics. The study contributes a structured, standardized, and framework-based approach for the secure deployment and governance of agentic AI in enterprise environments. This framework is intended for enterprise CISOs, security, AI platform, and engineering teams and offers a detailed step-by-step approach to securing agentic AI workloads.

Summary

This research addresses security challenges specific to autonomous agentic AI systems by developing MAAIS, a lifecycle-aware security framework designed using Design Science Research methodology. The framework introduces the agentic AI CIAA concept (Confidentiality, Integrity, Availability, and Accountability) and integrates multiple defense layers to protect AI systems across their entire lifecycle. The approach is validated against MITRE ATLAS threat tactics and provides enterprise organizations with structured guidance for securing agentic AI deployments in critical sectors like cybersecurity, finance, and healthcare.
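The validation step described above, mapping each defense layer's controls to MITRE ATLAS tactics, can be pictured as a simple coverage check over a small data structure. The sketch below is a minimal illustration under assumed names: the lifecycle labels, control names, CIAA tags, and tactic IDs are placeholders, not the paper's actual MAAIS content, which this excerpt does not enumerate.

```python
# Hypothetical sketch only: the identifiers below are placeholders that
# illustrate the general shape of "map each defense-layer control to the
# ATLAS tactics it counters, then check coverage". They are not taken
# from the paper.
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str                 # placeholder control name
    ciaa: set[str]            # which of Confidentiality/Integrity/Availability/Accountability it protects
    atlas_tactics: set[str]   # placeholder ATLAS tactic IDs this control is mapped to

@dataclass
class DefenseLayer:
    lifecycle_stage: str                           # assumed lifecycle label, e.g. "deploy"
    controls: list[Control] = field(default_factory=list)

# Two illustrative layers (not the paper's actual framework).
maais_sketch = [
    DefenseLayer("deploy", [
        Control("least-privilege agent credentials",
                {"Confidentiality", "Accountability"}, {"AML.TA0004"}),
    ]),
    DefenseLayer("operate", [
        Control("agent action logging and review",
                {"Accountability"}, {"AML.TA0005"}),
    ]),
]

def uncovered_tactics(layers, required):
    """Return tactic IDs in `required` that no control is mapped to."""
    covered = {t for layer in layers for c in layer.controls for t in c.atlas_tactics}
    return set(required) - covered

# Coverage check against a placeholder tactic list.
print(uncovered_tactics(maais_sketch, {"AML.TA0004", "AML.TA0005", "AML.TA0006"}))
# -> {'AML.TA0006'}
```

A gap reported by such a check would indicate an ATLAS tactic with no mapped control, which is the kind of signal a lifecycle-aware framework review is meant to surface.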

Cited by 1 page

Page: Agentic AI · Type: Capability · Quality: 68.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 28 KB
Securing Agentic AI Systems - A Multilayer Security Framework

Sunil Arora, John Hastings
 
 
 
 I Introduction 

 
Artificial intelligence (AI) enables machines to perceive, reason, learn, and decide [russell2020aima]. Agentic AI is the latest development in the evolution of intelligent systems. Agentic AI systems can make decisions, plan actions, select tools to achieve an outcome, and adjust to changing environments without continuous human control [1acharya, HOSSEINI2025100399]. Their design allows them to pursue defined objectives while responding to new information and conditions in real time.

 
 
Unlike traditional machine learning, AI, or generative models that operate within fixed parameters, agentic AI systems show continuous improvements, reasoning, and autonomous behavior across diverse contexts. This ability of AI agents to achieve autonomy, automated workflows, and decision-making has generated significant interest from industries such as cybersecurity, finance, healthcare, transportation, medicine, and industrial automation [tayiba, PowellmedicineAgenticAI, KARUNANAYAKE202573], where autonomous operations offer significant efficiency and flexibility.

 
 
The same autonomy that makes agentic AI valuable also creates new security concerns. Existing AI security frameworks, such as the NIST AI RMF [nist2023ai_rmf], ENISA [ENISA_2023multilayer], EU AI Act [Eu202

... (truncated, 28 KB total)
Resource ID: 4f79c3dae1e7f82a | Stable ID: ZGIxY2I3ZT