Securing Agentic AI Systems -- A Multilayer Security Framework

Reading time: 5 minutes

📝 Original Info

  • Title: Securing Agentic AI Systems – A Multilayer Security Framework
  • ArXiv ID: 2512.18043
  • Date: 2025-12-19
  • Authors: Sunil Arora, John Hastings

📝 Abstract

Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by autonomous, decision-making, and adaptive behaviors. Agentic AI systems are increasingly deployed across industries, organizations, and critical sectors such as cybersecurity, finance, and healthcare. However, their autonomy introduces unique security challenges, including unauthorized actions, adversarial manipulation, and dynamic environmental interactions. Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems using the Design Science Research (DSR) methodology. The paper introduces MAAIS, an agentic security framework, and the agentic AI CIAA (Confidentiality, Integrity, Availability, and Accountability) concept. MAAIS integrates multiple defense layers to maintain CIAA across the AI lifecycle. Framework validation is conducted by mapping it to the established MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) AI tactics. The study contributes a structured, standardized, and framework-based approach for the secure deployment and governance of agentic AI in enterprise environments. This framework is intended for enterprise CISOs, security, AI platform, and engineering teams and offers a detailed step-by-step approach to securing agentic AI workloads.

💡 Deep Analysis

Figure 1 (image not available in this extraction)

📄 Full Content

Artificial intelligence (AI) enables machines to perceive, reason, learn, and decide [1]. Agentic AI is the latest development in the evolution of intelligent systems. Agentic AI systems can make decisions, plan actions, select tools to achieve an outcome, and adjust to changing environments without continuous human control [2], [3]. Their design allows them to pursue defined objectives while responding to new information and conditions in real time.

Unlike traditional machine learning systems or generative models that operate within fixed parameters, agentic AI systems show continuous improvement, reasoning, and autonomous behavior across diverse contexts. This ability of AI agents to achieve autonomy, automated workflows, and decision-making has generated significant interest from industries such as cybersecurity, finance, healthcare, transportation, medicine, and industrial automation [4], [5], [6], where autonomous operations offer significant efficiency and flexibility.

The same autonomy that makes agentic AI valuable also creates new security concerns. Existing AI security frameworks, such as NIST AI RMF [7], ENISA [8], the EU AI Act [9], and ISO/IEC 42001:2023 [10], are designed for general and generative AI models and systems and do not account for continuous, autonomous decision-making capabilities. Industry standards and risk frameworks, such as the EY Agentic AI risk framework [11], outline risks but lack the technical guidance and capabilities to mitigate agentic AI risks. Agentic AI introduces risks, such as unauthorized actions, adversarial interference, goal drift, and data misuse, arising from dynamic interactions with complex environments. In addition, evolving reasoning processes and unpredictable adaptations make it challenging to verify system security, integrity, and compliance once deployed.
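To make the "unauthorized actions" risk concrete, the sketch below shows one common mitigation pattern: a per-agent tool allowlist checked before any tool call executes. This is an illustrative example only; the tool names and the guard itself are assumptions, not constructs taken from the paper or from any specific agent framework.

```python
from dataclasses import dataclass

# Hypothetical allowlist for one agent's approved scope of action.
ALLOWED_TOOLS = {"search_docs", "summarize"}

@dataclass
class ToolCall:
    tool: str
    args: dict

def authorize(call: ToolCall) -> bool:
    """Permit a tool call only if the tool is on the agent's allowlist."""
    return call.tool in ALLOWED_TOOLS

# A benign call is allowed; an out-of-scope destructive call is blocked.
calls = [ToolCall("search_docs", {"q": "CIAA"}),
         ToolCall("delete_records", {"table": "users"})]
decisions = [authorize(c) for c in calls]
print(decisions)  # → [True, False]
```

In a real deployment this check would sit in the agent runtime between the planner and the tool executor, so that goal drift or adversarial prompting cannot expand the agent's effective permissions.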

As these systems move into high-stakes, regulated domains, the absence of a security framework that considers their full lifecycle development, deployment, operation, and governance has become a critical gap. Organizations face increasing exposure to operational, ethical, and compliance risks if such systems operate beyond their intended boundaries or are exploited by adversaries.

This research addresses that gap by proposing a security framework specifically designed for agentic AI systems. The framework adopts a lifecycle perspective, integrating protection mechanisms from data handling and model integrity to agent control, monitoring, and governance. It aims to help organizations manage confidentiality, integrity, availability, and accountability throughout the system’s operational lifespan, providing an end-to-end standard for secure and reliable use of agentic AI in enterprise environments.
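A lifecycle-aware CIAA mapping of the kind described above can be represented as an auditable data structure. The phase names and controls below are assumptions chosen for illustration; the paper describes MAAIS informally, and this sketch only shows how such a mapping could be checked for coverage gaps in code.

```python
# Hypothetical lifecycle phases mapped to CIAA controls (illustrative only).
LIFECYCLE_CONTROLS = {
    "data_handling":   {"confidentiality": "encrypt data at rest",
                        "integrity": "validate training inputs"},
    "model_integrity": {"integrity": "sign model artifacts",
                        "accountability": "version and log models"},
    "agent_control":   {"availability": "rate-limit tool calls",
                        "accountability": "record agent decisions"},
    "governance":      {"accountability": "periodic compliance review"},
}

CIAA = ("confidentiality", "integrity", "availability", "accountability")

def uncovered(phase: str) -> list:
    """Return the CIAA properties with no assigned control for a phase."""
    return [p for p in CIAA if p not in LIFECYCLE_CONTROLS.get(phase, {})]

print(uncovered("governance"))  # → ['confidentiality', 'integrity', 'availability']
```

Auditing each phase this way gives security teams a simple, repeatable check that every lifecycle stage carries controls for all four CIAA properties before deployment.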

In 2024, the market value of agentic AI reached USD 5.1 billion. According to Capgemini, it is expected to exceed USD 47 billion, with a compound annual growth rate of over 44% [12]. This impressive growth highlights the potential of agentic AI to revolutionize industries through autonomous actions and decision-making. According to another survey, less than 1% of enterprise software included agentic AI in 2024; by 2028, nearly a third is expected to incorporate it, greatly enhancing decision-making autonomy [13].

AI has evolved significantly since its origins in the 1950s, when Alan Turing and John McCarthy laid the groundwork for machine intelligence, and it has advanced through many improvements that have shaped its capabilities and applications [14], [15]. Initially developed as rule-based expert systems, AI has reached a stage where it can make autonomous decisions, learn from new datasets, and perform complex multistep tasks with minimal or no human oversight and intervention. This section provides an overview of AI's historical development, its different types and generations, and the emergence of agentic AI, a novel advancement poised to redefine the field. Agentic AI presents new opportunities for automation and efficiency in many fields, but also raises critical security concerns that must be addressed for its secure and ethical deployment.

AI research shifted toward statistical and machine learning methods in the 1980s and 1990s, enabling computers to learn patterns from data instead of relying on a predefined set of rules. This transition marked the rise of Artificial Narrow Intelligence (ANI), in which AI models performed excellently at specific tasks such as speech recognition and image processing but remained far from broader cognitive abilities. The advancement of deep learning and neural networks in the 2010s revolutionized AI, significantly improving performance in fields such as natural language processing (NLP), autonomous systems, and robotics. Despite these advancements, however, traditional AI models still require extensive human oversight [16], [17].

AI systems are generally grouped into three categories: ANI, Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).

📸 Image Gallery

  • AgenticAI_layers.png
  • CIAA.png

