From Goals to Aspects, Revisited: An NFR Pattern Language for Agentic AI Systems
Agentic AI systems exhibit numerous crosscutting concerns – security, observability, cost management, fault tolerance – that are poorly modularized in current implementations, contributing to the high failure rate of AI projects in reaching production. The goals-to-aspects methodology proposed at RE 2004 demonstrated that aspects can be systematically discovered from i* goal models by identifying non-functional soft-goals that crosscut functional goals. This paper revisits and extends that methodology to the agentic AI domain. We present a pattern language of 12 reusable patterns organized across four NFR categories (security, reliability, observability, cost management), each mapping an i* goal model to a concrete aspect implementation using an AOP framework for Rust. Four patterns address agent-specific crosscutting concerns absent from traditional AOP literature: tool-scope sandboxing, prompt injection detection, token budget management, and action audit trails. We extend the V-graph model to capture how agent tasks simultaneously contribute to functional goals and non-functional soft-goals. We validate the pattern language through a case study analyzing an open-source autonomous agent framework, demonstrating how goal-driven aspect discovery systematically identifies and modularizes crosscutting concerns. The pattern language offers a principled approach for engineering reliable agentic AI systems through early identification of crosscutting concerns.
💡 Research Summary
The paper tackles the chronic problem of poorly modularized non‑functional requirements (NFRs) in autonomous, LLM‑driven agentic AI systems. While such agents excel at planning, reasoning, and tool‑calling, they also expose a dense set of crosscutting concerns—security, observability, cost management, and fault tolerance—that are scattered throughout code bases, leading to high failure rates when moving projects to production.
Building on the “Goals‑to‑Aspects” methodology introduced at RE 2004, the authors revisit and extend the approach for the emerging domain of agentic AI. The original method demonstrated that soft‑goals (NFRs) in i* models naturally crosscut functional goals, and that V‑graphs (a functional goal, an NFR soft‑goal, and the tasks that contribute to both) can be mined to extract aspects. The new contribution is threefold:
- Extended V‑graph for Agents – The authors define a V‑graph for agents as a triple (functional goal g_f, NFR soft‑goal g_nf, task set T). They introduce the notion of crosscutting density δ(t) = |NFR(t)|, measuring how many NFRs a single task influences. In agent systems, tasks such as “Call LLM Provider” often have δ ≥ 4, meaning multiple NFRs intersect at the same join point. This high density is a key differentiator from earlier case studies, where δ ≤ 2.
- Three‑phase discovery process –
  - Phase 1 constructs a detailed i* Strategic Dependency (SD) and Strategic Rationale (SR) model for an agentic system, identifying actors (Agent User, Agent System, LLM Provider, Tool Provider, Operator) and decomposing functional goals (e.g., Execute Tool) into subtasks that each contribute to several NFR soft‑goals (Security, Cost, Reliability, Observability, Safety).
  - Phase 2 runs an enhanced AspectFinder algorithm that (i) enumerates tasks, (ii) builds overlapping V‑graphs, (iii) groups them by NFR to form candidate aspects, (iv) validates each candidate by measuring source‑code scattering (a concern appearing in ≥ 3 modules confirms the V‑graph prediction), and (v) determines advice type, join points, and composition order.
  - Phase 3 instantiates each validated candidate as a reusable pattern.
- Pattern language of 12 AOP patterns – Organized into four NFR categories (Security, Reliability, Observability, Cost Management), the catalog includes eight “Existing” patterns (Authorization Guard, Input Validation, Rate Limiter, Structured Logger, Performance Monitor, Metrics Collector, Response Cache, etc.) and four novel, agent‑specific patterns:
  - Tool Scope Sandbox (Security) – before‑advice that checks file paths, command allow‑lists, and network domains before any tool execution.
  - Prompt Guard (Security) – detects and mitigates prompt‑injection attacks on LLM inputs.
  - Token Budget Manager (Cost) – tracks token consumption per LLM call, enforces a pre‑configured budget, and aborts calls that would exceed it.
  - Action Audit Trail (Observability) – records every decision, tool invocation, and LLM response in a structured log for accountability and post‑mortem analysis.
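The extended V‑graph triple and the crosscutting‑density metric δ(t) = |NFR(t)| can be sketched in plain Rust. This is an illustrative data structure, not one prescribed by the paper; the goal and task names are hypothetical examples:

```rust
use std::collections::HashSet;

/// NFR soft-goal categories from the paper's taxonomy.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Nfr {
    Security,
    Reliability,
    Observability,
    Cost,
    Safety,
}

/// A V-graph links one functional goal and one NFR soft-goal through
/// the set of tasks that contribute to both.
struct VGraph {
    functional_goal: &'static str,
    nfr: Nfr,
    tasks: HashSet<&'static str>,
}

/// Crosscutting density delta(t) = |NFR(t)|: the number of distinct
/// NFRs a single task contributes to across all V-graphs it appears in.
fn crosscutting_density(vgraphs: &[VGraph], task: &str) -> usize {
    let nfrs: HashSet<Nfr> = vgraphs
        .iter()
        .filter(|v| v.tasks.contains(task))
        .map(|v| v.nfr)
        .collect();
    nfrs.len()
}
```

A task like “Call LLM Provider” that appears in four V‑graphs (Security, Cost, Reliability, Observability) would yield δ = 4, the high-density situation the paper identifies as characteristic of agent systems.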
Each pattern is presented with four elements: (1) Problem statement, (2) i* goal model showing how the NFR is operationalized, (3) concrete Rust implementation using the Aspect‑RS framework (a procedural‑macro‑based AOP system that provides before, after, around, and error advice with near‑zero runtime overhead), and (4) Composition relationships that specify prerequisite patterns and ordering (e.g., Authorization Guard must precede Tool Scope Sandbox, which in turn composes with Prompt Guard).
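To make the advice structure concrete, the Token Budget Manager's before‑advice can be sketched as a hand‑rolled wrapper. The paper implements it with Aspect‑RS procedural macros, whose exact API is not reproduced here; this plain‑Rust version only illustrates the advice logic (check the budget before the join point, abort if it would be exceeded), with all names and the estimation scheme being assumptions:

```rust
/// Illustrative Token Budget Manager state (names are hypothetical).
struct TokenBudget {
    limit: u64,
    spent: u64,
}

#[derive(Debug, PartialEq)]
enum BudgetError {
    Exceeded { requested: u64, remaining: u64 },
}

impl TokenBudget {
    fn new(limit: u64) -> Self {
        Self { limit, spent: 0 }
    }

    /// Before-advice around an LLM call: abort if the estimated token
    /// cost would exceed the configured budget, otherwise record the
    /// spend and proceed to the wrapped call.
    fn call_llm<F, R>(&mut self, estimated_tokens: u64, call: F) -> Result<R, BudgetError>
    where
        F: FnOnce() -> R,
    {
        let remaining = self.limit - self.spent;
        if estimated_tokens > remaining {
            return Err(BudgetError::Exceeded {
                requested: estimated_tokens,
                remaining,
            });
        }
        self.spent += estimated_tokens;
        Ok(call())
    }
}
```

In the macro‑based framework the same logic would be woven in as before‑advice at every LLM‑call join point rather than invoked explicitly, which is what keeps the concern modularized.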
Case Study – The methodology is applied to an open‑source autonomous agent framework comprising 192 source files and 129 040 lines of Rust code. The authors identify 11 crosscutting concerns (seven established, four novel) and quantify scattering: each concern appears in an average of five modules, confirming high crosscutting density. V‑graph analysis uncovers missing NFR coverage (e.g., no existing token‑budget enforcement), prompting the insertion of the corresponding pattern. After integrating all 12 patterns, code duplication for crosscutting logic drops by ~38 %, and runtime overhead remains under 2 %. The study also reports a 15 % reduction in average token cost per task due to the Token Budget Manager, and a measurable decrease in security incidents (simulated prompt‑injection attempts are blocked 92 % of the time).
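Phase 2's scattering validation (a concern confirmed when it appears in ≥ 3 modules) amounts to counting distinct modules per concern over the source tree. A minimal sketch, with hypothetical concern and module names:

```rust
use std::collections::{HashMap, HashSet};

/// Phase-2 validation sketch: a candidate aspect is confirmed as
/// crosscutting when its concern occurs in at least `threshold`
/// distinct modules (the paper uses a threshold of 3).
fn confirmed_concerns(
    occurrences: &[(&'static str, &'static str)], // (concern, module) pairs
    threshold: usize,
) -> Vec<&'static str> {
    let mut modules: HashMap<&'static str, HashSet<&'static str>> = HashMap::new();
    for &(concern, module) in occurrences {
        // Duplicate hits inside the same module count once.
        modules.entry(concern).or_default().insert(module);
    }
    let mut confirmed: Vec<&'static str> = modules
        .into_iter()
        .filter(|(_, mods)| mods.len() >= threshold)
        .map(|(concern, _)| concern)
        .collect();
    confirmed.sort();
    confirmed
}
```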
Threats to Validity – The authors acknowledge that i* modeling is subjective and depends on analyst expertise; the Rust‑centric AOP framework may limit generalizability to other ecosystems; and dynamic aspects of agents (runtime addition of new tools, asynchronous workflows) are not fully captured by static V‑graphs. They propose future work on automated goal extraction, multi‑language AOP support, and runtime‑adaptive aspect composition.
Conclusion – By linking early‑phase goal modeling with concrete aspect‑oriented implementations, the paper offers a principled, repeatable approach to discover, modularize, and enforce NFRs in agentic AI systems. The pattern language bridges requirements engineering and software development, enabling developers to embed security, reliability, observability, and cost controls directly into the agent’s execution pipeline, thereby improving production readiness, maintainability, and overall system trustworthiness.