Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency
The proliferation of Artificial Intelligence (AI) systems exhibiting complex and seemingly agentive behaviours necessitates a critical philosophical examination of their agency, autonomy, and moral status. In this paper we undertake a systematic analysis of the differences between basic, autonomous, and moral agency in artificial systems. We argue that while current AI systems are highly sophisticated, they lack genuine agency and autonomy for three reasons: they operate within rigid boundaries of pre-programmed objectives rather than exhibiting true goal-directed behaviour within their environment; they cannot authentically shape their engagement with the world; and they lack the critical self-reflection and autonomy competencies required for full autonomy. Nonetheless, we do not rule out the possibility of future systems that could achieve a limited form of artificial moral agency without consciousness, through hybrid approaches to ethical decision-making. By appealing to the necessity of consciousness for moral patiency, we suggest that such non-conscious artificial moral agents (AMAs) would challenge the traditional assumption of a necessary connection between moral agency and moral patiency.
💡 Research Summary
This paper provides a systematic philosophical analysis of agency, autonomy, and moral patiency in the context of artificial intelligence (AI). It begins by acknowledging the proliferation of AI systems that exhibit complex, seemingly agentive behaviors, which necessitates a critical examination of these core concepts. The authors argue that while contemporary AI is highly sophisticated, it fundamentally lacks genuine agency and autonomy.
The analysis is structured around a clear conceptual hierarchy. First, the paper defines “basic agency” as the capacity for environmentally responsive, goal-directed action, distinguishing it from mere mechanical causation. Drawing on criteria like interactivity, autonomy, and adaptability (Floridi & Sanders), and perspectives from enactivism, it positions agency as a dynamic, relational phenomenon. Examples like a bacterium navigating its environment illustrate this basic form. The paper then rigorously assesses current AI systems (e.g., LLMs like ChatGPT, autonomous vehicles) against this framework. It concludes that these systems operate within rigid boundaries of pre-programmed objectives and training data. Their apparent goal-directedness is an illusion, a designed mimicry that lacks the real-time, adaptive, and self-directed engagement with the world characteristic of genuine basic agency. They are reactive, not truly interactive or adaptive in a first-person sense.
The discussion then ascends to “autonomous agency,” which requires more sophisticated capacities beyond basic environmental responsiveness. Building on theories of personal autonomy, the authors outline key requirements: self-directed goal setting (involving critical reflection and freedom from oppressive socialization), genuine choice (access to an adequate range of valuable alternatives, per Raz), and the capacity for critical reflection on and modification of one’s guiding values and principles (involving self-consciousness and reflective endorsement, per Watson). Current AI systems are shown to fall short on all these counts. They may exhibit a form of “basic machine autonomy” (executing complex algorithms independently of real-time human input), but they lack the authenticity, competency, and self-constitutive attitudes required for full autonomous agency. Their goals and decision-making frameworks are externally given and static.
The paper’s most provocative contribution lies in its exploration of future “Artificial Moral Agents” (AMAs). The authors do not rule out the possibility that future systems could achieve a limited form of artificial moral agency. They suggest this might be possible even without consciousness, perhaps through hybrid rule-consequentialist architectures for ethical decision-making. This leads to the core philosophical challenge: if such non-conscious AMAs could qualify as moral agents (i.e., entities capable of morally relevant actions and bearing responsibility), what does this imply for their moral patiency (i.e., their status as entities deserving of moral consideration)? The paper appeals to the common philosophical view that consciousness is a necessary condition for moral patiency: to be a patient is to be capable of suffering or experiencing well-being. Consequently, a non-conscious AMA could, in theory, be a moral agent without being a moral patient. This presents a scenario in which the traditionally assumed linkage between moral agency and moral patiency is decoupled. The AI challenge, therefore, is to conceptualize a being that can be held accountable for its actions (an agent) yet may not be owed direct moral duties itself (not a patient). This insight has profound implications for AI ethics, law, and policy, forcing a re-evaluation of foundational concepts as AI systems approach greater behavioral complexity.
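The paper does not specify how a hybrid rule-consequentialist architecture would be implemented, but a minimal sketch can make the idea concrete. Assuming a two-layer design (all rule names, action features, and utility values below are hypothetical illustrations, not the authors’ proposal), deontological rules act as hard constraints that filter candidate actions, and a consequentialist scoring function ranks whatever survives:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Action:
    """A candidate action with features the rule and utility layers inspect."""
    name: str
    features: dict = field(default_factory=dict)

# --- Rule layer: hard deontological constraints (hypothetical examples) ---
def violates_no_harm(action: Action) -> bool:
    # Veto any action whose predicted harm is non-zero.
    return action.features.get("expected_harm", 0.0) > 0.0

def violates_honesty(action: Action) -> bool:
    # Veto any action flagged as deceptive.
    return bool(action.features.get("deceptive", False))

RULES: list[Callable[[Action], bool]] = [violates_no_harm, violates_honesty]

# --- Consequentialist layer: expected-utility ranking of permitted actions ---
def expected_utility(action: Action) -> float:
    return action.features.get("expected_benefit", 0.0)

def choose_action(candidates: list[Action]) -> Optional[Action]:
    """Hybrid procedure: rules filter, consequences rank."""
    permitted = [a for a in candidates if not any(rule(a) for rule in RULES)]
    if not permitted:
        return None  # no rule-compliant option; a deployed system might defer to human oversight
    return max(permitted, key=expected_utility)

# Usage: the rule layer vetoes the deceptive high-benefit option,
# so the consequentialist layer never gets to rank it.
options = [
    Action("nudge_user", {"deceptive": True, "expected_benefit": 5.0}),
    Action("inform_user", {"expected_benefit": 3.0}),
]
print(choose_action(options).name)  # -> inform_user
```

The division of labour mirrors the hybrid label: the rule layer supplies the accountability structure (certain actions are impermissible regardless of payoff), while the consequentialist layer handles trade-offs among the permitted options. Nothing in this sketch requires consciousness, which is precisely what makes the paper’s agency-without-patiency scenario conceivable.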