Challenges for an Ontology of Artificial Intelligence
Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors that hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be assimilated and regarded as “normal,” and (3) the tendency of human beings to anthropomorphize. This list is not intended as exhaustive, nor is it seen to preclude entirely a clear ontology; however, these challenges are a necessary set of topics for consideration. Each of these factors presents a ‘moving target’ for discussion, making it difficult for both technical specialists and non-practitioners of AI systems development (e.g., philosophers and theologians) to speak meaningfully, given that the corpus of AI structures and capabilities evolves at a rapid pace. Finally, we present avenues for moving forward, including opportunities for collaborative synthesis among scholars in philosophy and science.
💡 Research Summary
The paper “Challenges for an Ontology of Artificial Intelligence” by Scott H. Hawley argues that establishing a clear ontological framework for AI is essential given the technology’s rapid integration into society, but that this task is hampered by three inter‑related challenges.
First, the definition of AI is fluid and contested. The term has been used to describe everything from the original Dartmouth vision of machines that can think like humans, to narrow, task‑specific systems, to the current wave of machine‑learning models that learn from data. The author distinguishes classic AI (GOFAI, “good old‑fashioned AI”) – rule‑based expert systems and symbolic reasoning – from modern statistical learning approaches, yet points out that both ultimately consist of an algorithm coupled with a training dataset and a purpose, a contrast sketched in the example below. Because “intelligence” is interpreted in many ways (from simple adaptive behavior to full consciousness), any static definition quickly becomes outdated.
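To make that claim concrete, here is an illustrative sketch, not drawn from the paper: the same toy task solved once in the GOFAI style (hand‑written symbolic rules) and once in the statistical‑learning style (parameters estimated from a small training set). The task, the training sentences, and all function names are invented for illustration.

```python
# Illustrative sketch, not from the paper: the same toy task ("is this
# message a greeting?") solved in the GOFAI style and in the
# statistical-learning style. In both cases there is an algorithm,
# a body of data, and a purpose.
from collections import Counter

# --- GOFAI style: the "data" is knowledge encoded by the programmer ---
GREETING_WORDS = {"hello", "hi", "hey", "greetings"}

def is_greeting_rules(text: str) -> bool:
    """Symbolic rule: a message is a greeting if it contains a keyword."""
    return any(word in GREETING_WORDS for word in text.lower().split())

# --- Statistical style: the "data" is a labeled training set ---
train = [
    ("hello there", True), ("hi how are you", True),
    ("the meeting is at noon", False), ("please send the report", False),
]

def fit(examples):
    """Estimate per-class word counts (a bare-bones Naive Bayes)."""
    counts = {True: Counter(), False: Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def is_greeting_learned(text: str, counts) -> bool:
    """Score each class by smoothed word frequency; pick the larger."""
    def score(label):
        total = sum(counts[label].values())
        s = 1.0
        for word in text.lower().split():
            s *= (counts[label][word] + 1) / (total + 1)
        return s
    return score(True) > score(False)

counts = fit(train)
print(is_greeting_rules("Hey team"))                   # True, by rule
print(is_greeting_learned("hello everyone", counts))   # True, by statistics
```

The two functions differ only in where their knowledge comes from: one has it written in by hand, the other extracts it from examples. This is the sense in which both styles reduce to algorithm, data, and purpose.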
Second, the “new normal” phenomenon shows how technologies once labeled as AI become ordinary tools once they achieve widespread adoption. Speech‑to‑text, facial recognition, and voice assistants were once hailed as AI breakthroughs, but today most users see them as mere features of smartphones or smart speakers. This normalization erodes the semantic boundary of the term “AI,” making it a moving target that depends on the current state of technology rather than on intrinsic properties of the system.
Third, human beings tend to anthropomorphize AI systems. The author illustrates this with a personal anecdote about a recurrent neural network that learned binary addition; the system evoked a sense of agency and pride despite being a deterministic function. Such emotional projection fuels both hype and fear, complicating ethical and legal discussions that hinge on whether a system should be treated as an agent.
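The anecdote can be made concrete. The sketch below is not a reconstruction of the author’s network; it is a minimal, invented illustration of the deterministic recurrence that binary addition amounts to, of the kind a small recurrent network can learn: the “hidden state” is just the carry bit, updated at each step.

```python
# Illustrative sketch, not the author's original experiment: binary
# addition expressed as a recurrence. At each step the hidden state
# carries exactly one bit of information (the carry), yet watching a
# trained network compute the same function can still evoke a sense
# of agency.

def rnn_binary_add(a_bits, b_bits):
    """Add two binary numbers given as lists of bits, least-significant
    bit first. The 'hidden state' is the carry, updated recurrently."""
    carry = 0                        # initial hidden state
    out = []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry        # combine inputs with hidden state
        out.append(total % 2)        # emit the output bit
        carry = total // 2           # update the hidden state
    out.append(carry)                # final carry becomes the top bit
    return out

# 6 + 3 = 9, with bits written least-significant first
print(rnn_binary_add([0, 1, 1, 0], [1, 1, 0, 0]))  # [1, 0, 0, 1, 0] == 9
```

A trained network arrives at weights implementing essentially this recurrence; the sense of agency it evokes attaches to a function no less deterministic than the loop above.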
The paper concludes that these challenges prevent a fixed, universal ontology of AI. Instead, the author proposes a dynamic, collaborative approach: philosophers, scientists, and engineers should jointly refine AI definitions, standardize terminology to narrow the gap between public and expert understanding, and cultivate meta‑awareness of anthropomorphic tendencies. By doing so, the community can develop a more robust ontological framework that supports responsible policy, regulation, and societal integration of AI technologies.