Computer Vision in Tactical AI Art

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

AI art comprises a spectrum of creative endeavors that emerge from and respond to the development of artificial intelligence (AI), the expansion of AI-powered economies, and their influence on culture and society. Within this repertoire, the relationship between the cognitive value of human vision and the wide application range of computer vision (CV) technologies opens a sizeable space for exploring the problematic sociopolitical aspects of automated inference and decision-making in modern AI. In this paper, I examine the art practices critically engaged with the notions and protocols of CV. After identifying and contextualizing the CV-related tactical AI art, I discuss the features of exemplar artworks in four interrelated subject areas. Their topical imbrications, common critical points, and shared pitfalls plot a wider landscape of tactical AI art, allowing me to detect factors that affect its poetic cogency, social responsibility, and political impact, some of which exist in the theoretical premises of digital art activism. Along these lines, I outline the routes for addressing the challenges and advancing the field.


💡 Research Summary

The paper surveys artistic practices that deliberately engage with computer‑vision (CV) technologies as a form of tactical artificial‑intelligence (AI) art. After a brief historical overview tracing the lineage of surveillance art from the 1970s to the present, the author positions CV as the contemporary visual infrastructure underpinning pervasive monitoring, data harvesting, behavioural prediction and micro‑targeting. Within this context, the author identifies a corpus of works produced over the last two decades and organises them into four interrelated thematic clusters: (1) sociotechnical issues, (2) control and conditioning, (3) biometric classification, and (4) ethical and epistemic limits.

In the sociotechnical cluster, works such as Christian Moeller’s Cheese (2003) expose how emotion‑recognition systems translate facial expressions into “happiness” scores, thereby normalising affective conformity and turning personal affect into a surveilled commodity. The analysis highlights the way such pieces dramatise the hidden feedback loops between algorithmic affective scoring and social pressure.
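The feedback loop described above can be caricatured in a few lines of Python. Everything here is a stand‑in invented for illustration: the threshold "classifier" and the numbers are not the system used in Cheese, only a sketch of the structural loop in which a displayed score pressures the participant to conform.

```python
# Toy simulation of an affective feedback loop in the spirit of
# Moeller's "Cheese": the system scores a participant's smile, and
# a score below threshold pressures the participant to smile harder.
# The scoring function and all numbers are invented for illustration.

def happiness_score(smile_intensity: float) -> float:
    """Clamp a detected smile intensity into a [0, 1] 'happiness' score."""
    return max(0.0, min(1.0, smile_intensity))

def feedback_loop(intensity: float, threshold: float = 0.8,
                  steps: int = 10, correction: float = 0.1) -> list[float]:
    """Participant nudges their expression whenever the score falls short."""
    history = []
    for _ in range(steps):
        score = happiness_score(intensity)
        history.append(score)
        if score < threshold:        # the machine signals disapproval ...
            intensity += correction  # ... and the participant complies
    return history

scores = feedback_loop(0.3)  # scores climb until the threshold is met
```

The point of the sketch is structural: once the score is displayed back to the person being scored, the metric itself becomes the source of social pressure, which is precisely the hidden loop such artworks dramatise.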

The control‑and‑conditioning cluster examines installations that turn the camera into an active “eye” that watches and reacts to viewers. Golan Levin and Greg Baltus’s Opto‑Isolator (2007) and Seiko Mikami’s Desire of Codes (2010) embody the gaze‑response loop: the machine “sees” the audience, evaluates their gaze, and alters its behavior accordingly. By anthropomorphising vision, these works foreground the power asymmetry embedded in surveillance‑driven visual media and invite participants to experience the discomfort of being objectified by a machine.

The biometric classification cluster focuses on facial, iris and motion‑tracking systems that encode race, gender and class biases. Artists deliberately amplify dataset skew or provoke misrecognition to reveal that CV is not a neutral technical artifact but a sociopolitical instrument that reproduces existing hierarchies. The paper argues that exposing these biases is essential for a public understanding of algorithmic injustice.
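The dataset‑skew critique can be made concrete with a deliberately naive toy model. The error function and every number below are invented for illustration; they are not drawn from the paper or from any real benchmark, and real bias dynamics are far more complex.

```python
# Deliberately naive toy model of dataset skew: misrecognition rises
# as a demographic group's share of the training data shrinks.
# The inverse-proportionality assumption and all numbers are invented
# for illustration only.

def error_rate(training_share: float, base_error: float = 0.02) -> float:
    """Toy model: error inversely proportional to training-data share."""
    return base_error / max(training_share, 1e-6)

majority_error = error_rate(0.90)  # well-represented group
minority_error = error_rate(0.05)  # under-represented group
```

Even this crude model captures the asymmetry the artists amplify: the system is not uniformly unreliable, it is reliably worse for whoever the dataset under‑represents.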

The ethical‑and‑epistemic limits cluster critiques the underlying assumption that “seeing is knowing.” By juxtaposing humor, irony and provocation with CV pipelines, artists destabilise the deterministic narrative that visual data automatically confer authority. The author notes that many works employ subtle micro‑politics—small, disruptive interventions that educate rather than mobilise—yet they often fall short of delivering concrete policy or social change.

Across all four clusters, the author extracts three evaluative dimensions for tactical AI art: poetic cogency (the capacity to fuse technical sophistication with affective storytelling), social responsibility (the willingness to make algorithmic opacity visible and to challenge normative uses of vision), and political impact (the ability to trigger behavioural shifts or influence discourse). While many artworks achieve high poetic and critical value, the paper identifies systematic weaknesses: a lack of scalable activist infrastructure, limited engagement with legal or policy frameworks, and insufficient mechanisms for translating artistic critique into actionable reforms.

To address these gaps, the author proposes three forward‑looking routes: (1) building bridges between CV‑based art and governance structures through interdisciplinary collaborations; (2) promoting open‑source tools and transparent datasets to democratise access and enable reproducible critique; and (3) reconceptualising the theoretical foundations of digital art activism to incorporate robust ethical and epistemic scaffolding.

In sum, the article positions computer‑vision‑centric tactical AI art as a fertile site for interrogating the intertwined technical, social and political dimensions of contemporary AI. It argues that, when coupled with concrete institutional engagement, such artistic interventions can enrich public debate, sharpen critical awareness of surveillance economies, and ultimately contribute to more accountable AI systems.

