Technosocial risks of ideal emotion recognition technologies: A defense of the (social) value of emotional expressions
The development of AI systems that I call ideal emotion recognition technologies (ERTs) is often defended on the assumption that social life would benefit from increased affective transparency. This paper challenges that assumption by examining the technosocial risks posed by ideal ERTs, understood as multimodal systems capable of reliably inferring inner affective states in real time. Drawing on philosophical accounts of emotional expression and social practice, as well as empirical work in affective science and social psychology, I argue that the appeal of such systems rests on a misunderstanding of the social functions of emotional expression. Emotional expressions function not only as read-outs of inner states but also as tools for coordinating action, enabling moral repair, sustaining interpersonal trust, and supporting collective norms. These functions depend on a background of partial opacity and epistemic friction. When deployed in socially authoritative or evaluative contexts, ideal ERTs threaten this expressive space by collapsing epistemic friction, displacing relational meaning with technology-mediated affective profiles, and narrowing the space for aspirational and role-sensitive expressions. The result is a drift towards affective determinism and ambient forms of affective auditing, both of which undermine social cohesion and individual agency. Although it is intuitive to think that increasing accuracy would legitimise such systems, I argue that accuracy does not straightforwardly justify the deployment of ERTs and may, in some contexts, provide a reason for regulatory restraint. I conclude by defending a function-first regulatory approach that treats expressive discretion and intentional emotional expression as constitutive of certain social goods, and that accordingly seeks to protect these goods from excessive affective legibility.
Research Summary
The paper examines the technosocial risks posed by future “ideal” emotion recognition technologies (ERTs) that can reliably infer a person’s inner affective state in real time. While current debates focus on bias, technical limitations, and scientific validity, the author argues that once accuracy reaches a level at which these systems are trusted, a different set of harms emerges: the erosion of the social functions of intentional emotional expression. Drawing on the EASI model (Emotions as Social Information), affective pragmatics, and philosophical work on speech-act-like emotional moves, the author shows that emotions are not mere read-outs of inner states. They serve to coordinate action, enable moral repair (apology, forgiveness, reassurance), sustain interpersonal trust, and uphold collective norms. These functions rely on partial opacity and “epistemic friction”: observers interpret expressions defeasibly, integrating contextual knowledge and remaining open to correction.
Ideal ERTs, defined as multimodal AI systems that achieve high accuracy across cultures and operate in real time, would insert a diagnostic layer between emitter and perceiver. Instead of forming their own provisional inferences, human observers would receive system-generated labels that claim to reveal the “true” affective content of an expression. This shift has two major consequences. First, it collapses epistemic friction, replacing the negotiable, context-sensitive interpretation of expressions with definitive classifications; this loss of friction weakens the expressive tools that allow people to modulate emotions for politeness, conflict avoidance, or moral signalling. Second, pervasive affective profiling in authoritative or evaluative settings (workplaces, schools, law enforcement) creates ambient affective auditing. Continuous monitoring can lead to affective determinism, undermining individual agency, privacy, and the flexibility of social norms.
Crucially, the paper argues that higher accuracy does not increase the moral legitimacy of deploying ERTs; if anything, it strengthens the case for regulation. The author proposes a “function-first” regulatory framework that treats intentional emotional expression as constitutive of important social goods. Suggested measures include: (1) statutory protection of expressive discretion, (2) safeguards that preserve a degree of affective opacity (e.g., limits on data collection and mandatory anonymisation of affective profiles), and (3) strict bans or heightened transparency requirements in high-risk domains such as hiring, policing, and education.
In sum, the work reframes the debate from a focus on technical performance to one that foregrounds the social-epistemic, moral, and political goods generated by emotional expression. It warns that ideal ERTs would threaten core features of human social interaction precisely because of, not despite, their accuracy, and calls for proactive policy interventions to protect expressive freedom and social cohesion.