Disambiguating Anthropomorphism and Anthropomimesis in Human-Robot Interaction


In this preliminary work, we offer an initial disambiguation of the theoretical concepts anthropomorphism and anthropomimesis in Human-Robot Interaction (HRI) and social robotics. We define anthropomorphism as users perceiving human-like qualities in robots, and anthropomimesis as robot developers designing human-like features into robots. This contribution clarifies and explores these concepts for future HRI scholarship, particularly with respect to the party responsible for the human-like qualities: the robot perceiver for anthropomorphism and the robot designer for anthropomimesis. We offer this disambiguation so that researchers can build on it in future robot design and evaluation.


💡 Research Summary

This paper addresses a persistent source of conceptual confusion in Human‑Robot Interaction (HRI) and social robotics: the conflation of “anthropomorphism” and “anthropomimesis.” The authors propose a clear, responsibility‑based distinction between the two terms and explore the theoretical, methodological, and practical implications of this separation.

Motivation and Goal
Previous HRI literature shows that anthropomorphism has been defined in at least seven different ways across 57 studies (Damholdt et al., 2023), while the term anthropomimesis is rarely used in the field despite its emergence in philosophical discussions of AI (Shevlin, 2025). This lack of terminological clarity hampers researchers’ ability to pinpoint whether a robot’s perceived “human‑likeness” stems from design choices or from users’ attributions, which in turn obscures guidance for designers, educators, and policymakers. The paper’s primary aim is therefore to disambiguate the two concepts by focusing on the party responsible for the human‑like qualities.

Core Definitions

  • Anthropomorphism is defined as the cognitive process by which a user (the perceiver) attributes human‑like characteristics—such as form, intentions, emotions, or social capacities—to a robot. This aligns with the “attribution‑focused” definition of Epley et al. (2007) and with earlier formulations by Fink (2012) and Złotowski et al. (2015). The responsible party is explicitly the user.
  • Anthropomimesis is defined as the intentional (or sometimes unintentional) design activity by developers that embeds human‑like features into a robot. The authors break this down into three dimensions: aesthetic (physical appearance), behavioral (social and affective interaction patterns), and substantive (biological‑structure imitation such as joint configurations). The responsible party is the designer or developer.
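
A minimal sketch of how this responsibility split could be represented is shown below; the class and field names are illustrative assumptions for this summary, not constructs taken from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class MimeticDimension(Enum):
    """The three anthropomimesis dimensions summarized above."""
    AESTHETIC = "aesthetic"      # physical appearance
    BEHAVIORAL = "behavioral"    # social and affective interaction patterns
    SUBSTANTIVE = "substantive"  # imitation of biological structure (e.g., joints)


@dataclass
class DesignCue:
    """A human-like feature embedded by the developer (anthropomimesis)."""
    name: str                    # e.g., "eyelashes", "turn-taking gaze"
    dimension: MimeticDimension
    intentional: bool = True     # the paper notes embedding can also be unintentional


@dataclass
class RobotRecord:
    """Keeps designer-side cues separate from perceiver-side attributions."""
    model: str
    design_cues: List[DesignCue] = field(default_factory=list)  # responsibility: designer
    user_attribution: Optional[float] = None                    # responsibility: user (e.g., a survey score)
```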

Theoretical Background
The paper surveys the literature on anthropomorphism, noting the heterogeneity of definitions, and introduces anthropomimesis as a complementary construct drawn from AI philosophy. Table 1 succinctly contrasts the two concepts across responsible party, mechanism, and key theoretical sources (e.g., Fink, 2012; Shevlin, 2025; Holland et al., 2006).

Interaction Between the Concepts
The authors argue that anthropomimesis generally facilitates anthropomorphism, but the relationship is not deterministic. Over‑designed anthropomorphic cues can produce a “fake” or uncanny impression, potentially reducing perceived anthropomorphism. Conversely, users can anthropomorphize robots that were not deliberately anthropomimetic (e.g., simple mechanical devices). The classic “uncanny valley” (Mori et al., 2012) is employed as a heuristic: “robust” anthropomimetic robots that overcome the valley may be perceived as highly human‑like, whereas “weak” anthropomimetic robots remain on the valley’s shallow side.

Measurement Challenges
Current instruments such as the Godspeed Questionnaire (Bartneck et al., 2009) and the ABOT human-likeness database conflate the two constructs. For instance, ABOT records physical human-like features (e.g., eyelashes, nose) but then uses them to predict users' anthropomorphic judgments, blurring the distinction. The authors propose a dual-track measurement approach: (1) objective coding of design-level anthropomimesis (e.g., presence/absence of specific aesthetic or behavioral cues) and (2) subjective assessment of anthropomorphism via validated user-report scales. They also highlight the need for new metrics that capture non-physical dimensions such as personality, affective expressivity, and cultural cue interpretation.
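
A minimal sketch of such a dual-track approach is given below, assuming a hypothetical cue checklist and 1-5 rating items; neither is taken from the paper, ABOT, or the Godspeed instrument.

```python
from statistics import mean

# Track 1: objective coding of design-level anthropomimesis.
# This checklist is a hypothetical stand-in for a validated coding scheme.
AESTHETIC_CUE_CHECKLIST = {"face", "eyes", "eyelashes", "nose", "arms", "skin"}

def anthropomimesis_index(observed_cues: set) -> float:
    """Fraction of checklist cues present in the robot's design (0.0-1.0)."""
    return len(observed_cues & AESTHETIC_CUE_CHECKLIST) / len(AESTHETIC_CUE_CHECKLIST)

# Track 2: subjective assessment of anthropomorphism via user report.
# Ratings are assumed to be 1-5 semantic-differential items; a real study
# would use a validated scale such as the Godspeed anthropomorphism subscale.
def anthropomorphism_score(ratings: list) -> float:
    """Mean of a participant's 1-5 human-likeness ratings."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on a 1-5 scale")
    return mean(ratings)

# Keeping the two tracks as separate measures lets an analysis relate the
# design-level index to users' attributions instead of conflating them.
design_index = anthropomimesis_index({"face", "eyes", "arms"})   # -> 0.5
user_score = anthropomorphism_score([4, 3, 5, 4, 4])             # -> 4.0
```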

Limitations and Future Work
The paper acknowledges that its definitions are preliminary and that real‑world cases may involve mixed responsibility (e.g., designers unintentionally embedding cues that vulnerable users over‑attribute). Cultural variability, age‑related differences, and the role of user vulnerability (children, cognitively impaired) are identified as areas requiring deeper investigation. Future research directions include: (a) constructing a detailed taxonomy of anthropomorphism and anthropomimesis; (b) developing cross‑cultural experimental protocols to validate the taxonomy; (c) designing measurement tools that separate design‑level cues from perception‑level attributions; and (d) translating these insights into policy recommendations concerning robot ethics, liability, and user protection.

Practical Implications
For robot designers, the distinction offers a roadmap for selecting the type and intensity of anthropomorphic cues to achieve desired user responses while avoiding uncanny or deceptive effects. For educators and user‑experience professionals, it underscores the importance of informing users about the intentional design of human‑like features, especially when interacting with vulnerable populations. Policymakers can use the clarified responsibility framework to assign liability more precisely—whether a robot’s perceived human‑likeness is a design outcome or a user‑generated attribution—thereby shaping regulation, standards, and safety guidelines.

Conclusion
By grounding the differentiation of anthropomorphism and anthropomimesis in the responsible party (user vs. designer) and explicating their mechanisms, this paper provides a conceptual scaffold that can unify future HRI research, guide robot design practices, and inform ethical and regulatory discourse. The authors’ call for refined measurement, taxonomy development, and cross‑cultural validation sets a clear agenda for advancing the field beyond terminological ambiguity toward a more rigorous understanding of human‑robot social dynamics.

