Impact of Robot Facial-Audio Expressions on Human Robot Trust Dynamics and Trust Repair
Despite recent advances in robotics and human-robot collaboration in the AEC industry, trust has mostly been treated as a static factor, with little guidance on how it changes across events during collaboration. This paper investigates how a robot’s task performance and its expressive responses after outcomes shape the dynamics of human trust over time. To this end, we designed a controlled within-subjects study with two construction-inspired tasks, Material Delivery (physical assistance) and Information Gathering (perceptual assistance), and measured trust repeatedly (four times per task) using the 14-item Trust Perception Scale for HRI plus a redelegation choice. The robot produced two multimodal expressions: a “glad” display with a brief confirmation after success, and a “sad” display with an apology and a request for a second chance after failure. The study was conducted in a lab environment with 30 participants and a quadruped platform, and we evaluated trust dynamics and repair across both tasks. Results show that robot success reliably increases trust, failure causes sharp drops, and apology-based expressions partially restore trust (44% recovery in Material Delivery; 38% in Information Gathering). Item-level analysis indicates that recovered trust was driven mostly by interaction and communication factors, with competence recovering partially and autonomy aspects changing least. Additionally, age group and prior attitudes moderated trust dynamics: younger participants showed larger but shorter-lived changes, mid-20s participants exhibited the most durable repair, and older participants showed the most conservative dynamics. This work provides a foundation for future efforts that adapt repair strategies to task demands and user profiles to support safe, productive adoption of robots on construction sites.
💡 Research Summary
This paper, titled “Impact of Robot Facial-Audio Expressions on Human Robot Trust Dynamics and Trust Repair,” presents a comprehensive investigation into how trust between humans and robots evolves dynamically during collaborative tasks, specifically within a construction context. Moving beyond the traditional view of trust as a static attribute, the study examines the role of a robot’s task performance outcomes (success/failure) and its subsequent multimodal expressive behaviors in shaping and repairing human trust over time.
To address this, the authors designed a controlled within-subjects laboratory experiment with 30 participants. The study featured two construction-inspired tasks: “Material Delivery,” representing physical assistance where a quadruped robot transported an object, and “Information Gathering,” representing perceptual assistance where the robot captured and relayed site images. The key manipulation was the robot’s expressive response after each task attempt. Following a success, the robot displayed a “glad” expression (happy face animation and a brief confirming voice). Following a failure, it displayed a “sad” repair expression (sad face animation, a spoken apology, and a request for a second chance). Trust was measured repeatedly—four times per task—using the validated 14-item Trust Perception Scale for HRI (TPS-HRI) and a behavioral redelegation choice.
The results revealed clear and significant trust dynamics. Robot success reliably increased trust levels, while failure caused sharp, immediate declines in trust. Crucially, the apology-based multimodal expression following a failure led to a partial but statistically significant recovery of trust. The magnitude of this repair effect was task-dependent, with approximately 44% trust recovery in the Material Delivery task and 38% in the Information Gathering task. An item-level analysis of the trust scale provided deeper insight: the recovered trust was primarily driven by facets related to “Interaction & Communication” (e.g., the robot’s perceived communicative ability). Trust in the robot’s “Competence” showed partial recovery, while trust aspects related to its “Autonomy” changed the least, indicating these are more resistant to repair through social expressions alone.
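The paper does not give the exact formula behind the 44% and 38% recovery figures, but a common convention in trust-repair studies is to express the post-repair gain as a fraction of the failure-induced drop. The sketch below illustrates that arithmetic with hypothetical TPS-HRI means (the function name and the numeric values are illustrative assumptions, not data from the study):

```python
def recovery_pct(pre_failure, post_failure, post_repair):
    """Share of the failure-induced trust drop regained after repair, in percent.

    pre_failure / post_failure / post_repair are mean trust scores
    (e.g., TPS-HRI on a 0-100 scale) at the three measurement points.
    """
    drop = pre_failure - post_failure        # trust lost to the failure
    regained = post_repair - post_failure    # trust restored by the repair expression
    return 100.0 * regained / drop

# Hypothetical scores chosen so the result matches the reported 44% figure
print(round(recovery_pct(pre_failure=80.0, post_failure=50.0, post_repair=63.2), 1))
```

Under this convention, a value of 100% would mean trust returned fully to its pre-failure level, while the reported 44% and 38% correspond to partial recovery.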
Furthermore, the study identified important moderating factors. Participant age groups significantly influenced trust dynamics. Younger participants (18-22) exhibited larger trust swings in response to events, but these changes were more transient. Participants in their mid-20s (23-27) demonstrated the most substantial and durable trust repair. Older participants (28+) showed the most conservative and stable trust patterns, with less pronounced reactions to both failures and repairs. Pre-existing attitudes toward robots, measured before the experiment, also moderated initial trust levels and the sensitivity to trust violations.
In conclusion, this work makes a foundational contribution by empirically demonstrating that trust in human-robot collaboration is highly dynamic and can be actively influenced—not just by robotic performance, but also by socially intelligent expressive behaviors after failures. The findings underscore the necessity of designing robots capable of trust calibration and repair, suggesting that future systems should incorporate adaptive communication strategies tailored to both the nature of the task and the profile of the human user to foster safe, effective, and accepted robotic integration in complex environments like construction sites.