A study on the effects of mixed explicit and implicit communications in human-artificial-agent interactions

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Communication between humans and artificial agents is essential for their interaction. This is often inspired by human communication, which uses gestures, facial expressions, gaze direction, and other explicit and implicit means. This work presents interaction experiments where humans and artificial agents interact through explicit and implicit communication to evaluate the effect of mixed explicit-implicit communication against purely explicit communication, and the impact of task difficulty on this evaluation. Results obtained using Bayesian parameter estimation show that task execution time did not significantly change when mixed explicit and implicit communications were used in either of our experiments, which varied in the type of artificial agent used (virtual agent or humanoid robot) and in task difficulty. The number of errors was affected by the communication only when the human was executing a more difficult task, and an impact on the perceived efficiency of the interaction was observed only in the interaction with the robot, for both easy and difficult tasks. In contrast, acceptance, sociability, and transparency of the artificial agent increased when using mixed communication modalities in both experiments and at both task difficulty levels. This suggests that task-related measures, such as time, number of errors, and perceived efficiency of the interaction, as well as the impact of the communication on them, are more sensitive to the type of task and the difficulty level, whereas the combination of explicit and implicit communications more consistently improves human perceptions of artificial agents.


💡 Research Summary

This paper investigates how mixing explicit and implicit communication modalities influences human‑artificial‑agent interactions, comparing a purely explicit condition (EX) with a combined explicit‑implicit condition (EXIM). Two agent types were examined—a virtual agent displayed on a screen and a humanoid robot—and each was tested under easy and difficult task conditions, yielding a total of four experimental settings. Participants performed each task twice (within‑subjects design), once under EX and once under EXIM, allowing direct comparison while controlling for individual differences.

The authors first clarify terminology: explicit communication conveys information deliberately and is directly interpretable (e.g., spoken commands, manual gestures), whereas implicit communication embeds information in behavior and requires contextual inference (e.g., gaze direction, facial expressions, eyebrow movements). Table 1 in the paper summarizes these definitions and the specific modalities employed in the study. The experimental platform was built on ROS, integrating sensors for gaze tracking, facial expression detection, and marker‑based location inference. Humans supplied explicit cues via voice and mouse/keyboard entries and implicit cues via their physical location and body posture; the virtual agent used voice, facial animation, and gaze, while the robot added eyebrow movements as implicit signals.

Six hypotheses guided the work: H1 and H2 predicted that EXIM would reduce task execution time and error count, respectively; H3‑H5 anticipated higher acceptance, sociability, and transparency for EXIM; and H6 expected greater perceived interaction efficiency. Bayesian parameter estimation was applied to the collected data, providing posterior distributions and 95% credible intervals for each metric, which allowed the authors to assess the probability that a given hypothesis held true.
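The flavor of this analysis can be illustrated with a minimal grid-approximation sketch, assuming paired (within-subjects) time differences, a flat prior, and a normal likelihood with a fixed noise scale. The paper's exact model is not reproduced here, and the data below are made up:

```python
import math

def posterior_mean_difference(diffs, sigma=2.0, lo=-5.0, hi=5.0, n=2001):
    """Grid approximation of the posterior over the mean paired difference
    mu_d (EX minus EXIM), flat prior, normal likelihood with an assumed
    fixed noise scale sigma."""
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    # Log-likelihood of the data at each candidate mu_d.
    logpost = [-sum((d - mu) ** 2 for d in diffs) / (2 * sigma ** 2) for mu in grid]
    m = max(logpost)
    w = [math.exp(lp - m) for lp in logpost]   # unnormalized posterior
    z = sum(w)
    return grid, [wi / z for wi in w]          # normalized over the grid

def credible_interval(grid, post, mass=0.95):
    """Central credible interval from the discrete posterior."""
    tail = (1 - mass) / 2
    acc, lo_q, hi_q = 0.0, None, None
    for mu, p in zip(grid, post):
        acc += p
        if lo_q is None and acc >= tail:
            lo_q = mu
        if hi_q is None and acc >= 1 - tail:
            hi_q = mu
    return lo_q, hi_q

# Hypothetical per-participant time differences (seconds), EX minus EXIM.
diffs = [1.2, -0.4, 0.8, 2.1, 0.3, -0.1, 1.5, 0.9]
grid, post = posterior_mean_difference(diffs)
lo95, hi95 = credible_interval(grid, post)
p_faster = sum(p for mu, p in zip(grid, post) if mu > 0)  # P(EXIM faster)
print(f"95% CI for mu_d: [{lo95:.2f}, {hi95:.2f}], P(mu_d > 0) = {p_faster:.2f}")
```

Because the credible interval here spans zero, such data would not support H1, mirroring the kind of conclusion the authors draw; a posterior mass like `P(mu_d > 0)` is what lets a Bayesian analysis state the probability that a hypothesis holds rather than a binary significance verdict.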

Results showed that task execution time did not differ significantly between EX and EXIM across any condition, contradicting H1. Error counts were lower for EXIM only in the difficult task, supporting H2 selectively. Perceived interaction efficiency improved with EXIM only when interacting with the robot, not the virtual agent, partially confirming H6. In contrast, acceptance, sociability, and transparency were consistently higher for EXIM across both agents and both difficulty levels, confirming H3‑H5. The authors interpret these findings as evidence that while mixed communication enriches users’ subjective impressions of an agent, objective performance gains are contingent on task complexity and the physical embodiment of the agent.

The paper contributes three main points: (1) empirical evidence that combining explicit and implicit cues enhances human perceptions of artificial agents (acceptance, sociability, transparency); (2) a Bayesian analytical framework that quantifies uncertainty in HRI metrics and can be reused in future studies; and (3) a nuanced view that task‑related outcomes (time, errors, efficiency) are more sensitive to task difficulty and agent embodiment than to communication modality alone.

Limitations include a modest sample size, lack of cultural or demographic diversity, and the short‑term nature of the interactions. The authors suggest future work should involve larger, more diverse participant pools, longer‑duration collaborations, and systematic variation of implicit cue strength to further elucidate optimal multimodal communication strategies in human‑robot interaction.

