A Collaborative Crowdsourcing Method for Designing External Interfaces for Autonomous Vehicles

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

Participatory design effectively engages stakeholders in technology development but is often constrained by small, resource-intensive activities. This study explores a scalable complementary method that enables broad pattern identification in the design of external interfaces for autonomous vehicles. We implemented a human-centered, iterative process that combined crowd creativity, structured participatory principles, and expert feedback. Across iterations, participant concepts evolved from simple cues to multimodal systems. Novel suggestions ranged from personalized features, like tracking lights, to inclusive elements, like haptic feedback, progressively refining designs toward greater contextual awareness. To assess outcomes, we compared representative designs: a popular design, reflecting the most frequently proposed ideas, and an innovative design, merging participant innovations with expert input. Both were evaluated against a benchmark through video-based simulations. Results show that the popular design outperformed the alternatives on both interpretability and user experience, with expert-validated innovations performing second best. These findings highlight the potential of scalable participatory methods for shaping emerging technologies.


💡 Research Summary

This paper presents a novel “collaborative crowdsourcing” methodology that blends core principles of participatory design (PD) with the scalability of online crowdsourcing to generate and refine external Human‑Machine Interface (eHMI) concepts for autonomous vehicles (AVs). The authors begin by noting that traditional PD excels at stakeholder involvement but is limited by small sample sizes, high costs, and intensive facilitation, whereas crowdsourcing can reach large, diverse populations at low cost but often yields pragmatic, low‑creativity outputs. To bridge this gap, they propose an iterative, human‑centered process that (1) seeds participants with a minimal set of well‑known eHMI elements (text messages, light strips, icons, etc.) organized in a tree‑based visualization, (2) asks crowdworkers to design eHMIs for two contrasting scenarios—a low‑risk, signal‑controlled crossing and a high‑risk, un‑signaled crowded crossing—and (3) incorporates expert feedback after each round.
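The paper describes this tree only at a conceptual level; a minimal sketch of how seeded concepts and participant remixes could be organized as a tree (class and field names here are our own illustration, not the authors' implementation) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class DesignNode:
    """One eHMI concept in the evolving design tree (illustrative only)."""
    label: str                       # e.g. "light strip", "text message"
    rationale: str = ""              # participant's short justification
    children: list = field(default_factory=list)

    def add_child(self, node):
        self.children.append(node)
        return node

# Seed the tree with the minimal set of well-known eHMI elements.
root = DesignNode("eHMI seeds")
for seed in ["text message", "light strip", "icon"]:
    root.add_child(DesignNode(seed))

# A later participant remixes an existing seed into a new concept,
# which becomes a child node that subsequent participants can build on.
strip = root.children[1]
strip.add_child(DesignNode("tracking light",
                           "light follows the pedestrian while crossing"))
```

Parent-child links are what let later crowdworkers see the lineage of an idea and choose to build upon, remix, or re-imagine it.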

In the expert phase, an independent human‑factors specialist blind to the study’s aims evaluates each clustered concept on two dimensions: effectiveness (how clearly the design communicates vehicle awareness and intent) and feasibility (technical implementability), using a 7‑point Likert scale and brief justification. This feedback is embedded as parent nodes in the visualization tree, guiding subsequent participants to build upon, remix, or re‑imagine prior ideas. The crowd phase follows a human‑based genetic‑algorithm framework: participants view the evolving tree, rate randomly selected designs for creativity, and then submit their own sketches with short rationales. Novelty is quantified (5 = new modality, 3 = substantial refinement, 1 = minor tweak) and data collection stops when the average novelty score of the last five participants falls below 2 and no new modality appears in three consecutive submissions. Across four iterations, roughly 1,200 sketches were collected, showing a progression from simple visual cues to multimodal concepts such as tracking lights and haptic feedback.
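The stopping rule stated above can be sketched directly from the paper's description (parameter names and the example data are ours): stop once the mean novelty of the last five submissions drops below 2 and no new modality has appeared in three consecutive submissions.

```python
def should_stop(novelty_scores, new_modality_flags,
                window=5, threshold=2.0, streak=3):
    """Return True when data collection should stop, per the rule
    described in the paper: the mean novelty of the last `window`
    participants is below `threshold` AND no new modality appeared
    in the last `streak` consecutive submissions."""
    if len(novelty_scores) < window or len(new_modality_flags) < streak:
        return False
    recent_mean = sum(novelty_scores[-window:]) / window
    no_new_modality = not any(new_modality_flags[-streak:])
    return recent_mean < threshold and no_new_modality

# Novelty scale: 5 = new modality, 3 = substantial refinement, 1 = minor tweak.
scores = [5, 3, 3, 1, 1, 1, 3, 1]
flags = [True, False, False, False, False, False, False, False]
should_stop(scores, flags)  # True: mean of last five is 1.4, no new modality
```

Checking both conditions jointly guards against stopping during a lull that still contains an occasional new modality.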

For final evaluation, three design families are compared in an online video‑based experiment: (1) the “popular design,” which aggregates the most frequently suggested visual cues; (2) the “innovative design,” which merges crowd‑generated ideas with expert‑validated novel modalities; and (3) a benchmark set drawn from prior eHMI literature (standard light strips, icons, and text). Participants view simulated AVs equipped with each eHMI and answer questionnaires measuring (a) interpretability (correct identification of vehicle intent) and (b) user experience (perceived safety, trust, and overall satisfaction). Results show that the popular design outperforms both alternatives on interpretability (≈ 84 % correct) and user experience (average 4.2 / 5), while the innovative design ranks second (≈ 78 % interpretability, 3.9 / 5). The benchmark designs lag behind (≈ 71 % interpretability, 3.5 / 5). Statistical analysis confirms the superiority of the popular design (p < 0.05).
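The summary reports p < 0.05 but not the test used or the sample sizes. One standard way to compare two interpretability accuracies is a two-proportion z-test; the sketch below uses purely illustrative trial counts (n = 100 per condition is our assumption, not the paper's).

```python
from math import sqrt, erfc

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test on success rates p1 (of n1
    trials) and p2 (of n2 trials), using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the normal CDF
    return z, p_value

# Popular design (~84 % correct) vs. benchmark (~71 %), assuming
# 100 trials per condition (hypothetical sample sizes).
z, p = two_proportion_z(0.84, 100, 0.71, 100)
```

Under these assumed sample sizes the difference is significant at the 5 % level, consistent with the reported result.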

The authors draw several insights. First, large‑scale, low‑cost participation can reveal robust preference patterns—specifically a strong bias toward familiar, standardized visual signals—suggesting that even in an open‑ended ideation setting, users gravitate toward designs that feel intuitive and safe. Second, expert‑guided novelty can introduce technically promising modalities (e.g., tracking lights, haptic cues) that improve performance relative to baseline but still fall short of the crowd‑driven consensus, highlighting a tension between innovation and immediate usability. Third, the iterative tree visualization and feedback loop effectively sustain creativity across rounds, as evidenced by increasing novelty scores early on and a gradual shift toward multimodal concepts.

Limitations are acknowledged. The evaluation isolates visual components, omitting audio or tactile channels that are integral to real‑world eHMIs; the video‑based paradigm cannot fully replicate the risk and attention dynamics of actual street crossings; and the crowdsourcing platform introduces sampling bias (e.g., over‑representation of tech‑savvy, younger participants).

Future work is outlined: extending the method to multimodal designs, conducting on‑road user studies with instrumented AVs, diversifying the participant pool to include older adults and people with visual impairments, and developing automated tools to manage the expert‑feedback integration and tree visualization at even larger scales.

In conclusion, the collaborative crowdsourcing approach demonstrates that participatory design principles can be successfully scaled, yielding user‑generated eHMI concepts that not only match but surpass traditional benchmark designs in interpretability and user experience. This work offers a practical, evidence‑based framework for involving the public in the design of emerging technologies such as autonomous vehicles, balancing imaginative exploration with expert validation to produce feasible, user‑centered solutions.
