"It's like a pet...but my pet doesn't collect data about me": Multi-person Households' Privacy Design Preferences for Household Robots
Household robots boasting mobility, more sophisticated sensors, and powerful processing models have become increasingly prevalent in the commercial market. However, these features may expose users to unwanted privacy risks, including unsolicited data collection and unauthorized data sharing. While security and privacy researchers thus far have explored people’s privacy concerns around household robots, literature investigating people’s preferred privacy designs and mitigation strategies is still limited. Additionally, the existing literature has not yet accounted for multi-user perspectives on privacy design and household robots. We aimed to fill this gap by conducting in-person participatory design sessions with 15 households to explore how they would design a privacy-aware household robot based on their concerns and expectations. We found that participants did not trust that robots, or their respective manufacturers, would respect the data privacy of household members or operate in a multi-user ecosystem without jeopardizing users’ personal data. Based on these concerns, they generated designs that gave them authority over their data, contained accessible controls and notification systems, and could be customized and tailored to suit the needs and preferences of each user over time. We synthesize our findings into actionable design recommendations for robot manufacturers and developers.
💡 Research Summary
This paper investigates privacy‑aware design preferences for household robots from the perspective of multi‑person households. While prior work has examined privacy concerns around static smart‑home devices, the unique combination of mobility, rich sensor suites, and powerful AI models in modern home robots introduces novel risks that have not been fully explored, especially in shared living environments. To fill this gap, the authors conducted in‑person participatory design (co‑design) sessions with 15 households (36 participants) in a lab that simulated a living room. Participants first completed an online interview describing their home layout and existing smart‑technology use, then interacted with a miniature robot and a floor‑plan replica during the workshop.
The study is organized around three research questions (RQs). RQ1 asks what factors shape households’ perceptions of a shared robot’s privacy. Findings reveal three main drivers: (1) distrust of robots and manufacturers regarding data collection and sharing, (2) uncertainty about a robot’s autonomous movement and sensor reach into private spaces, and (3) divergent privacy sensitivities among household members. RQ2 explores what privacy‑aware designs households prefer. Participants generated concrete ideas clustered into four categories:
- User Profiles – Authenticated, personalized accounts that let each resident control who can view, edit, or delete their data.
- Mapping (Access Control) – Time‑ and space‑based “safe zones” and “restricted zones” that limit where and when the robot may travel, configurable via the floor plan.
- Data Storage & Collection – Preference for local storage, user‑defined retention periods, and a hierarchy of sensor data that favors low‑fidelity motion sensing over high‑resolution video/audio unless an explicit “wake word” is spoken. Participants also wanted physical covers and clear visual/audible indicators whenever high‑risk sensors are active.
- Notifications – Context‑aware alerts that are sent only to relevant users, optionally routed to personal devices (e.g., smartphones).
RQ3 investigates mechanisms influencing whether households would actually adopt these designs. Four key mechanisms emerged: (1) Benefit‑Cost Trade‑off – the robot’s functional benefits must outweigh perceived privacy loss and the effort required to maintain privacy controls; (2) Social Treatment & Attachment – emotional attachment and the way the robot is treated socially affect willingness to grant or restrict access; (3) Intra‑Household Conflict Management – differing privacy preferences can cause friction, so designs must support collaborative decision‑making; and (4) Long‑Term Exposure Effects – experiences over time may shift privacy expectations, leading to revisions of settings.
Methodologically, the study blends pre‑study surveys, hands‑on interaction with a 3‑D printed robot, and scenario‑based discussions (one focusing on data privacy, another on interpersonal privacy). Scenarios were simplified for households with children. The authors used qualitative coding to extract themes and quantify the prevalence of each design recommendation.
The participant sample was relatively well educated (93% held at least a bachelor’s degree) and skewed toward higher income, but it included a range of household sizes (2–4 members) and ages (3–67 years), providing a realistic view of multi‑user dynamics.
Key contributions include: (1) empirical evidence that multi‑person households distrust both robots and manufacturers with personal data; (2) a set of actionable, user‑centric design recommendations that emphasize data sovereignty, granular access control, transparent sensor activation, and personalized notifications; (3) identification of four adoption mechanisms that can guide product managers, policymakers, and HRI researchers in shaping privacy‑first robot ecosystems. The authors argue that integrating these recommendations into robot design pipelines, regulatory frameworks, and user‑education programs will be essential for fostering trust and widespread adoption of household robots.