"So, Tell Me What Users Want, What They Really, Really Want!"

"So, Tell Me What Users Want, What They Really, Really Want!"
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Equating users’ true needs and desires with behavioural measures of “engagement” is problematic. However, good metrics of “true preferences” are difficult to define, as cognitive biases make people’s preferences change with context and exhibit inconsistencies over time. Yet, HCI research often glosses over the philosophical and theoretical depth of what it means to infer what users really want. In this paper, we present an alternative yet very real discussion of this issue, via a fictive dialogue between senior executives in a tech company aimed at helping people live the life they “really” want to live. How will the designers settle on a metric for their product to optimise?


💡 Research Summary

The paper “So, Tell Me What Users Want, What They Really, Really Want!” confronts the longstanding problem in human‑computer interaction and AI research of equating users’ true needs with observable behavioural metrics such as “engagement”. The authors argue that while engagement is easy to measure, it is a poor proxy for genuine desire because cognitive biases, contextual shifts, and temporal instability constantly reshape preferences. They trace this issue back to early HCI thought (e.g., Teitelman’s “Do What I Mean”) and note that many contemporary approaches—especially inverse reinforcement learning (IRL) and other preference‑inference techniques—implicitly assume rational, noise‑free decision making, an assumption contradicted by extensive evidence from behavioural economics (loss aversion, framing effects, hyperbolic discounting) and psychology (dual‑process theory).
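
To make the temporal-instability point concrete, consider hyperbolic discounting, one of the biases cited above. The sketch below is a generic illustration of that phenomenon, not code from the paper; the value function V = A / (1 + kD) and the parameter k = 0.05 are standard textbook choices. It shows a chooser who prefers a larger-later reward when both options are far away, but reverses to the smaller-sooner reward as it becomes imminent, so a purely behavioural log would record contradictory “revealed preferences” from the same person.

```python
# Illustration only: preference reversal under hyperbolic discounting,
# V = A / (1 + k * D), where A is the reward and D the delay in days.
# k = 0.05 is an arbitrary assumption for this demo.

def hyperbolic_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Subjective present value of a reward arriving after `delay_days`."""
    return amount / (1 + k * delay_days)

smaller_sooner = 50.0   # smaller reward, available earlier
larger_later = 100.0    # larger reward, available 30 days after the small one

for days_until_small in (365, 1):  # viewed from far away vs. up close
    v_small = hyperbolic_value(smaller_sooner, days_until_small)
    v_large = hyperbolic_value(larger_later, days_until_small + 30)
    choice = "larger-later" if v_large > v_small else "smaller-sooner"
    print(f"{days_until_small:>3} days out: V(small)={v_small:.2f}, "
          f"V(large)={v_large:.2f} -> prefers {choice}")
```

Run as written, the chooser picks larger-later at 365 days out but smaller-sooner at 1 day out: the inconsistency that makes naive preference inference from behaviour unreliable.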

To illustrate the philosophical and practical stakes, the authors stage a fictional round‑table at a global tech firm, “Gamaface”, whose product is an “algorithmic life service” that nudges billions of users throughout the day. Four archetypal executives debate how to define a metric that truly reflects what users want:

  • Randy Na, a libertarian information architect, insists that whatever users choose, whether cat videos, pizza orders, or anything else, must be taken at face value. He champions a pure behaviour‑based metric, allowing only that occasional self‑reports might be needed to correct for addictive use.
  • Harald Richter, a user researcher grounded in behavioural economics, points out the conflict between System 1 (fast, reward‑seeking) and System 2 (slow, reflective) processes. He suggests that users need tools to surface their reflective preferences rather than being driven solely by immediate clicks.
  • Nichola Machian, the lead ethicist, draws on decades of happiness research and philosophical theories of the good life (hedonism, desire‑satisfaction, objective‑list theories). She argues that universal human goods—meaningful work, social connection, health—should inform the optimisation target, even if users are not yet aware of them.
  • Sunny Zuckebezos, the CEO, tries to reconcile the need for a neutral, “no‑bias” algorithm with the moral imperative to improve lives.

From this dialogue the paper extracts two concrete optimisation metrics:

  1. Engagement with Preferred Options – a hybrid measure that combines observable interaction data (clicks, dwell time) with a periodic 1‑10 self‑assessment scale asking users whether their current actions align with what they “most want to be doing”. Randomised perturbations are injected to encourage exploration, and a Bayesian update continuously refines a personal preference model (see the sketch after this list).

  2. Regret When Reflecting on the Past – a retrospective measure capturing the intensity of users’ regret about past choices. The authors posit that higher regret signals a misalignment between short‑term behaviour and long‑term wellbeing, providing a corrective signal for the system.
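
The summary describes both metrics only at this level of abstraction, and the paper publishes no reference code. The following is a minimal, hypothetical sketch of how the two signals could be combined: a Beta–Bernoulli Bayesian update fusing click/dwell evidence with periodic self-assessments, an epsilon-style random perturbation for exploration, and a retrospective regret score as the corrective signal. All names and constants here (`PreferenceModel`, `regret_score`, weight = 3.0, epsilon = 0.1) are assumptions for illustration, not the authors’ design.

```python
import random

class PreferenceModel:
    """Hypothetical per-user preference model (not from the paper).

    Each option's "is this preferred?" probability carries a Beta(alpha, beta)
    posterior, updated from two evidence streams: behavioural engagement
    (clicks, dwell time) and the periodic 1-10 self-assessments.
    """

    def __init__(self):
        self.params = {}  # option -> [alpha, beta]; Beta(1, 1) uniform prior

    def _get(self, option):
        return self.params.setdefault(option, [1.0, 1.0])

    def update_engagement(self, option, engaged):
        # Behavioural signal: one engagement event counts as one Bernoulli trial.
        ab = self._get(option)
        ab[0 if engaged else 1] += 1.0

    def update_self_report(self, option, score, weight=3.0):
        # Reflective signal: map the 1-10 "most want to be doing" scale to
        # [0, 1] and count it as `weight` pseudo-trials, so that explicit
        # reflection outweighs any single click (weight=3.0 is an assumption).
        p = (score - 1) / 9.0
        ab = self._get(option)
        ab[0] += weight * p
        ab[1] += weight * (1.0 - p)

    def preference(self, option):
        alpha, beta = self._get(option)
        return alpha / (alpha + beta)  # posterior mean preference

    def choose(self, options, epsilon=0.1):
        # Randomised perturbation: with probability epsilon, explore a random
        # option rather than exploiting the current preference estimates.
        if random.random() < epsilon:
            return random.choice(options)
        return max(options, key=self.preference)


def regret_score(model, chosen_options):
    """Metric 2, one possible operationalisation: mean shortfall between the
    best option the model currently knows of and what was actually chosen."""
    if not chosen_options:
        return 0.0
    prefs = [model.preference(o) for o in chosen_options]
    best = max(model.preference(o) for o in model.params)
    return sum(best - p for p in prefs) / len(prefs)
```

In this toy framing, a persistently high regret score flags exactly the misalignment between short-term behaviour and long-term wellbeing that the authors treat as a corrective signal.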

Technically, the authors propose implementing these metrics through an “Autonomy Sense™” overlay that periodically prompts users, anonymises responses locally, and aggregates them in a privacy‑preserving manner. Weightings are dynamically adjusted for cultural, demographic, and socioeconomic factors, ensuring that the optimisation does not impose a monolithic notion of the “good life”.
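
The summary does not specify the privacy mechanism behind the “Autonomy Sense” aggregation. One standard way to realise “anonymise locally, aggregate privately” is local differential privacy via randomised response; the sketch below is an assumption about how such a pipeline could look, not the paper’s design, and `p_truth = 0.75` is an arbitrary choice.

```python
import random

def randomised_response(truthful_answer, p_truth=0.75):
    """Client side: report the honest yes/no answer with probability p_truth,
    otherwise a fair coin flip. The raw answer never leaves the device,
    which yields a (local) differential-privacy guarantee."""
    if random.random() < p_truth:
        return truthful_answer
    return random.random() < 0.5

def estimate_true_rate(noisy_reports, p_truth=0.75):
    """Server side: debias the aggregate. Since
    E[reported yes] = p_truth * true_rate + (1 - p_truth) * 0.5,
    the population rate is recoverable without any individual's answer."""
    observed = sum(noisy_reports) / len(noisy_reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

The cultural, demographic, and socioeconomic weightings described above would then be applied to these debiased aggregates rather than to any individual’s raw responses.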

The paper’s broader contribution is methodological: it reframes the central HCI question “what should we optimise?” as a multi‑dimensional, value‑sensitive problem rather than a single‑objective engineering task. By juxtaposing behavioural, reflective, and normative perspectives, it demonstrates that any robust user‑centric system must blend behavioural signals with mechanisms for users to articulate, reflect upon, and possibly revise their own preferences. This hybrid approach offers a pathway for designers and researchers to move beyond click‑through metrics toward systems that genuinely support human flourishing while respecting autonomy.

