The Eye-Head Mover Spectrum: Modelling Individual and Population Head Movement Tendencies in Virtual Reality
People differ in how much they move their head versus their eyes when shifting gaze, yet such tendencies remain largely unexplored in HCI. We introduce head movement tendencies as a fundamental dimension of individual difference in VR and provide a quantitative account of their population-level distribution. Using a 360° video free-viewing dataset (N=87), we model head contributions to gaze shifts with a hinge-based parametric function, revealing a spectrum of strategies from eye-movers to head-movers. We then conduct a user study (N=28) combining 360° video viewing with a short controlled task using gaze targets. While parameter values differ across tasks, individuals show partial alignment in their relative positions within the population, indicating that tendencies are meaningful but shaped by context. Our findings establish head movement tendencies as an important concept for VR and highlight implications for adaptive systems such as foveated rendering, viewport alignment, and multi-user experience design.
💡 Research Summary
The paper introduces a continuous “eye‑head mover spectrum” to capture individual differences in how much users rely on head rotation versus eye movement when shifting gaze in virtual reality. Prior HCI work has treated this variation as a binary classification (“head movers” vs. “non‑head movers”) using fixed thresholds, which obscures the nuanced, amplitude‑dependent nature of eye‑head coordination.
To address this, the authors first analyze a large open‑source 360° video free‑viewing dataset (D‑SAV360) comprising 87 participants after quality filtering. For each gaze shift they compute the total angular displacement (θ) and the accompanying head yaw (φ). The head contribution ratio h(θ)=φ/θ is near zero for small angles and begins to increase once the target eccentricity exceeds a personal threshold. This relationship is modeled with a hinge‑type piecewise linear function: h(θ)=0 for θ≤θ₀ and h(θ)=α·(θ‑θ₀) for θ>θ₀, where θ₀ (the “head‑onset angle”) and α (the “head‑gain slope”) are fitted per participant. The resulting parameter distributions form a smooth continuum rather than discrete clusters, demonstrating that users span a spectrum from “eye‑dominant” to “head‑dominant” strategies.
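The per-participant fit described above can be sketched as follows. This is a minimal illustration, not the authors' code: the participant data here is synthetic, and the parameter names (`theta0`, `alpha`) simply mirror the θ₀ and α of the hinge model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hinge(theta, theta0, alpha):
    """Hinge model of the head contribution ratio:
    h(theta) = 0 for theta <= theta0, alpha * (theta - theta0) otherwise."""
    return alpha * np.maximum(theta - theta0, 0.0)

# Synthetic gaze shifts for one hypothetical participant (degrees).
rng = np.random.default_rng(0)
theta = rng.uniform(1.0, 60.0, 300)                 # gaze-shift amplitudes
h_obs = np.clip(hinge(theta, 15.0, 0.02)            # true theta0=15, alpha=0.02
                + rng.normal(0.0, 0.03, theta.size), 0.0, 1.0)

# Fit the head-onset angle and head-gain slope for this participant.
(theta0_hat, alpha_hat), _ = curve_fit(
    hinge, theta, h_obs, p0=[10.0, 0.01],
    bounds=([0.0, 0.0], [90.0, 1.0]))
print(theta0_hat, alpha_hat)  # should recover roughly 15 and 0.02
```

Repeating this fit per participant yields the (θ₀, α) distributions whose smooth spread constitutes the eye-head mover spectrum.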
The second contribution is a controlled user study with 28 participants. Each participant performed two tasks: (1) free‑viewing the same 360° videos used in the first analysis, and (2) a brief target‑gazing task where a point appears on the periphery and the user must look at it as quickly as possible. The same hinge model is fitted to both task datasets. While the absolute values of θ₀ and α shift between tasks (reflecting task‑driven modulation of head use), the relative ranking of participants remains moderately stable (Pearson ρ≈0.68). This indicates that head‑movement tendency is a personal trait that can be flexibly expressed depending on context.
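The cross-task stability check can be illustrated with a short sketch. The numbers below are synthetic stand-ins for the per-participant α values from the two tasks (the paper reports ρ≈0.68 on real data; the relationship simulated here is hypothetical).

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)

# Hypothetical head-gain slopes for 28 participants in each task.
alpha_free = rng.uniform(0.005, 0.04, 28)       # free viewing
alpha_target = (0.6 * alpha_free + 0.01         # task-driven shift in scale
                + rng.normal(0.0, 0.003, 28))   # plus individual noise

r, _ = pearsonr(alpha_free, alpha_target)       # linear association
rho, _ = spearmanr(alpha_free, alpha_target)    # rank (ordering) stability
print(round(r, 2), round(rho, 2))
```

A high rank correlation despite shifted absolute values is exactly the pattern the study reports: the trait's expression changes with task, but participants' relative positions on the spectrum largely persist.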
The authors discuss several practical implications. In foveated rendering and adaptive viewport streaming, knowing a user’s α allows the system to anticipate head motion and pre‑fetch high‑resolution tiles, reducing latency for head‑dominant users. In multi‑user collaboration, differences in θ₀ and α affect how closely participants’ viewpoints align, influencing joint attention cues; adaptive UI elements could compensate for large head‑movement disparities. Moreover, the spectrum can inform the design of gaze‑plus‑head interaction techniques, ergonomic assessments, and even biometric authentication based on eye‑head dynamics.
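One way the prefetching implication could look in practice is sketched below. This is an assumption-laden illustration, not a described system: the function names (`predicted_head_yaw`, `prefetch_margin`) and the base-margin heuristic are invented for the example.

```python
def predicted_head_yaw(theta, theta0, alpha):
    """Expected head yaw (degrees) for a gaze shift of amplitude theta,
    under the hinge model h(theta) = alpha * max(theta - theta0, 0),
    with the ratio capped at 1 (head cannot contribute more than the shift)."""
    h = min(max(alpha * (theta - theta0), 0.0), 1.0)
    return theta * h

def prefetch_margin(theta, theta0, alpha, base_margin=5.0):
    """Widen the high-resolution tile margin in the direction of an
    anticipated gaze shift, proportionally to the predicted head yaw."""
    return base_margin + predicted_head_yaw(theta, theta0, alpha)

# For the same 40-degree gaze shift, a head-mover (low onset, steep gain)
# warrants a wider prefetch margin than an eye-mover:
margin_head = prefetch_margin(40.0, theta0=8.0, alpha=0.025)
margin_eye = prefetch_margin(40.0, theta0=25.0, alpha=0.008)
print(margin_head, margin_eye)
```

The same per-user parameters could drive the other adaptations mentioned: compensating viewpoint misalignment in multi-user scenes, or tuning gaze-plus-head interaction thresholds.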
Limitations include the focus on horizontal eye‑head coordination only, a relatively small controlled‑task sample, and reliance on a single eye‑tracking headset model, which may affect generalizability across devices and vertical movements. Future work should extend the model to three‑dimensional gaze shifts, validate it in dynamic interactive scenes, explore real‑time parameter estimation for on‑the‑fly adaptation, and test its applicability in AR/MR contexts and larger, more diverse user populations.
In summary, the paper provides a novel, continuous parametric framework for quantifying individual head‑movement tendencies in VR, demonstrates its presence at the population level, shows partial consistency across tasks, and outlines how this insight can be leveraged to build more adaptive, comfortable, and socially aware immersive systems.