Simulating Word Suggestion Usage in Mobile Typing to Guide Intelligent Text Entry Design
Intelligent text entry (ITE) methods, such as word suggestions, are widely used in mobile typing, yet improving ITE systems is challenging because the cognitive mechanisms behind suggestion use remain poorly understood, and evaluating new systems often requires long-term user studies to account for behavioral adaptation. We present WSTypist, a reinforcement learning-based model that simulates how typists integrate word suggestions into typing. It builds on recent hierarchical control models of typing, but focuses on the cognitive mechanisms that underlie the high-level decision-making for effectively integrating word suggestions into manual typing: assessing efficiency gains, considering orthographic uncertainties, and accounting for personal reliance on AI support. Our evaluations show that WSTypist simulates diverse human-like suggestion-use strategies, reproduces individual differences, and generalizes across different systems. Importantly, we demonstrate through four design cases how computational rationality models can be used to inform what-if analyses during the design process, by simulating how users might adapt to changes in the UI or in the algorithmic support, reducing the need for long-term user studies.
💡 Research Summary
The paper introduces WSTypist, a reinforcement-learning (RL) based computational rationality model that simulates how mobile typists integrate word-suggestion systems into their typing workflow. Building on earlier hierarchical control models of typing, the authors shift focus from low-level motor planning to high-level decision making: when to glance at the suggestion list, whether to accept a suggestion, and how to balance the expected keystroke savings against the visual-cognitive cost of checking suggestions. The model is formalized as a Partially Observable Markov Decision Process (POMDP) with a supervisory agent that allocates limited working-memory resources, processes noisy observations from eye-gaze and finger actions, and selects actions that maximize a reward function encoding typing speed, error rate, and cognitive load. Three core mechanisms are implemented:

1. Efficiency assessment, which estimates the potential keystroke reduction for the current word given the suggestion candidates.
2. Linguistic uncertainty handling, which evaluates how much a suggestion reduces orthographic ambiguity and spelling errors.
3. Personal reliance, a set of parameters that capture individual differences in trust of AI support, tolerance for interruption, and preference for uninterrupted manual typing.

These mechanisms enable the agent to learn diverse, human-like strategies ranging from aggressive suggestion use (completion, correction, capitalization) to conservative manual typing.
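The trade-off the supervisory agent faces at each step can be illustrated with a small expected-utility sketch. This is not the paper's fitted model: the cost values, the `reliance` scaling, and the two-action simplification are all illustrative assumptions, standing in for the learned POMDP policy described above.

```python
# Toy sketch of the supervisory decision in a WSTypist-like agent:
# at each step, either keep typing the current word manually or pay a
# glance cost to check the suggestion bar. All numeric parameters are
# hypothetical, not the paper's fitted values.

def expected_utility(action, remaining_chars, p_suggestion_correct,
                     keystroke_cost=1.0, glance_cost=2.5, reliance=0.5):
    """Illustrative expected utility (negative cost) of one supervisory action.

    reliance in [0, 1] scales the perceived value of AI support,
    loosely mimicking the model's personal-reliance parameter.
    """
    if action == "type_manually":
        # Finish the word one keystroke at a time.
        return -keystroke_cost * remaining_chars
    if action == "check_suggestion":
        # Pay a glance cost; with probability p the suggestion completes
        # the word in one accept tap, otherwise fall back to typing.
        p = p_suggestion_correct * (0.5 + 0.5 * reliance)
        success = -1.0                              # one accept tap
        failure = -keystroke_cost * remaining_chars  # type it anyway
        return -glance_cost + p * success + (1 - p) * failure
    raise ValueError(f"unknown action: {action}")

def choose_action(remaining_chars, p_correct, reliance=0.5):
    """Pick the action with the highest expected utility."""
    actions = ["type_manually", "check_suggestion"]
    return max(actions, key=lambda a: expected_utility(
        a, remaining_chars, p_correct, reliance=reliance))
```

Even this toy version reproduces the qualitative behavior the paper describes: checking pays off for long remaining words with accurate suggestions, while short words are faster to finish manually.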
Training data combine real‑world typing logs with eye‑tracking measurements, providing empirically grounded benchmarks that capture not only speed and error metrics but also suggestion‑checking frequency, failed checks, and gaze allocation. Evaluation shows that WSTypist reproduces observed patterns across user groups (high‑speed vs. low‑speed typists, heavy vs. light suggestion users) and generalizes to systems without suggestions or with auto‑correction only.
The authors demonstrate the practical value of the model through four "what-if" design case studies:

1. Varying suggestion-algorithm accuracy reveals a sweet spot around 60-70 %, where users gain efficiency without excessive over-reliance.
2. Prioritizing longer words in the suggestion list yields measurable speed gains.
3. Tailoring suggestions to personal strategies, such as emphasizing capitalized words, benefits users who frequently type proper nouns.
4. Relocating the top suggestion into the input field and exposing it via a keyboard shortcut reduces gaze shifts and improves overall performance.

These simulations provide quantitative predictions of user adaptation that would otherwise require costly longitudinal studies.
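The first case study's accuracy sweep can be mimicked with a back-of-envelope model of net keystrokes saved per word. This is a hypothetical toy calculation, not the paper's simulation; `avg_word_len`, `glance_cost_keys`, and the assumption that checking frequency tracks accuracy are all made up for illustration.

```python
# Toy what-if analysis: expected net keystrokes saved per word as a
# function of suggestion accuracy. All parameters are illustrative,
# not values from the paper.

def net_savings_per_word(accuracy, avg_word_len=8.0, glance_cost_keys=2.0,
                         check_rate=None):
    """Expected keystrokes saved per word, minus glancing overhead.

    check_rate: how often the user glances at the suggestion bar;
    assumed here to track accuracy (users check more when it helps more).
    """
    if check_rate is None:
        check_rate = accuracy
    # If the suggestion is correct, accept it mid-word: save roughly
    # half the word's keystrokes, minus one tap to accept.
    saved_if_correct = avg_word_len / 2 - 1
    return check_rate * (accuracy * saved_if_correct - glance_cost_keys)

if __name__ == "__main__":
    for acc in (0.5, 0.6, 0.7, 0.8, 0.9):
        print(f"accuracy {acc:.0%}: net savings "
              f"{net_savings_per_word(acc):+.2f} keys/word")
```

With these toy parameters the break-even point lands near two-thirds accuracy: below it, glancing costs more than the suggestions save, which is the kind of threshold effect behind the 60-70 % sweet spot the simulations identify.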
Finally, the paper contributes (a) the WSTypist model itself, (b) a suite of empirically derived benchmarking metrics for suggestion‑aware typing, and (c) an open‑source implementation of both the model and a flexible suggestion engine, enabling other researchers and designers to conduct rapid, simulation‑based evaluations of intelligent text‑entry systems. The work bridges a gap between cognitive modeling and HCI design, showing that computational rationality can guide the development of more efficient, user‑friendly mobile typing interfaces.