Anchoring Bias in Online Voting
Online voting with explicit ratings should largely reflect people's preferences and the quality of the rated objects, but ratings are never fully rational: they can be affected by many hard-to-predict factors such as mood, weather, and other people's votes. By analyzing two real systems, this paper reveals a systematic bias embedded in individual decision-making processes: people tend to give a low rating after a low rating, and a high rating after a high rating. This so-called *anchoring bias* is validated via extensive comparisons with null models, and, numerically, its extent decays logarithmically with the number of intervening votes. Our findings can inform the design of recommender systems and serve as an important complement to previous knowledge about anchoring effects in financial trading, performance judgments, auctions, and other settings.
💡 Research Summary
The paper investigates whether users’ previous ratings influence their subsequent ratings in online voting platforms, a phenomenon known as anchoring bias. Using two publicly available datasets—MovieLens (6040 users, 3952 movies, over one million 1‑to‑5 star ratings) and WikiLens (289 users, 4951 items, 269,370 ratings on a 0.5‑step scale)—the authors first characterize the basic statistics of the systems. Both user‑degree and item‑degree distributions follow a stretched‑exponential form rather than a pure power law, reflecting the limited activity range and the imposed minimum number of votes per user/item.
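As an aside, a stretched‑exponential distribution P(k) ∝ exp(−(k/k₀)^β) with 0 < β < 1 can be recognized by log‑linearization: ln(−ln(P/c)) is linear in ln k with slope β. The sketch below demonstrates this on synthetic data; the constants c, k₀, and β are illustrative assumptions, not values reported by the paper.

```python
import numpy as np

# Stretched-exponential form reported for the degree distributions:
# P(k) = c * exp(-(k / k0)**beta), with 0 < beta < 1.
# Parameters here are illustrative, not fitted to MovieLens/WikiLens.
c, k0, beta = 1.0, 30.0, 0.6
k = np.arange(2, 200, dtype=float)
P = c * np.exp(-(k / k0) ** beta)

# Log-linearize: ln(-ln(P/c)) = beta*ln(k) - beta*ln(k0),
# so a straight-line fit in (ln k, ln(-ln(P/c))) space recovers beta.
y = np.log(-np.log(P / c))
slope, intercept = np.polyfit(np.log(k), y, 1)
# slope recovers beta = 0.6 on this noiseless synthetic data
```

On real degree data one would bin the empirical distribution first and estimate c from the normalization; the transformation itself is unchanged.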
To detect anchoring, the authors define “outliers”: items whose average rating is below 2.0 (low‑quality outliers, LQO) or above 4.5 (high‑quality outliers, HQO). They then examine the ratings that immediately follow an outlier vote (A⁻ after LQO, A⁺ after HQO). The average A⁺ rating is around 4.1–4.2, while A⁻ averages 2.6–2.7, and these values differ systematically from the corresponding item‑average and user‑average baselines. Although outlier‑adjacent votes constitute only about 2% of all votes, the pattern already suggests that the preceding rating serves as an anchor for the next one.
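A minimal sketch of this outlier‑adjacency analysis, assuming each user's votes are available as a time‑ordered list of (item, rating) pairs. The 2.0 and 4.5 thresholds come from the paper; the function name and toy data are illustrative.

```python
def ratings_after_outliers(sequence, item_mean, low=2.0, high=4.5):
    """sequence: a user's time-ordered list of (item, rating) pairs.
    Returns (A_minus, A_plus): the ratings cast immediately after a
    vote on a low-quality outlier (mean < low) or a high-quality
    outlier (mean > high), respectively."""
    a_minus, a_plus = [], []
    # Walk consecutive pairs: the first vote's item decides the
    # outlier class, the second vote's rating is collected.
    for (item, _), (_, next_rating) in zip(sequence, sequence[1:]):
        if item_mean[item] < low:
            a_minus.append(next_rating)
        elif item_mean[item] > high:
            a_plus.append(next_rating)
    return a_minus, a_plus

# Toy data: m1 is an LQO (mean 1.8), m2 an HQO (mean 4.7).
item_mean = {"m1": 1.8, "m2": 4.7, "m3": 3.2}
seq = [("m1", 2), ("m3", 3), ("m2", 5), ("m3", 4)]
a_minus, a_plus = ratings_after_outliers(seq, item_mean)
# a_minus == [3] (the vote after m1), a_plus == [4] (after m2)
```

Averaging `a_plus` and `a_minus` over all users yields the A⁺ and A⁻ statistics described above.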
Because the outlier analysis covers a tiny fraction of the data, the authors extend the investigation to the full rating sequences of each user. They normalize each rating in four ways (subtracting the global mean, the item mean, the user mean, or both item and user means) and compute the Pearson correlation coefficient R_i between successive ratings for each user. In the empirical data, the distribution of R_i is strongly skewed toward positive values; more than 70% of users have R_i > 0, with a mean around 0.38–0.42. By contrast, a null model that randomly permutes each user’s rating order yields a distribution centered at zero, with less than 50% of users showing positive R_i. This comparison confirms that the observed positive serial correlation is not a statistical artifact.
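The successive‑rating correlation and its permutation null model can be sketched as follows. The four normalization schemes are omitted for brevity, and the "anchored" sequence is synthetic and illustrative, not the MovieLens data.

```python
import numpy as np

def serial_correlation(ratings):
    """Pearson correlation between each rating and the next one
    in a user's time-ordered sequence (R_i in the summary)."""
    r = np.asarray(ratings, dtype=float)
    return np.corrcoef(r[:-1], r[1:])[0, 1]

def null_correlation(ratings, rng):
    """Null model: shuffle the user's rating order, destroying any
    temporal structure, then recompute the serial correlation."""
    return serial_correlation(rng.permutation(ratings))

rng = np.random.default_rng(42)
# Synthetic anchored sequence: each rating is pulled toward the
# previous one (AR(1)-style; purely illustrative).
ratings = [3.0]
for _ in range(500):
    ratings.append(0.6 * ratings[-1] + 0.4 * rng.uniform(1, 5))

r_emp = serial_correlation(ratings)
r_null = np.mean([null_correlation(ratings, rng) for _ in range(200)])
# r_emp is clearly positive; r_null hovers near zero.
```

Comparing the per‑user `r_emp` distribution against the shuffled baseline is exactly the kind of contrast the paper uses to rule out a statistical artifact.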
To assess the temporal persistence of the bias, the authors define an L‑step correlation R_i(L), correlating ratings separated by L positions (e.g., rating 1 with rating L + 1). They find that for small L the empirical R_i(L) exceeds the null model substantially, and the difference ΔR(L) decays logarithmically with L: ΔR(L) ≈ A − B log L, where A ≈ 0.08 and B ≈ 0.04 for both datasets. This logarithmic decay indicates that anchoring bias persists over many voting events, diminishing slowly rather than disappearing after a single step.
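The L‑step correlation generalizes the successive‑rating correlation by shifting the pairing by L positions. The sketch below computes R_i(L) on a synthetic AR(1)‑style sequence (illustrative, not the empirical data), showing the correlation shrinking slowly as the lag grows, as in the reported ΔR(L) ≈ A − B log L decay.

```python
import numpy as np

def lag_correlation(ratings, L):
    """R_i(L): Pearson correlation between ratings separated by L
    positions in a user's sequence (L = 1 recovers R_i)."""
    r = np.asarray(ratings, dtype=float)
    if L >= len(r) - 1:
        raise ValueError("lag L too large for this sequence")
    return np.corrcoef(r[:-L], r[L:])[0, 1]

# Illustrative anchored sequence (not real rating data).
rng = np.random.default_rng(7)
ratings = [3.0]
for _ in range(2000):
    ratings.append(0.7 * ratings[-1] + 0.3 * rng.uniform(1, 5))

corrs = [lag_correlation(ratings, L) for L in (1, 2, 4, 8)]
# corrs decreases with L: the influence of earlier votes fades
# gradually rather than vanishing after one step.
```

Fitting the empirical ΔR(L) = R(L) − R_null(L) against log L would then yield the A and B coefficients reported in the paper.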
Importantly, the bias remains robust across all four normalization schemes, demonstrating that it is not driven by individual users’ overall rating tendencies (e.g., “generous” versus “harsh” raters) or by intrinsic item quality differences. Instead, the immediate prior rating itself acts as a psychological anchor that biases the next judgment.
The authors discuss the implications for recommender‑system design. Many collaborative‑filtering algorithms assume rating independence; the presence of anchoring bias violates this assumption and may lead to systematic prediction errors. Incorporating an anchoring decay factor, weighting recent ratings more heavily, or explicitly modeling serial correlation could improve recommendation accuracy and robustness against manipulation. Moreover, understanding anchoring can aid in designing better user interfaces that mitigate unwanted bias (e.g., by hiding previous ratings or randomizing presentation order).
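As one hypothetical way to act on this (an assumption of this summary, not a method from the paper), a recommender could partially remove the anchor before training: subtract a small fraction of the previous rating's deviation from the user's mean. The function name and the coefficient `alpha` are illustrative.

```python
def debias_rating(rating, prev_rating, user_mean, alpha=0.1):
    """Hypothetical anchoring correction (not the paper's method):
    subtract a fraction alpha of the previous rating's deviation
    from the user's mean before feeding the rating to a
    collaborative-filtering model."""
    return rating - alpha * (prev_rating - user_mean)

# A 5-star vote cast right after another 5-star vote is nudged
# down; the same vote cast after a 1-star vote is nudged up.
after_high = debias_rating(5.0, 5.0, 3.0)  # 4.8
after_low = debias_rating(5.0, 1.0, 3.0)   # 5.2
```

In practice `alpha` could itself decay logarithmically with the gap between votes, mirroring the measured ΔR(L) curve.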
In conclusion, the study provides strong empirical evidence that anchoring bias—well documented in financial, auction, and judgment contexts—also operates in online rating systems. The bias is quantifiable, statistically significant, and exhibits a logarithmic decay over voting intervals, suggesting a long‑lasting influence on user behavior. These findings enrich the literature on human decision‑making in digital environments and offer concrete directions for more nuanced, bias‑aware recommendation algorithms.