Contrarian Motives in Social Learning

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

We study sequential social learning with endogenous information acquisition when agents have a taste for nonconformity. Each agent observes predecessors’ actions, chooses whether to acquire a private signal (and its precision), and then selects between two actions. Payoffs reward correctness and add a history-based bonus for taking the less popular action, so equilibrium inference remains Bayesian without fixed points in anticipated popularity. In a Gaussian-quadratic specification, optimal actions are posterior thresholds that shift linearly with observed popularity and contrarian intensity, tilting decisions against the majority. We solve the precision choice problem with a fixed entry cost and a convex cost of precision. Whenever the no-signal action coincides with the observed majority, stronger contrarian motives weakly increase the maximized value of information and enlarge the set of histories in which agents invest in signals. We also derive comparative statics for thresholds and choice probabilities. In particular, increasing contrarian intensity reduces the likelihood of taking the currently popular action in both states.
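The abstract's threshold rule can be made concrete. Under the stated payoff (a correctness reward of 1 plus a contrarian bonus k·(1−p) for the chosen action's popularity p), comparing the expected payoffs of the two actions gives a cutoff on the posterior q = P(θ=1 | info) of 1/2 + (k/2)(2p₁ − 1), where p₁ is the observed popularity of action 1. The sketch below follows only this payoff specification; function names are illustrative, not the paper's notation.

```python
def action_threshold(k: float, p1: float) -> float:
    """Posterior cutoff for taking action 1.

    Payoff: 1{a == theta} + k * (1 - popularity of chosen action).
    Take a=1 iff q + k*(1 - p1) >= (1 - q) + k*(1 - (1 - p1)),
    i.e. iff q >= 1/2 + (k/2) * (2*p1 - 1).
    """
    return 0.5 + 0.5 * k * (2.0 * p1 - 1.0)

def choose_action(q: float, k: float, p1: float) -> int:
    """Optimal action given posterior q, contrarian intensity k, popularity p1."""
    return 1 if q >= action_threshold(k, p1) else 0
```

Note that the cutoff is linear in both p₁ and k, matching the comparative statics in the abstract: when action 1 is the majority (p₁ > 1/2), a larger k raises the bar for following it.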


💡 Research Summary

This paper investigates how a taste for non‑conformity—modeled as a “contrarian bonus” for choosing the less popular action—affects sequential social learning when agents also endogenously decide whether to acquire private information and how precise that information should be. The authors build a canonical cascade framework: a binary state θ∈{0,1} is drawn once, each arriving agent observes the full history of past actions, may pay a fixed entry cost F plus a convex cost c·ρ² to obtain a Gaussian signal s∼N(θ,1/ρ) of chosen precision ρ≥0, and then chooses an action a∈{0,1}. Payoffs consist of (i) a correctness reward of 1 if a=θ, and (ii) a contrarian bonus k·(1−p), where p is the observed popularity of the chosen action among predecessors (p∈[0,1]).
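A minimal numerical sketch of the precision-choice problem, assuming the Gaussian signal s∼N(θ,1/ρ), the cost F + c·ρ², and the bonus k·(1−p) described above. The Monte Carlo evaluation and grid search here are illustrative devices, not the paper's solution method, and all parameter values are made up.

```python
import math
import random

def posterior(prior: float, s: float, rho: float) -> float:
    """P(theta=1 | s) for s ~ N(theta, 1/rho), theta in {0,1}.

    With variance 1/rho, log f(s|1)/f(s|0) = -(rho/2)*((s-1)**2 - s**2)
    = rho * (s - 1/2).
    """
    lr = math.exp(rho * (s - 0.5))
    return prior * lr / (prior * lr + (1.0 - prior))

def expected_value(prior, rho, k, p1, n=20000, seed=0):
    """Monte Carlo expected payoff of acting optimally on a signal of precision rho."""
    rng = random.Random(seed)
    thr = 0.5 + 0.5 * k * (2.0 * p1 - 1.0)  # contrarian-shifted posterior cutoff
    total = 0.0
    for _ in range(n):
        theta = 1 if rng.random() < prior else 0
        if rho > 0:
            s = theta + rng.gauss(0.0, 1.0 / math.sqrt(rho))
            q = posterior(prior, s, rho)
        else:
            q = prior  # no signal: act on the prior alone
        a = 1 if q >= thr else 0
        bonus = k * (1.0 - (p1 if a == 1 else 1.0 - p1))
        total += (1.0 if a == theta else 0.0) + bonus
    return total / n

def optimal_precision(prior, k, p1, F, c, grid):
    """Grid search over rho: invest only if some net value beats the no-signal value."""
    best_rho, best_net = 0.0, expected_value(prior, 0.0, k, p1)
    for rho in grid:
        net = expected_value(prior, rho, k, p1) - F - c * rho**2
        if net > best_net:
            best_rho, best_net = rho, net
    return best_rho, best_net
```

For example, a prohibitive entry cost F drives the chosen precision to zero, while the paper's result suggests that when the no-signal action matches the majority, raising k enlarges the region where a strictly positive ρ is chosen.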

