Social versus Moral preferences in the Ultimatum Game: A theoretical model and an experiment


In the Ultimatum Game (UG) one player, the “proposer”, decides how to allocate a certain amount of money between herself and a “responder”. If the offer is greater than or equal to the responder’s minimum acceptable offer (MAO), the money is split as proposed; otherwise, neither the proposer nor the responder gets anything. The UG has intrigued generations of behavioral scientists because people in experiments blatantly violate the equilibrium predictions that self-interested proposers offer the minimum available non-zero amount and self-interested responders accept. Why are these predictions violated? Previous research has mainly focused on the role of social preferences. Little is known about the role of general moral preferences for doing the right thing, preferences that have been shown to play a major role in other social interactions (e.g., the Dictator Game and the Prisoner’s Dilemma). Here I develop a theoretical model and an experiment designed to pit social preferences against moral preferences. I find that, although people recognize that offering half and rejecting low offers are the morally right things to do, moral preferences have no causal impact on UG behavior. The experimental data are indeed well fit by a model according to which: (i) high UG offers are motivated by inequity aversion and, to a lesser extent, self-interest; (ii) high MAOs are motivated by inequity aversion.
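The UG payoff rule described above is fully specified by the abstract, so it can be sketched directly (function and variable names here are illustrative, not from the paper):

```python
def ultimatum_payoffs(pie, offer, mao):
    """Payoffs (proposer, responder) under the UG rule: the split
    happens as proposed only if the offer meets the responder's MAO."""
    if offer >= mao:
        return pie - offer, offer  # accepted: split as proposed
    return 0, 0                    # rejected: both get nothing
```

For example, with a pie of 10, an offer of 5 against any MAO of 5 or less yields (5, 5), while an offer of 1 against a MAO of 3 yields (0, 0) — the rejection that a purely self-interested responder would never choose.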


💡 Research Summary

The paper investigates whether moral preferences—general motivations to do what is “right”—play a causal role in the Ultimatum Game (UG), or whether behavior is driven primarily by outcome‑based social preferences such as inequity aversion and self‑interest. The author first conducts a small pre‑study with Amazon Mechanical Turk participants, asking them what they consider the morally correct offer and the morally correct minimum acceptable offer (MAO). The overwhelming majority answer that offering a 50‑50 split and demanding at least half are the morally right actions, confirming that participants perceive a moral dimension in the UG.

Building on this, the main experiment is designed to pit moral preferences against social preferences. Participants first play a Trade‑Off Game (TOG) that measures their intrinsic moral preference strength (the willingness to choose a “fair” option over a self‑benefiting one). Then, participants are randomly assigned to different framing conditions in the UG: a “moral” framing that explicitly tells them to act fairly, and a neutral framing. Both proposers and responders make decisions in the UG while their TOG‑derived moral scores are recorded.

Statistical analysis (multivariate regressions) examines how the moral‑preference score, the framing condition, and the standard social‑preference motives (inequity aversion and self‑interest) predict two key outcomes: (i) the size of the proposer’s offer, and (ii) the responder’s MAO. The results show that higher offers are best explained by a combination of inequity aversion and, to a lesser extent, self‑interest. Similarly, higher MAOs are driven primarily by inequity aversion. The moral‑preference score does not have a statistically significant effect on either outcome, and the moral framing manipulation fails to shift behavior in any meaningful direction.
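To make the inequity-aversion account concrete, the sketch below uses the canonical Fehr–Schmidt utility function — an assumption on my part; the paper's exact functional form may differ. A responder with disadvantageous-inequity weight `alpha` accepts an offer only if its utility is at least the rejection payoff of zero, which pins down a MAO:

```python
def fs_utility(own, other, alpha, beta):
    """Fehr-Schmidt utility: material payoff minus weighted
    disadvantageous (alpha) and advantageous (beta) inequity."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def min_acceptable_offer(pie, alpha):
    """Smallest integer offer a Fehr-Schmidt responder accepts:
    accept iff utility of the split is at least 0, the rejection payoff."""
    for offer in range(pie + 1):
        if fs_utility(offer, pie - offer, alpha, beta=0.0) >= 0:
            return offer
    return pie
```

A purely self-interested responder (`alpha = 0`) accepts any non-negative offer, while `alpha = 1` on a pie of 10 requires an offer of at least 4 — the kind of inequity-driven rejection the regressions attribute to responders.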

These findings lead to two major conclusions. First, although participants recognize that “splitting the pie equally” and “rejecting low offers” are morally appropriate, these moral judgments do not translate into actual decision‑making in the UG. Moral preferences, while evident in self‑reports, have no causal impact on UG behavior. Second, the classic social‑preference model—particularly the inequity‑aversion framework—remains the most parsimonious and empirically supported explanation for both proposers’ generous offers and responders’ rejections of unfair splits.

The paper situates its contribution within a broader literature that has largely focused on social preferences, noting that only a handful of prior studies have examined morality in the UG (Kimbrough & Vostroknutov 2016; Eriksson et al. 2017). By providing a controlled experimental test that directly measures moral preference strength and manipulates moral salience, the study fills a gap and demonstrates that, at least in the UG, strategic considerations and fairness concerns rooted in outcome‑based preferences dominate over abstract moral motivations.

The author acknowledges that moral preferences do affect behavior in other games (e.g., the Dictator Game and the Prisoner’s Dilemma), suggesting that the structure of the game determines the relevance of moral motives. Future research could explore hybrid models that allow both moral and social preferences to operate simultaneously, or test whether different cultural contexts amplify the role of morality in the UG. Overall, the paper provides robust evidence that the Ultimatum Game’s deviations from the self‑interest equilibrium are best explained by inequity aversion rather than by a desire to “do the right thing.”

