Maximin Relative Improvement: Fair Learning as a Bargaining Problem
When deploying a single predictor across multiple subpopulations, we propose a fundamentally different approach: interpreting group fairness as a bargaining problem among subpopulations. This game-theoretic perspective reveals that existing robust optimization methods, such as minimizing worst-group loss or regret, correspond to classical bargaining solutions and embody different fairness principles. We propose relative improvement, the ratio of the actual risk reduction to the potential reduction from a baseline predictor, which recovers the Kalai-Smorodinsky solution. Unlike absolute-scale methods, whose objectives may not be comparable when groups differ in potential predictability, relative improvement carries axiomatic justification, including scale invariance and individual monotonicity. We establish finite-sample convergence guarantees under mild conditions.
💡 Research Summary
The paper introduces a novel perspective on group‑fair machine learning by casting the problem of deploying a single predictor across multiple subpopulations as a cooperative bargaining game. Traditional robust‑optimization approaches—such as Group Distributionally Robust Optimization (Group‑DRO) that minimizes the worst‑group risk, or minimax regret that minimizes the worst‑group loss relative to each group’s own optimum—are shown to correspond to classical bargaining solutions that implicitly encode distinct fairness principles. However, these methods rely on absolute loss or regret scales, which can be misleading when groups differ substantially in their intrinsic predictability.
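The contrast between the two robust objectives described above can be made concrete with a toy example. The sketch below uses hypothetical per-group risks for two candidate predictors (all numbers and names such as `risks` and `group_opt` are illustrative assumptions, not the paper's setup) and shows that worst-group risk and worst-group regret can select different predictors:

```python
# Toy comparison of the two robust objectives: Group-DRO (worst-group risk)
# vs. minimax regret (worst-group risk minus the group's own optimum).
# All numbers are illustrative placeholders.

# Per-group risks R_g(h) for two candidate predictors.
risks = {
    "h1": {"A": 0.30, "B": 0.10},
    "h2": {"A": 0.25, "B": 0.20},
}
# Each group's best attainable risk min_h' R_g(h') (e.g., over a richer class).
group_opt = {"A": 0.24, "B": 0.02}

def worst_group_risk(h):
    """Group-DRO objective: max_g R_g(h)."""
    return max(risks[h].values())

def worst_group_regret(h):
    """Minimax-regret objective: max_g (R_g(h) - min_h' R_g(h'))."""
    return max(r - group_opt[g] for g, r in risks[h].items())

best_dro = min(risks, key=worst_group_risk)       # h2: worst risk 0.25 < 0.30
best_regret = min(risks, key=worst_group_regret)  # h1: worst regret 0.08 < 0.18
print(best_dro, best_regret)  # → h2 h1
```

Here the two principles disagree: Group-DRO favors `h2` because group A's absolute risk dominates, while minimax regret favors `h1` because group B's gap to its own optimum is what matters on the regret scale.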
To address this, the authors propose the relative improvement metric: for a group \(g\), a baseline predictor \(h_0\), and a candidate predictor \(h\),

\[
\mathrm{RI}_g(h) \;=\; \frac{R_g(h_0) - R_g(h)}{R_g(h_0) - \min_{h'} R_g(h')},
\]

the ratio of the risk reduction that \(h\) actually achieves for group \(g\) to the largest reduction any predictor could achieve over the baseline. The proposed objective maximizes the minimum relative improvement across groups, \(\max_h \min_g \mathrm{RI}_g(h)\), which recovers the Kalai-Smorodinsky bargaining solution.
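The relative-improvement metric and the maximin selection rule can be sketched in a few lines. This is a minimal illustration under assumed inputs: the per-group risks of the baseline, each group's best attainable risk, and the risks of candidate predictors are all hypothetical placeholders, and the names (`baseline`, `group_opt`, `risks`) are not from the paper:

```python
# Sketch of relative improvement RI_g(h) and maximin selection.
# Inputs are hypothetical per-group risks, not the paper's implementation.

baseline = {"A": 0.40, "B": 0.30}   # R_g(h0) for a baseline predictor h0
group_opt = {"A": 0.10, "B": 0.25}  # min_h' R_g(h') for each group
risks = {
    "h1": {"A": 0.16, "B": 0.29},
    "h2": {"A": 0.25, "B": 0.26},
}

def relative_improvement(h, g):
    """RI_g(h) = (R_g(h0) - R_g(h)) / (R_g(h0) - min_h' R_g(h'))."""
    return (baseline[g] - risks[h][g]) / (baseline[g] - group_opt[g])

def min_ri(h):
    """Worst-off group's relative improvement under predictor h."""
    return min(relative_improvement(h, g) for g in baseline)

best = max(risks, key=min_ri)  # maximin relative improvement → "h2"
```

Note the scale invariance at work: `h1` cuts group A's risk sharply in absolute terms, but group B's relative improvement under `h1` is only 0.2 because B had little room to improve; the maximin rule instead picks `h2`, whose worst-group relative improvement is 0.5.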