On Approximate Nash Equilibria in Mean Field Games
In the context of large population symmetric games, approximate Nash equilibria are introduced through equilibrium solutions of the corresponding mean field game, in the sense that the individual gain from optimal unilateral deviation under such strategies converges to zero as the population size grows. We show that these strategies satisfy an $L^\infty$ notion of approximate Nash equilibrium which guarantees that the individual gain from optimal unilateral deviation is small uniformly among players and uniformly in their initial characteristics. We establish these results in the context of static models and in the dynamic continuous time setting, and we cover situations where the agents’ criteria depend on the conditional law of the controlled state process.
💡 Research Summary
The paper addresses a fundamental gap in the literature on large‑population symmetric games: while mean‑field game (MFG) solutions are known to generate ε‑Nash equilibria for the corresponding finite‑player games, the convergence is traditionally measured in an average sense (L¹ or in probability). Such results do not preclude rare but potentially large deviations for individual agents. The authors propose a much stronger notion of approximation—an L∞‑type approximate Nash equilibrium—where the gain from any unilateral deviation is uniformly small across all players and for all admissible initial states.
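To fix ideas, the two notions can be contrasted in symbols. The notation below (the gain functional G^N_i, the criteria J^N_i, and the tolerance ε_N) is ours, introduced for illustration, and should be read as a paraphrase of the paper's definitions rather than its exact statements:

```latex
% Conditional gain of player i from an optimal unilateral deviation,
% given the initial state x (notation ours, for illustration):
G^N_i(x) \;=\; \sup_{\beta}\, J^N_i\big(\beta,\alpha^{-i}\,\big|\,X^0_i = x\big)
          \;-\; J^N_i\big(\alpha\,\big|\,X^0_i = x\big).

% Classical (L^1 / in-probability) epsilon-Nash property: the gain is
% small on average over the initial condition, which does not rule out
% rare but large gains for unlucky initial states:
\mathbb{E}\big[\,G^N_i(X^0_i)\,\big] \;\le\; \epsilon_N,
\qquad \epsilon_N \xrightarrow[N\to\infty]{} 0.

% L^\infty-type epsilon-Nash property (the paper's notion): the gain is
% small uniformly in the player index and in the initial state:
\sup_{1\le i\le N}\ \sup_{x\in S}\, G^N_i(x) \;\le\; \epsilon_N.
```

The uniform bound is strictly stronger: it controls the deviation gain of every player at every admissible initial state, not just on average.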
The work proceeds in several stages. First, a one‑period static model is introduced: the state space S is a Polish space, each agent i receives an initial state X⁰_i and an independent uniform random seed ξ_i, and controls are measurable maps α_i on the augmented state space S̄ = S × [0, 1].
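The static setup sketched above can be written out as follows. The action space A, the cost functional F, and the empirical measure μ̄^N below are illustrative placeholders of ours, not the paper's exact objects:

```latex
% One-period static model (our paraphrase; A and F are illustrative):
X^0_i \in S \ \text{(initial state)}, \qquad
\xi_i \sim \mathrm{Unif}[0,1] \ \text{i.i.d., independent of } (X^0_j)_j.

% Controls are measurable maps on the augmented state space:
\alpha_i : \bar S = S \times [0,1] \longrightarrow A,
\qquad a_i = \alpha_i\big(X^0_i,\,\xi_i\big).

% Each player's criterion depends on the empirical state-action profile:
J^N_i(\alpha) \;=\; \mathbb{E}\Big[\, F\big(X^0_i,\, a_i,\, \bar\mu^N\big) \Big],
\qquad \bar\mu^N \;=\; \frac{1}{N}\sum_{j=1}^{N} \delta_{(X^0_j,\,a_j)}.
```

The independent uniform seeds ξ_i allow agents to randomize their actions measurably in their own characteristics, which is what makes the uniform (L∞) comparison across initial states meaningful.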