Finding Mixed Nash Equilibria of Generative Adversarial Networks

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

We reconsider the training objective of Generative Adversarial Networks (GANs) from the mixed Nash Equilibria (NE) perspective. Inspired by the classical prox methods, we develop a novel algorithmic framework for GANs via an infinite-dimensional two-player game and prove rigorous convergence rates to the mixed NE, resolving the longstanding problem that no provably convergent algorithm exists for general GANs. We then propose a principled procedure to reduce our novel prox methods to simple sampling routines, leading to practically efficient algorithms. Finally, we provide experimental evidence that our approach outperforms methods that seek pure strategy equilibria, such as SGD, Adam, and RMSProp, both in speed and quality.


💡 Research Summary

The paper revisits the training objective of Generative Adversarial Networks (GANs) from the perspective of mixed Nash equilibria (NE) rather than the traditional pure‑strategy equilibrium. The authors argue that seeking a pure‑strategy saddle point is often ill‑posed for GANs—pure equilibria may not exist, can be degenerate, or are unreachable by existing optimization methods. By allowing each player (generator and discriminator) to randomize over their parameter space, the training problem becomes a min‑max game over probability measures on the parameter sets.

Formally, the standard Wasserstein-GAN objective

\[
\min_{\theta} \max_{w}\; \mathbb{E}_{x \sim P_{\mathrm{data}}}\big[f_w(x)\big] - \mathbb{E}_{z \sim P_z}\big[f_w(G_\theta(z))\big]
\]

is lifted to an infinite-dimensional game over probability measures: the generator plays a distribution \(\mu\) over its parameters \(\theta\), the discriminator a distribution \(\nu\) over its parameters \(w\), and the objective becomes

\[
\min_{\mu} \max_{\nu}\; \mathbb{E}_{\theta \sim \mu,\; w \sim \nu}\Big[\, \mathbb{E}_{x \sim P_{\mathrm{data}}}[f_w(x)] - \mathbb{E}_{z \sim P_z}[f_w(G_\theta(z))] \,\Big].
\]

Crucially, this lifted game is bilinear in \((\mu, \nu)\), so a mixed Nash equilibrium exists under mild compactness conditions, which is what makes rigorous convergence rates attainable for prox-type methods.
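The convergence phenomenon the paper exploits can be illustrated on a finite toy game (this is not the paper's infinite-dimensional algorithm, just a minimal sketch of the same principle): entropic mirror descent, i.e. multiplicative-weights updates, applied to a bilinear zero-sum game. The individual iterates cycle around the equilibrium, but their time average converges to the mixed NE. All names and the step size below are illustrative choices.

```python
import numpy as np

# Matching pennies: no pure-strategy equilibrium exists, but the
# unique mixed Nash equilibrium is (1/2, 1/2) for both players.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

rng = np.random.default_rng(0)
x = np.array([0.7, 0.3])   # row player's mixed strategy (minimizer)
y = np.array([0.4, 0.6])   # column player's mixed strategy (maximizer)
x_avg = np.zeros(2)
y_avg = np.zeros(2)
eta = 0.1                  # constant step size (assumed; small values work)
T = 5000

for t in range(T):
    # Gradients of the bilinear payoff x^T A y with respect to each player.
    gx = A @ y             # minimizer descends along gx
    gy = A.T @ x           # maximizer ascends along gy
    # Entropic prox (mirror) step = multiplicative weights + renormalization.
    x = x * np.exp(-eta * gx); x /= x.sum()
    y = y * np.exp(+eta * gy); y /= y.sum()
    x_avg += x
    y_avg += y

x_avg /= T
y_avg /= T
print(np.round(x_avg, 3), np.round(y_avg, 3))  # both close to [0.5, 0.5]
```

The averaged iterates approximate the mixed NE even though the last iterates do not converge; the paper's contribution is, roughly, carrying this prox-method guarantee over to distributions on continuous GAN parameter spaces, where the prox step is implemented by a sampling routine rather than an explicit normalization.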

