EM algorithms for optimization problems with polynomial objectives

Notice: This research summary and analysis were automatically generated using AI technology. For accuracy, please refer to the original arXiv source.

The EM (Expectation-Maximization) algorithm can be regarded as an MM (Majorization-Minimization) algorithm for maximum likelihood estimation of statistical models. Expanding this view, this paper demonstrates that, by choosing an appropriate probability distribution, even a nonstatistical optimization problem can be cast as a negative-log-likelihood-like minimization problem, which can then be approached with an EM (or MM) algorithm. When a polynomial objective is optimized over a simple polyhedral feasible set and an exponential family distribution is employed, the EM algorithm reduces to natural gradient descent on the parameters of the employed distribution with a constant step size; this is demonstrated through three examples. In this paper, we demonstrate global convergence in a general form for specific cases with some exponential family distributions. When the feasible set is not simple enough for this reduction, MM algorithms can still be derived. When the objective is a convex quadratic function and the constraints are polyhedral, global convergence can also be established from existing results on an entropy-like proximal point algorithm.
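As a toy sketch of the last case the abstract mentions (all problem data and the step size below are assumed for illustration, not taken from the paper), minimizing a convex quadratic over a polyhedral set can be handled by an entropy-like proximal scheme; on the probability simplex this takes the familiar multiplicative, exponentiated-gradient form:

```python
import numpy as np

# Toy problem: minimize f(x) = 0.5 x^T Q x + c^T x over the probability
# simplex {x >= 0, sum(x) = 1}. Q and c are an assumed example, chosen so
# the minimizer is the vertex (1, 0) with optimal value 0.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite
c = np.array([-1.0, 0.5])

def f(x):
    return 0.5 * x @ Q @ x + c @ x

def grad(x):
    return Q @ x + c

x = np.array([0.5, 0.5])     # start at the simplex barycenter
eta = 0.5                    # constant step size (assumed, for illustration)
for _ in range(200):
    # Entropy-like (KL) proximal step on the simplex reduces to a
    # multiplicative update followed by normalization:
    #   x_{k+1,i} ∝ x_{k,i} * exp(-eta * [grad f(x_k)]_i)
    x = x * np.exp(-eta * grad(x))
    x = x / x.sum()
```

The normalization keeps every iterate strictly inside the simplex, so no projection is needed; the iterates drift toward the optimal vertex (1, 0).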


💡 Research Summary

This paper presents a unified framework that casts deterministic optimization problems with polynomial objectives into a negative‑log‑likelihood form and solves them using Expectation–Maximization (EM) or, equivalently, Majorization–Minimization (MM) algorithms. The key observation is that if a function f(θ) can be expressed as
 f(θ) = −log E_{p_θ}[g(X)]
for some nonnegative function g and a parametric family p_θ, then Jensen's inequality yields a surrogate that majorizes f(θ) and touches it at the current parameter, and minimizing this surrogate is exactly an EM update.
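As a hedged numeric illustration (a toy categorical example, not taken from the paper), the EM/Jensen surrogate for f(θ) = −log E_{p_θ}[g(X)] can be checked to lie above f everywhere and coincide with it at the current parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([1.0, 2.0, 0.5])  # nonnegative weights g(x) (toy choice)

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

def f(theta):
    # f(theta) = -log E_{p_theta}[g(X)] with categorical p_theta = softmax(theta)
    return -np.log(softmax(theta) @ g)

def surrogate(theta, theta0):
    # EM/Jensen majorizer: with the "posterior" q(x) ∝ g(x) p_{theta0}(x),
    #   Q(theta | theta0) = -sum_x q(x) log( g(x) p_theta(x) / q(x) ) >= f(theta),
    # with equality at theta = theta0.
    p0 = softmax(theta0)
    q = g * p0 / (g @ p0)
    return -(q @ np.log(g * softmax(theta) / q))

theta0 = np.array([0.1, -0.2, 0.3])
# Smallest gap Q - f over random spot-checks; majorization means it is >= 0.
gap_min = min(surrogate(th, theta0) - f(th)
              for th in rng.normal(size=(200, 3)))
```

Minimizing the surrogate over θ, with q held fixed, is the M-step; recomputing q at the new θ is the E-step, and together they can only decrease f.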

