Global optimization of low-rank polynomials

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

This work considers polynomial optimization problems where the objective admits a low-rank canonical polyadic tensor decomposition. We introduce LRPOP (low-rank polynomial optimization), a new hierarchy of semidefinite programming relaxations in which the size of the semidefinite blocks is determined by the canonical polyadic rank rather than the number of variables. As a result, LRPOP can solve low-rank polynomial optimization problems that are far beyond the reach of existing sparse hierarchies. In particular, we solve problems with thousands of variables and total degree in the thousands. Numerical conditioning for problems of this size is improved by using the Bernstein basis. The LRPOP hierarchy converges from below to the global minimum of the polynomial under standard assumptions.


💡 Research Summary

This paper introduces a novel computational framework called LRPOP (Low-Rank Polynomial Optimization) for solving large-scale polynomial optimization problems (POPs) where the objective function possesses a low-rank tensor structure. The central idea is to exploit the fact that a polynomial can be viewed as a multivariate tensor, and if it admits a low-rank Canonical Polyadic (CP) decomposition—meaning it can be expressed as a sum of a small number (r) of products of univariate polynomials—this structure can be leveraged to dramatically reduce computational complexity.
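The low-rank structure described above can be sketched concretely. The toy polynomial below (a rank-2 sum of products of univariate factors in 4 variables) is our own illustrative choice, not an example from the paper; it shows how evaluation costs only O(r·n) univariate evaluations regardless of how many monomials the expanded polynomial would have.

```python
import numpy as np

# Hypothetical rank-2 polynomial in n = 4 variables (our own toy example):
# f(x) = prod_i (1 + x_i) + prod_i (x_i^2 - x_i).
n, r = 4, 2

# Each factor f_{l,i} is a univariate polynomial, stored as a coefficient
# array in the monomial basis (numpy.polynomial convention: low -> high degree).
factors = [
    [np.array([1.0, 1.0]) for _ in range(n)],        # 1 + x_i
    [np.array([0.0, -1.0, 1.0]) for _ in range(n)],  # x_i^2 - x_i
]

def eval_low_rank(factors, x):
    """Evaluate f(x) = sum_l prod_i f_{l,i}(x_i) directly from the CP factors."""
    total = 0.0
    for fl in factors:
        prod = 1.0
        for f_li, xi in zip(fl, x):
            prod *= np.polynomial.polynomial.polyval(xi, f_li)
        total += prod
    return total

x = np.array([0.5, -1.0, 2.0, 0.25])
print(eval_low_rank(factors, x))  # 0.1875
```

Note that expanding this f into the monomial basis would produce a dense degree-8 polynomial in 4 variables, while the factored form stores only 2·4 short coefficient vectors.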

The authors begin by highlighting the scalability issue of the standard moment-sum-of-squares (SOS) hierarchy, where the size of the semidefinite programming (SDP) matrices grows combinatorially with the number of variables (n). While existing sparse hierarchies like CSSOS (exploiting correlative sparsity) and TSSOS (exploiting term sparsity) offer improvements for problems with localized variable interactions or few monomials, they are ineffective for dense polynomials that nonetheless have a low CP rank.
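The combinatorial growth mentioned above is easy to quantify: the dense moment matrix at relaxation order d has side length C(n+d, d). A quick computation (our own illustration) shows why the standard hierarchy stalls at modest n even for order d = 2.

```python
from math import comb

# Dense moment-SOS relaxation: the PSD block has side length C(n+d, d),
# where n is the number of variables and d the relaxation order.
def dense_block_size(n, d):
    return comb(n + d, d)

for n in (10, 50, 100, 1000):
    print(n, dense_block_size(n, 2))
```

At n = 1000 and d = 2 the dense block already has side length 501501, far beyond what SDP solvers can handle.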

The key innovation of LRPOP is a reformulation. Given a low-rank polynomial f(x) = Σ_{l=1}^r Π_{i=1}^n f_{l,i}(x_i), the method introduces auxiliary variables t_{l,i} to sequentially encode the products. This transforms the original unconstrained POP into a new optimization problem over both x and t, subject to equality constraints that define the t variables. Crucially, the correlative sparsity graph of this new problem has a highly structured, “chain-like” pattern connecting the t and x variables.
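The sequential encoding can be sketched numerically. The chain recursion below follows the description above (t_{l,1} = f_{l,1}(x_1), then t_{l,i} = t_{l,i-1}·f_{l,i}(x_i)); the toy rank-2 polynomial is our own, and the paper's actual formulation imposes these relations as equality constraints in the lifted POP rather than evaluating them.

```python
import numpy as np

def lift(factors, x):
    """Return auxiliary variables t[l][i] satisfying the chain constraints
    t_{l,1} = f_{l,1}(x_1) and t_{l,i} = t_{l,i-1} * f_{l,i}(x_i)."""
    t = []
    for fl in factors:
        chain = [np.polynomial.polynomial.polyval(x[0], fl[0])]
        for i in range(1, len(fl)):
            chain.append(chain[-1] * np.polynomial.polynomial.polyval(x[i], fl[i]))
        t.append(chain)
    return t

# Toy rank-2 example in 3 variables: f(x) = prod_i (1 + x_i) + prod_i x_i.
factors = [
    [np.array([1.0, 1.0])] * 3,  # 1 + x_i
    [np.array([0.0, 1.0])] * 3,  # x_i
]
x = np.array([1.0, 2.0, 3.0])
t = lift(factors, x)
objective = sum(chain[-1] for chain in t)
print(objective)  # equals f(x) = 2*3*4 + 1*2*3 = 30
```

The point of the lifting is that each equality constraint couples only t_{l,i}, t_{l,i-1}, and x_i, which is exactly what produces the chain-like correlative sparsity graph.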

By performing a chordal extension and clique decomposition on this graph, the authors show that the maximal clique size is bounded by a function of the rank r, independent of the original number of variables n. This allows them to construct a new moment-SOS hierarchy where each SDP block corresponds to a small clique in this decomposition, rather than to all variables. The size of these blocks scales with O( (r+d choose d) ), where d is the relaxation order, making the complexity linear in n instead of exponential.
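The gap between the two hierarchies is striking even at small relaxation order. Using the O((r+d choose d)) per-block bound stated above (the exact clique contents depend on the chordal extension, so this is only a back-of-the-envelope comparison):

```python
from math import comb

# Per-block side length in LRPOP scales as C(r+d, d), versus C(n+d, d)
# for the dense moment matrix; the number of LRPOP blocks grows linearly
# in n, but each block stays small and independent of n.
n, r, d = 1000, 3, 2
print("dense block:", comb(n + d, d))  # 501501
print("LRPOP block:", comb(r + d, d))  # 10
```

So for a rank-3 polynomial in 1000 variables at order 2, LRPOP trades one intractable 501501-sized block for on the order of n tiny blocks.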

The paper provides a rigorous theoretical foundation, proving that under a sparse Archimedean condition, the LRPOP hierarchy produces a sequence of lower bounds that converge monotonically to the global minimum of the original polynomial. To handle numerical conditioning for high-degree polynomials involved in large-scale problems, the authors advocate using the Bernstein polynomial basis on bounded intervals, which offers better stability than the standard monomial basis.
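The stability rationale behind the Bernstein basis can be illustrated with de Casteljau's algorithm, the standard numerically stable evaluation scheme for Bernstein-form polynomials (the example polynomial below is our own; how LRPOP handles the basis internally is detailed in the paper).

```python
import numpy as np

# Bernstein basis on [0, 1]: B_{k,m}(x) = C(m, k) x^k (1-x)^(m-k).
# De Casteljau's algorithm evaluates a Bernstein-form polynomial by
# repeated convex combinations, which keeps intermediate values bounded.
def de_casteljau(bernstein_coeffs, x):
    """Evaluate a polynomial given by its Bernstein coefficients at x in [0, 1]."""
    b = np.array(bernstein_coeffs, dtype=float)
    while len(b) > 1:
        b = (1.0 - x) * b[:-1] + x * b[1:]
    return b[0]

# Sanity check: p(x) = x^2 has degree-2 Bernstein coefficients [0, 0, 1],
# since B_{2,2}(x) = x^2.
print(de_casteljau([0.0, 0.0, 1.0], 0.5))  # 0.25
```

Because every step is a convex combination of the coefficients, the Bernstein representation avoids the large alternating-sign cancellations that make the monomial basis ill-conditioned at high degree.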

Numerical experiments demonstrate the breakthrough scalability of LRPOP. The method successfully solves problems with thousands of variables and total degree in the thousands, provided the polynomial has a small CP rank (e.g., r=2 or 3). Such problems are entirely intractable for existing sparse SOS hierarchies. The use of the Bernstein basis is shown to be crucial for obtaining reliable solutions at these scales.

In conclusion, LRPOP establishes “low-rank sparsity” as a new and powerful paradigm in polynomial optimization, complementing existing sparsity concepts. It opens the door to solving a class of extremely large-scale POPs that were previously considered impossible. The paper concludes by outlining promising extensions, such as adapting the framework to other tensor formats (e.g., Tucker or Tensor Train) and applying it to rational optimization problems.

