Algorithms and Differential Game Representations for Exploring Nonconvex Pareto Fronts in High Dimensions
We develop a new Hamilton-Jacobi (HJ) and differential game approach for exploring the Pareto front of (constrained) multi-objective optimization (MOO) problems. Given a preference function, we embed the scalarized MOO problem into the value function of a parameterized zero-sum game, whose upper value solves a first-order HJ equation that admits a Hopf-Lax representation formula. For each parameter value, this representation yields an inner minimizer that can be interpreted as an approximate solution to a shifted scalarization of the original MOO problem. Under mild assumptions, the resulting family of solutions maps to a dense subset of the weak Pareto front. Finally, we propose a primal-dual algorithm based on this approach for solving the corresponding optimality system. Numerical experiments show that our algorithm mitigates the curse of dimensionality (scaling polynomially with the dimension of the decision and objective spaces) and is able to expose continuous curves along nonconvex Pareto fronts in 100D in just $\sim$100 seconds.
💡 Research Summary
The paper introduces a novel framework that leverages Hamilton‑Jacobi (HJ) theory and zero‑sum differential games to explore Pareto fronts of constrained multi‑objective optimization (MOO) problems, especially when the front is non‑convex and the decision space is high‑dimensional. Traditional weighted‑sum scalarization can only recover points on the convex hull of the Pareto set, leaving non‑convex regions unexplored, while Chebyshev‑type scalarizations can reach non‑convex regions but still require solving each scalarized problem from scratch. To overcome these limitations, the authors first define a smooth, monotone preference function g : ℝᴺ → ℝ; minimizing the shifted composite g(ℓ(u)+E) yields weakly Pareto‑optimal points for appropriate shift vectors E. Rather than solving each scalarized problem independently, they embed the whole family of scalarizations into a parameterized zero‑sum game. The upper value V⁺ of this game satisfies a first‑order HJ equation whose Hamiltonian H⁺ is a min‑max over the control and disturbance variables of the underlying dynamics.
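As a concrete, hypothetical illustration of this contrast (the objectives, weights, and ideal point below are illustrative choices, not taken from the paper), the sketch compares weighted-sum scalarization against a monotone max-type (Chebyshev) preference g(y) = maxᵢ wᵢ(yᵢ − zᵢ) on a toy two-objective problem whose Pareto front is concave: weighted sums only ever land on the endpoints, while the nonlinear preference sweeps out interior points.

```python
import numpy as np

# Two objectives with a nonconvex (concave) Pareto front on u in [0, 1]:
# f1(u) = u, f2(u) = 1 - u^2.  Every u in [0, 1] is Pareto-optimal.
def objectives(u):
    return np.array([u, 1.0 - u**2])

grid = np.linspace(0.0, 1.0, 2001)
F = np.stack([objectives(u) for u in grid])  # shape (2001, 2)

# Weighted-sum scalarization: min_u  w*f1 + (1-w)*f2.
# On a concave front the scalarized objective is concave in u,
# so the minimizer is always an endpoint of the front.
ws_solutions = set()
for w in np.linspace(0.05, 0.95, 19):
    idx = np.argmin(w * F[:, 0] + (1 - w) * F[:, 1])
    ws_solutions.add(round(grid[idx], 3))

# Max-type (Chebyshev) preference: min_u  max_i w_i * (f_i - z_i),
# with z the ideal point (0, 0).  The minimizer sits where the two
# weighted objectives cross, which traces interior front points as w varies.
cheb_solutions = set()
z = np.array([0.0, 0.0])
for w in np.linspace(0.05, 0.95, 19):
    idx = np.argmin(np.maximum(w * (F[:, 0] - z[0]),
                               (1 - w) * (F[:, 1] - z[1])))
    cheb_solutions.add(round(grid[idx], 3))

print(sorted(ws_solutions))  # only the endpoints: [0.0, 1.0]
print(len(cheb_solutions))   # many distinct interior points
```

Varying the weight in the max-type preference already traces a curve through the nonconvex region; the paper's contribution is to generate such families of solutions without re-solving each scalarization independently.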
A key theoretical contribution is the Hopf‑Lax representation of V⁺: for any time horizon α, V⁺(x,τ,α) equals the infimum over admissible control trajectories u(·) of the supremum over admissible parameter trajectories λ(·) of an integral functional that combines the linear pairing ⟨λ,ℓ(u)⟩, the convex conjugate g⁎(λ), and the terminal cost g∘ℓ(u(α)). The inner minimizer u_α can be interpreted as an approximate solution to a shifted scalarization problem g(ℓ(u)+E_α), where the shift E_α is explicitly controlled by a Bregman divergence term. As the parameter sequences (λ, E) converge to cluster points where the regularization error vanishes, the corresponding saddle‑point solves the original scalarization, thereby generating weak Pareto‑optimal points. By varying the horizon α continuously, the method traces continuous curves along the Pareto front, even through non‑convex regions.
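The appearance of both the pairing ⟨λ,ℓ(u)⟩ and the conjugate g⁎(λ) in this representation can be traced to the Fenchel–Moreau identity: for a convex, lower-semicontinuous preference function g,

```latex
g(y) \;=\; \sup_{\lambda}\,\bigl\{ \langle \lambda, y \rangle - g^{*}(\lambda) \bigr\},
\qquad\text{so that}\qquad
\min_{u}\, g\bigl(\ell(u)\bigr)
\;=\; \min_{u}\, \sup_{\lambda}\,\bigl\{ \langle \lambda, \ell(u) \rangle - g^{*}(\lambda) \bigr\}.
```

This rewrites each scalarized problem as a min–sup (saddle-point) problem in (u, λ), which is the schematic origin of the game structure; the paper's actual representation additionally integrates these terms along trajectories and controls the shift through the Bregman divergence term.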
On the algorithmic side, the authors derive the optimality system associated with the inner minimization and formulate a primal‑dual scheme. Each iteration alternates between a primal update (solving a simple convex subproblem in u) and a dual update (updating the multiplier p and the shift E). The updates are expressed in closed form using the Fenchel‑Moreau conjugacy of g, which keeps per‑iteration cost linear in the decision dimension d. Convergence is established via the Kurdyka‑Łojasiewicz (KL) property of the value function, guaranteeing global convergence and providing explicit rates (linear or sub‑linear depending on the KL exponent). Importantly, the overall computational complexity scales polynomially in both the number of objectives N and the decision dimension d, thereby mitigating the curse of dimensionality that plagues grid‑based dynamic programming or exhaustive scalarization.
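The paper's exact updates are not reproduced in this summary; as a hedged sketch of the conjugacy-driven alternation it describes, the snippet below solves a smoothed Chebyshev scalarization of a toy two-objective problem min_u max(ℓ₁(u), ℓ₂(u)). The dual variable λ is updated in closed form as a softmax (the maximizer of an entropy-regularized conjugate pairing), and the primal variable u takes a plain gradient step; the objectives, smoothing level, and step size are illustrative assumptions, not the paper's.

```python
import numpy as np

# Toy objectives (illustrative, not from the paper): l(u) = (u, 1 - u^2) on [0, 1].
# The unsmoothed problem min_u max(l1, l2) is solved where u = 1 - u^2,
# i.e. at u* = (sqrt(5) - 1) / 2 ~ 0.618.
def ell(u):
    return np.array([u, 1.0 - u**2])

def ell_jac(u):
    return np.array([1.0, -2.0 * u])  # d l_i / d u

eps, tau = 0.1, 0.05   # entropic smoothing and primal step size (assumed values)
u = 0.3                # primal iterate
for _ in range(2000):
    # Dual update in closed form:
    # lam = argmax over the simplex of <lam, l(u)> - eps * sum(lam * log(lam)),
    # i.e. a softmax of l(u) / eps (computed stably by shifting the max).
    z = ell(u) / eps
    z -= z.max()
    lam = np.exp(z) / np.exp(z).sum()
    # Primal update: gradient step on <lam, l(u)> with lam frozen.
    u = np.clip(u - tau * lam @ ell_jac(u), 0.0, 1.0)

print(u)  # close to the minimax point u* ~ 0.618 (up to O(eps) smoothing bias)
```

Both updates here are cheap and dimension-friendly (the dual step is a closed-form softmax, the primal step a single gradient evaluation), which mirrors, at sketch level, why a conjugacy-based primal-dual scheme can keep per-iteration cost linear in the decision dimension.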
Numerical experiments validate the theory. The authors consider a 5‑objective problem with d = 100 decision variables and a highly non‑convex Pareto front. Their primal‑dual algorithm discovers continuous Pareto curves, including non‑convex segments that weighted‑sum methods miss, in roughly 100 seconds on a standard workstation. Additional tests on constrained MOO formulations demonstrate that the approach extends naturally to problems with inequality constraints, preserving the same polynomial scaling.
In summary, the paper provides a mathematically rigorous and computationally efficient method for high‑dimensional, non‑convex Pareto front exploration. By unifying preference‑based scalarization, differential game theory, and HJ PDEs, it opens a new avenue for global multi‑objective analysis that overcomes both non‑convexity and dimensionality barriers, offering a valuable tool for fields such as robotics, aerospace design, and data‑driven modeling where trade‑offs among many criteria must be understood.