A Linear Parameter-Varying Framework for the Analysis of Time-Varying Optimization Algorithms
In this paper we propose a framework to analyze iterative first-order optimization algorithms for time-varying convex optimization. We assume that the temporal variability is caused by a time-varying parameter entering the objective, which can be measured at the time of decision but whose future values are unknown. We consider strongly convex objective functions with Lipschitz continuous gradients over a convex constraint set. We model the algorithms as discrete-time linear parameter-varying (LPV) systems in feedback with monotone operators such as the time-varying gradient. We leverage the approach of analyzing algorithms as uncertain control interconnections with integral quadratic constraints (IQCs) and generalize that framework to the time-varying case. We propose novel IQCs that capture the behavior of time-varying nonlinearities and leverage techniques from the LPV literature to establish novel bounds on the tracking error. Quantitative bounds can be computed by solving a semidefinite program and can be interpreted as an input-to-state stability result with respect to a disturbance signal that grows with the temporal variability of the problem. In a departure from existing results in this research area, our bounds depend on additional measures of temporal variation, such as the rates of change of the function value and the gradient. We illustrate our main results with numerical experiments that showcase how our analysis framework captures the convergence rates of different first-order algorithms for time-varying optimization through the choice of IQCs and rate bounds.
💡 Research Summary
This paper introduces a systematic framework for analyzing first‑order iterative optimization algorithms applied to time‑varying convex problems. The authors assume that the temporal variability originates from a measurable but future‑unknown parameter θₖ that enters the objective function f(x,θₖ). The objective is strongly convex with parameter m(θₖ) and has Lipschitz‑continuous gradients with constant L(θₖ), both of which are continuous functions of θₖ and bounded uniformly by m and L. The goal is to track the optimal trajectory xₖ* = arg minₓ f(x,θₖ) while only having access to the current θₖ.
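As a concrete instance of this setup, the sketch below builds a hypothetical time-varying quadratic objective (not from the paper) whose curvature q(θ) plays the role of both m(θₖ) and L(θₖ) and stays inside uniform bounds, and whose minimizer trajectory xₖ* moves with θₖ:

```python
import numpy as np

# Hypothetical time-varying objective f(x, theta) = 0.5 * q(theta) * (x - theta)^2:
# a strongly convex quadratic whose curvature q(theta) and minimizer theta both
# move with the parameter. q(theta) is a continuous function of theta and is
# kept inside [m_bar, L_bar], matching the uniform bounds assumed in the paper.
m_bar, L_bar = 1.0, 10.0

def q(theta):
    # curvature, uniformly bounded: q(theta) in [m_bar, L_bar] for all theta
    return m_bar + (L_bar - m_bar) * 0.5 * (1.0 + np.sin(theta))

def grad_f(x, theta):
    # gradient of f(., theta); it vanishes exactly at the minimizer x*(theta) = theta
    return q(theta) * (x - theta)

thetas = 0.1 * np.arange(50)   # slowly varying, measurable parameter sequence theta_k
x_star = thetas                # optimal trajectory x_k* = argmin_x f(x, theta_k)
```

Here the optimizer trajectory is known in closed form, which makes such toy problems convenient for checking tracking-error bounds numerically.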
The central modeling idea is to represent any first‑order algorithm as a discrete‑time linear parameter‑varying (LPV) system in feedback with a nonlinear oracle that supplies gradient information. Specifically, the algorithm dynamics are written as
ξₖ₊₁ = A(θₖ) ξₖ + B(θₖ) uₖ, yₖ = C(θₖ) ξₖ + D(θₖ) uₖ, uₖ = φₖ(yₖ),
where ξₖ is the internal state, yₖ the signal sent to the oracle, and uₖ the oracle output (typically ∇fₖ(yₖ)). The decision variable xₖ₊₁ is extracted from ξₖ₊₁ by a linear readout. This formulation captures classic methods such as gradient descent, Nesterov’s accelerated gradient, heavy‑ball, and more exotic triple‑momentum schemes, all of which can be expressed by appropriate choices of the time‑varying matrices A,B,C,D and the parameters αₖ,βₖ,γₖ that may depend on θₖ.
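For the scalar-decision case, the sketch below writes out the (A, B, C, D) matrices realizing two of these methods as LPV interconnections with the gradient oracle; the constant step sizes here are an assumption for illustration, whereas in the paper α and β may be scheduled on θₖ:

```python
import numpy as np

def lpv_matrices(method, alpha, beta=0.0):
    """State-space matrices (A, B, C, D) of an algorithm viewed as a linear
    system in feedback with the oracle u_k = grad f_k(y_k). Scalar-decision
    sketch; alpha and beta would in general be functions of theta_k."""
    if method == "gradient":
        # xi_{k+1} = xi_k - alpha * u_k,   y_k = xi_k
        A = np.array([[1.0]]);               B = np.array([[-alpha]])
        C = np.array([[1.0]]);               D = np.array([[0.0]])
    elif method == "heavy_ball":
        # state xi_k = [x_k, x_{k-1}]:
        # x_{k+1} = x_k + beta * (x_k - x_{k-1}) - alpha * u_k,   y_k = x_k
        A = np.array([[1.0 + beta, -beta],
                      [1.0,        0.0]])
        B = np.array([[-alpha], [0.0]])
        C = np.array([[1.0, 0.0]]);          D = np.array([[0.0]])
    else:
        raise ValueError(f"unknown method: {method}")
    return A, B, C, D

def step(A, B, C, D, xi, grad):
    """One iteration of the interconnection: the linear part produces y_k,
    the oracle returns u_k, and the state is advanced."""
    y = C @ xi          # D = 0 for these methods, so y_k depends only on xi_k
    u = grad(y)         # oracle output, typically the current gradient
    return A @ xi + B @ u, y
```

Nesterov's accelerated gradient fits the same template with a different C row (the gradient is evaluated at an extrapolated point), which is exactly the flexibility the (A, B, C, D) parameterization provides.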
To handle the time‑varying nonlinearity, the authors develop new integral quadratic constraints (IQCs) that incorporate the difference between successive gradients, Δgₖ = ∇fₖ − ∇fₖ₊₁. Traditional IQCs (sector, slope‑restricted) describe only static nonlinearities; the proposed "time‑varying IQC" augments them with a residual term proportional to the parameter change Δθₖ, so that the constraint remains valid along time‑varying trajectories.
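The input-to-state flavor of the resulting bounds can be seen already in a toy example (my own illustration, not the paper's SDP machinery): for gradient descent on a quadratic whose minimizer drifts at rate v per step, the asymptotic tracking error scales with v, i.e., with the temporal variability of the problem:

```python
# Toy illustration of the ISS-style tracking bound: gradient descent on
# f_k(x) = 0.5 * q * (x - theta_k)^2 with theta_k = v * k drifting at rate v.
# The error recursion is e_{k+1} = (1 - alpha*q) * e_k - v, so the asymptotic
# tracking error |e| -> v / (alpha * q): zero for a static problem, and
# growing with the drift rate v, like the disturbance term in the paper's bounds.
def tracking_error(v, q=4.0, alpha=0.1, n=500):
    x = 0.0
    err = 0.0
    for k in range(n):
        theta = v * k                      # current parameter value theta_k
        x = x - alpha * q * (x - theta)    # gradient step on f_k
        err = abs(x - v * (k + 1))         # distance to the next optimizer x*_{k+1}
    return err

# errors for increasing drift rates; the sequence is nondecreasing in v
errs = [tracking_error(v) for v in (0.0, 0.01, 0.1)]
```

With α·q = 0.4 the contraction factor is 0.6, so the error settles near v/0.4; the point of the paper's framework is to certify such bounds for general algorithms and nonquadratic objectives via the SDP.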