D-ripALM: A Tuning-friendly Decentralized Relative-Type Inexact Proximal Augmented Lagrangian Method
This paper proposes D-ripALM, a Decentralized relative-type inexact proximal Augmented Lagrangian Method for consensus convex optimization over multi-agent networks. D-ripALM adopts a double-loop distributed optimization framework that accommodates a wide range of inner solvers, enabling efficient treatment of both smooth and nonsmooth objectives. In contrast to existing double-loop distributed augmented Lagrangian methods, D-ripALM employs a relative-type error criterion to regulate the switching between inner and outer iterations, resulting in a more practical and tuning-friendly algorithmic framework with enhanced numerical robustness. Moreover, we establish rigorous convergence guarantees for D-ripALM under general convexity assumptions, without requiring smoothness or strong convexity conditions commonly imposed in the distributed optimization literature. Numerical experiments further demonstrate the tuning-friendly nature of D-ripALM and its efficiency in attaining high-precision solutions with fewer communication rounds.
💡 Research Summary
The paper introduces D‑ripALM, a decentralized relative‑type inexact proximal augmented Lagrangian method designed for consensus convex optimization over multi‑agent networks. The authors start by formulating the standard consensus problem: each agent i holds a local convex (possibly nonsmooth) function f_i: ℝᵈ → ℝ∪{+∞}, and the goal is to minimize the sum ∑ᵢ₌₁ⁿ f_i(x) over a common decision variable x. By replicating x locally as (x₁, …, xₙ) and imposing the equality constraint x₁ = … = xₙ, the problem is rewritten as a linearly constrained convex program. The constraint can be expressed as Zx = 0, where Z = (I−W)⊗I_d and W is a symmetric, doubly stochastic mixing matrix associated with the communication graph.
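As a concrete illustration of this reformulation (the 4-agent ring and its mixing weights below are toy choices for this sketch, not the paper's experiments), the operator Z = (I − W) ⊗ I_d annihilates exactly the consensual stacked vectors:

```python
import numpy as np

n, d = 4, 3
rng = np.random.default_rng(0)

# Hypothetical 4-agent ring: W is symmetric and doubly stochastic
# (each agent averages itself with its two neighbours).
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])

# Consensus operator Z = (I - W) kron I_d.
Z = np.kron(np.eye(n) - W, np.eye(d))

# A consensual stacked vector x = (x1, ..., xn) with x1 = ... = xn
# lies in the null space of Z, so Zx = 0 encodes the consensus constraint.
x_consensus = np.tile(rng.standard_normal(d), n)
print(np.allclose(Z @ x_consensus, 0.0))  # True

# A generic (non-consensual) stacked vector is not annihilated.
x_mixed = rng.standard_normal(n * d)
print(np.linalg.norm(Z @ x_mixed) > 1e-8)  # True
```

Because the ring is connected, the null space of I − W is spanned by the all-ones vector, so Zx = 0 holds if and only if all local copies agree.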
Traditional distributed algorithms fall into two families. Single‑loop methods (e.g., DGD, EXTRA, gradient‑tracking) are cheap per iteration but require smoothness and diminishing step sizes for exact convergence. Double‑loop augmented Lagrangian (ALM) schemes solve a penalized Lagrangian subproblem repeatedly (inner loop) and update the multiplier (outer loop). While they guarantee exact convergence with a fixed penalty, they suffer from a critical practical issue: how many inner iterations should be performed, or how to schedule an absolute tolerance for the subproblem. Existing works either fix the inner iteration count or use a pre‑specified summable tolerance sequence, both of which demand extensive hand‑tuning and are highly problem‑dependent.
D‑ripALM resolves this difficulty by importing the relative‑type error criterion from the centralized ripALM framework. At outer iteration k the algorithm solves the proximal‑augmented Lagrangian subproblem
Ψ_k(x) = F(x) + ⟨Ω_k, x⟩ + (σ_k/2)⟨x, Zx⟩ + (τ_k/(2σ_k))‖x−x_k‖²,
where F(x) = ∑ᵢ f_i(x_i) and Ω_k = √Z y_k is a transformed dual variable; by tracking Ω_k directly, the algorithm never needs to form √Z explicitly. The additional proximal term (τ_k/(2σ_k))‖x−x_k‖² makes the subproblem strongly convex, improving its conditioning and allowing any first‑order or splitting method to be employed as the inner solver.
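A minimal sketch of one inner loop, assuming smooth quadratic local objectives f_i(x_i) = ½‖A_i x_i − b_i‖² so that plain gradient descent can serve as the inner solver (the framework equally admits nonsmooth f_i with splitting-type solvers); all data, parameter values, and names here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3

# Illustrative smooth local objectives f_i(x_i) = 0.5*||A_i x_i - b_i||^2.
A = [rng.standard_normal((6, d)) for _ in range(n)]
b = [rng.standard_normal(6) for _ in range(n)]

# 4-agent ring with a symmetric doubly stochastic mixing matrix W.
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])
Z = np.kron(np.eye(n) - W, np.eye(d))

def grad_F(x):
    """Stacked gradient of F(x) = sum_i f_i(x_i)."""
    blocks = x.reshape(n, d)
    return np.concatenate([A[i].T @ (A[i] @ blocks[i] - b[i]) for i in range(n)])

def grad_Psi(x, x_k, Omega_k, sigma, tau):
    """Gradient of Psi_k(x) = F(x) + <Omega_k, x> + (sigma/2)<x, Zx>
    + (tau/(2*sigma))*||x - x_k||^2."""
    return grad_F(x) + Omega_k + sigma * (Z @ x) + (tau / sigma) * (x - x_k)

# One inner loop at illustrative parameter values: gradient descent on the
# strongly convex Psi_k (modulus at least tau/sigma from the proximal term).
sigma, tau = 1.0, 1.0
x_k = np.zeros(n * d)
Omega_k = np.zeros(n * d)

# Crude Lipschitz bound for grad_Psi, giving a safe step size 1/L.
L = (max(np.linalg.norm(A[i], 2) ** 2 for i in range(n))
     + sigma * np.linalg.norm(Z, 2) + tau / sigma)
x = x_k.copy()
for _ in range(500):
    x -= (1.0 / L) * grad_Psi(x, x_k, Omega_k, sigma, tau)

print(np.linalg.norm(grad_Psi(x, x_k, Omega_k, sigma, tau)) < 1e-6)  # True
```

The proximal term guarantees a strong-convexity modulus of at least τ_k/σ_k regardless of the f_i, which is what lets a simple fixed-step method like this converge linearly on the subproblem.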
The inner solver returns an approximate solution x_{k+1} together with a residual Δ_{k+1} that satisfies
⟨w_k−x_{k+1}, σ_k Δ_{k+1}⟩ + ‖σ_k Δ_{k+1}‖² ≤ ρ (‖σ_k √Z x_{k+1}‖² + τ_k‖x_{k+1}−x_k‖²),
with a user‑chosen relative tolerance ρ ∈ (0, 1). Because the right‑hand side is built from quantities the iteration already computes, namely the primal residual ‖√Z x_{k+1}‖ and the proximal movement ‖x_{k+1}−x_k‖, the criterion adapts automatically across outer iterations and eliminates the hand‑tuned absolute tolerance schedules required by earlier double‑loop schemes.
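The relative-error test above can be sketched as a simple predicate; the helper name and the 2-agent data are illustrative, not the paper's API. Note that ‖σ_k √Z x‖² = σ_k² xᵀZx, so √Z is never formed explicitly:

```python
import numpy as np

def relative_criterion_holds(w_k, x_next, x_k, delta, sigma, tau, rho, Z):
    """Hypothetical check of the relative-type inexactness criterion:
    <w_k - x_next, sigma*delta> + ||sigma*delta||^2
        <= rho * (sigma^2 * x_next^T Z x_next + tau * ||x_next - x_k||^2)."""
    lhs = (w_k - x_next) @ (sigma * delta) + np.dot(sigma * delta, sigma * delta)
    rhs = rho * (sigma ** 2 * (x_next @ (Z @ x_next))
                 + tau * np.dot(x_next - x_k, x_next - x_k))
    return lhs <= rhs

# Sanity check on a 2-agent scalar example.
Z = np.array([[0.5, -0.5], [-0.5, 0.5]])  # I - W for W = pairwise averaging
x_k = np.array([1.0, -1.0])
x_next = np.array([0.4, -0.2])
w_k = x_k  # illustrative choice for the auxiliary point w_k

# An exact inner solve (delta = 0) always passes: the left side vanishes
# while the right side is nonnegative because Z is positive semidefinite.
print(relative_criterion_holds(w_k, x_next, x_k, np.zeros(2), 1.0, 1.0, 0.5, Z))  # True

# A large residual fails the test, triggering further inner iterations.
print(relative_criterion_holds(w_k, x_next, x_k, np.full(2, 10.0), 1.0, 1.0, 0.5, Z))  # False
```

In a double-loop run, the inner solver would simply iterate until this predicate returns True, then hand control back to the outer multiplier update.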