Accelerated Inertial Gradient Algorithms with Vanishing Tikhonov Regularization

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

In this paper, we study an explicit Tikhonov-regularized inertial gradient algorithm for smooth convex minimization with Lipschitz continuous gradient. The method is derived via an explicit time discretization of a damped inertial system with vanishing Tikhonov regularization. Under appropriate control of the decay rate of the Tikhonov term, we establish accelerated convergence of the objective values to the minimum together with strong convergence of the iterates to the minimum-norm minimizer. In particular, for polynomial schedules $\varepsilon_k = k^{-p}$ with $0<p<2$, we prove strong convergence to the minimum-norm solution while preserving fast objective decay. In the critical case $p=2$, we still obtain fast rates for the objective values, while our analysis does not guarantee strong convergence to the minimum-norm minimizer. Furthermore, we provide a thorough theoretical analysis for several choices of Tikhonov schedules. Numerical experiments on synthetic, benchmark, and real datasets illustrate the practical performance of the proposed algorithm.


💡 Research Summary

The paper addresses the problem of minimizing a smooth convex function f with a Lipschitz‑continuous gradient, where the set of minimizers may contain more than one point. In such cases, standard accelerated methods (e.g., Nesterov’s accelerated gradient) guarantee fast decay of the objective value but do not select the minimum‑norm solution, which is often desirable for stability or regularization reasons.
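To make this concrete, here is a small numerical illustration (not from the paper): a plain Nesterov-style accelerated gradient method on a rank-deficient least-squares problem converges to a minimizer determined by the initialization, not to the minimum-norm one. The matrix, momentum schedule, and step size below are illustrative choices.

```python
import numpy as np

# Toy problem: minimize (1/2)||Ax - b||^2 where A has a nontrivial null space.
# Every point (1, 2, t) is a minimizer; the minimum-norm minimizer is (1, 2, 0).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])

def grad(x):
    return A.T @ (A @ x - b)

s = 1.0                              # step size 1/L, since ||A^T A||_2 = 1
x_prev = np.array([0.0, 0.0, 1.0])   # start with a null-space component of 1
x = x_prev.copy()

# Nesterov-style accelerated gradient, with no Tikhonov term (assumed schedule).
for k in range(1, 2001):
    momentum = k / (k + 3.0)
    y = x + momentum * (x - x_prev)
    x_prev, x = x, y - s * grad(y)

# The objective is minimized, but the null-space component x[2] never moves:
# the limit is (1, 2, 1), not the minimum-norm solution (1, 2, 0).
print(x)
```

Since the gradient vanishes on the null space of A, nothing in the iteration ever changes the third coordinate, which is exactly the non-selection phenomenon described above.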

Motivated by continuous‑time dynamics that combine inertial motion with a vanishing Tikhonov regularization term, the authors propose a fully explicit first‑order algorithm called TRIGA (Tikhonov‑Regularized Inertial Gradient Algorithm). Starting from the second‑order ODE

  ¨x(t) + δ ε(t) ẋ(t) + ∇f(x(t)) + ε(t) x(t) = 0,

they discretize it with a forward Euler scheme for the gradient term and a Nesterov‑type extrapolation for the inertial term. The iteration reads

  yₖ = xₖ + (1 − αₖ)(xₖ − xₖ₋₁),
  xₖ₊₁ = yₖ − s(∇f(yₖ) + εₖ yₖ),

where s > 0 is the step size and εₖ is the Tikhonov parameter at iteration k.
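A minimal runnable sketch of this scheme on a toy rank-deficient least-squares problem, taking the gradient-plus-Tikhonov step to be xₖ₊₁ = yₖ − s(∇f(yₖ) + εₖ yₖ) as suggested by the ODE, and using illustrative parameter choices (s = 1/L, εₖ = 1/k, αₖ = 3/(k + 3)) rather than the paper's exact tuning:

```python
import numpy as np

# Toy problem: minimize (1/2)||Ax - b||^2 with rank-deficient A.
# The minimizer set is the line (1, 2, t); the minimum-norm minimizer is (1, 2, 0).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])
x_min_norm = np.array([1.0, 2.0, 0.0])

def grad(x):
    return A.T @ (A @ x - b)

s = 1.0                                # step size 1/L, since ||A^T A||_2 = 1
x_prev = np.array([0.0, 0.0, 1.0])     # nonzero component in the null space
x = x_prev.copy()

for k in range(1, 10001):
    eps_k = 1.0 / k                    # vanishing Tikhonov schedule, p = 1
    alpha_k = 3.0 / (k + 3.0)          # Nesterov-style weight (assumed form)
    y = x + (1.0 - alpha_k) * (x - x_prev)
    x_prev, x = x, y - s * (grad(y) + eps_k * y)   # gradient + Tikhonov step

print(x)  # approaches the minimum-norm minimizer (1, 2, 0)
```

Unlike the plain accelerated method, the vanishing term εₖ yₖ keeps shrinking the null-space component of the iterates, so the sequence is pulled toward the minimum-norm solution while the objective still decays quickly.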

