Skip the Hessian, Keep the Rates: Globalized Semismooth Newton with Lazy Hessian Updates

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Second-order methods are provably faster than first-order methods, and their efficient implementations for large-scale optimization problems have attracted significant attention. Yet, optimization problems in ML often have nonsmooth derivatives, which makes the existing convergence rate theory of second-order methods inapplicable. In this paper, we propose a new semismooth Newton method (SSN) that enjoys both global convergence rates and asymptotic superlinear convergence without requiring second-order differentiability. Crucially, our method does not require (generalized) Hessians to be evaluated at each iteration but only periodically, and it reuses stale Hessians otherwise (i.e., it performs lazy Hessian updates), saving compute cost and often leading to significant speedups in time, whilst still maintaining strong global and local convergence rate guarantees. We develop our theory in an infinite-dimensional setting and illustrate it with numerical experiments on matrix factorization and neural networks with Lipschitz constraints.


💡 Research Summary

The paper introduces a novel semismooth Newton (SSN) algorithm, named GLAd‑SSN (Globalized Lazy Adaptive Semismooth Newton), that simultaneously achieves global convergence rates and local superlinear (or γ‑order) convergence for composite optimization problems with nonsmooth components, without requiring the Hessian to be evaluated at every iteration. The authors consider composite problems of the form
\[
\min_{x \in \mathcal{H}} \; f(x) + g(x),
\]
where \( \mathcal{H} \) is a (possibly infinite-dimensional) Hilbert space, \( f \) is smooth, and \( g \) may be nonsmooth.
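The core idea of lazy Hessian updates can be illustrated with a small numerical sketch. The example below is not the paper's GLAd‑SSN algorithm; it is a minimal, self-contained toy in which a semismooth stationarity equation (a quadratic plus a Huber penalty, whose gradient is piecewise linear and hence semismooth) is solved by Newton steps that recompute the generalized Hessian only every `period` iterations and reuse the stale one in between, with a simple residual-based backtracking line search standing in for globalization. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def huber_grad(x, delta):
    """Gradient of the Huber penalty; piecewise linear, hence semismooth."""
    return np.where(np.abs(x) <= delta, x / delta, np.sign(x))

def huber_jac_diag(x, delta):
    """Diagonal of one element of the generalized Jacobian of huber_grad."""
    return np.where(np.abs(x) <= delta, 1.0 / delta, 0.0)

def lazy_ssn(A, b, lam=1.0, delta=0.5, period=5, tol=1e-10, max_iter=200):
    """Toy globalized semismooth Newton with lazy Hessian updates.

    Minimizes 0.5 x^T A x - b^T x + lam * sum_i huber_delta(x_i) by solving
    the semismooth equation G(x) = A x - b + lam * huber'(x) = 0.
    The generalized Hessian H = A + lam * diag(huber'') is evaluated only
    every `period` iterations and reused (stale) in between.
    """
    x = np.zeros_like(b)
    H = None
    for k in range(max_iter):
        g = A @ x - b + lam * huber_grad(x, delta)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        if H is None or k % period == 0:
            # Lazy update: the (generalized) Hessian is recomputed only here.
            H = A + lam * np.diag(huber_jac_diag(x, delta))
        d = np.linalg.solve(H, g)
        # Globalization: backtracking on the residual norm.
        t = 1.0
        while t > 1e-8:
            g_trial = A @ (x - t * d) - b + lam * huber_grad(x - t * d, delta)
            if np.linalg.norm(g_trial) <= (1.0 - 0.1 * t) * gnorm:
                break
            t *= 0.5
        x = x - t * d
    return x
```

Because `A` below is chosen well conditioned relative to `lam / delta`, the stale-Hessian Newton map still contracts and the full step is typically accepted, so the residual decreases geometrically even though the Hessian is touched only once per period.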

