Sharpness of Minima in Deep Matrix Factorization


Understanding the geometry of the loss landscape near a minimum is key to explaining the implicit bias of gradient-based methods in non-convex optimization problems such as deep neural network training and deep matrix factorization. A central quantity to characterize this geometry is the maximum eigenvalue of the Hessian of the loss. Currently, its precise role has been obfuscated because no exact expressions for this sharpness measure were known in general settings. In this paper, we present the first exact expression for the maximum eigenvalue of the Hessian of the squared-error loss at any minimizer in deep matrix factorization/deep linear neural network training problems, resolving an open question posed by Mulayoff & Michaeli (2020). This expression reveals a fundamental property of the loss landscape in deep matrix factorization: Having a constant product of the spectral norms of the left and right intermediate factors across layers is a sufficient condition for flatness. Most notably, in both depth-2 matrix and deep overparameterized scalar factorization, we show that this condition is both necessary and sufficient for flatness, which implies that flat minima are spectral-norm balanced even though they are not necessarily Frobenius-norm balanced. To complement our theory, we provide the first empirical characterization of an escape phenomenon during gradient-based training near a minimizer of a deep matrix factorization problem.


💡 Research Summary

This paper tackles a fundamental open problem in the theory of deep linear models: obtaining a closed‑form expression for the maximum eigenvalue of the Hessian (λ_max) of the squared‑error loss at any global minimizer of a deep matrix factorization problem (equivalently, a deep linear neural network). The authors consider the optimization problem
 L(W₁,…,W_L)=‖M−W_L⋯W₁‖_F²,
where M is a target matrix and each W_i is a factor (layer). While previous work (Mulayoff & Michaeli, 2020) derived λ_max only for flat minima and claimed that a general expression is intractable, this paper refutes that claim.
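To make the object of study concrete, the following sketch builds the Hessian of this squared-error loss numerically for the simplest instance, a depth-2 scalar factorization L(w₁, w₂) = (m − w₂w₁)², and inspects its largest eigenvalue at a minimizer. This is an illustrative hand computation (central finite differences), not the paper's general derivation; the constants m = 6 and the chosen minimizer are arbitrary.

```python
import numpy as np

# Depth-2 scalar factorization: L(w1, w2) = (m - w2*w1)**2.
# We build the Hessian at a global minimizer (w2*w1 = m) with central
# finite differences and look at its largest eigenvalue (the sharpness).

def loss(w, m=6.0):
    w1, w2 = w
    return (m - w2 * w1) ** 2

def hessian_fd(f, w, eps=1e-5):
    """Central finite-difference Hessian of f at w."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * eps**2)
    return H

# An *unbalanced* minimizer of w2*w1 = 6: w1 = 1, w2 = 6.
w = np.array([1.0, 6.0])
H = hessian_fd(loss, w)
lam_max = np.linalg.eigvalsh(H).max()

# By hand, at any minimizer H = 2*[[w2**2, w1*w2], [w1*w2, w1**2]],
# so lam_max = 2*(w1**2 + w2**2) = 74 here, while the balanced minimizer
# w1 = w2 = sqrt(6) attains the flattest value 2*(6 + 6) = 24.
print(lam_max)  # ≈ 74
```

The comparison between the unbalanced value 74 and the balanced value 24 illustrates, in miniature, the paper's claim that flat minima are spectral-norm balanced: for scalars the spectral norm of a factor is just its absolute value.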

Main theoretical contribution (Theorem 3.1).
Using matrix calculus and Gâteaux directional derivatives, the authors derive an exact formula for λ_max at any global minimizer W*. The formula shows that λ_max depends solely on the spectral norms σ_max(W_i) of the individual factors and on the product of these norms across layers. In compact form, for a minimizer satisfying W_L⋯W₁ = M,
 λ_max = 2·max_{i∈…
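Although the general matrix expression above is cut off in this summary, the flavor of such a formula can be seen in the deep overparameterized *scalar* case. There, a short hand derivation (not taken from the paper) gives a rank-one Hessian at any minimizer: writing g_i = ∏_{j≠i} w_j, one gets H_ij = 2·g_i·g_j and hence λ_max = 2·Σ_i g_i², which depends only on the factor magnitudes and their products. The sketch below evaluates this closed form for depth 3 with an arbitrary target m = 8:

```python
import numpy as np

# Scalar depth-L factorization: L(w) = (m - w_L*...*w_1)**2.
# At a minimizer (prod(w) = m) the residual vanishes, and differentiating
# twice gives the rank-one Hessian H_ij = 2*g_i*g_j with
# g_i = prod_{j != i} w_j, hence lam_max = 2 * sum_i g_i**2.
# (Hand derivation for the scalar case; not the paper's matrix formula.)

def lam_max_scalar(w):
    g = np.array([np.prod(np.delete(w, i)) for i in range(len(w))])
    return 2.0 * np.sum(g ** 2)

# Unbalanced vs balanced depth-3 minimizers of prod(w) = 8:
unbalanced = np.array([1.0, 2.0, 4.0])
balanced = np.array([2.0, 2.0, 2.0])
print(lam_max_scalar(unbalanced))  # 2*(8**2 + 4**2 + 2**2) = 168
print(lam_max_scalar(balanced))    # 2*3*4**2 = 96 (flatter)
```

Both points fit the same target, yet the balanced one has strictly smaller λ_max, matching the summary's statement that for deep overparameterized scalar factorization the balance condition is both necessary and sufficient for flatness.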
