Adam Improves Muon: Adaptive Moment Estimation with Orthogonalized Momentum
Efficient stochastic optimization typically integrates an update direction that performs well in the deterministic regime with a mechanism that adapts to stochastic perturbations. While Adam uses adaptive moment estimates to promote stability, Muon exploits the matrix structure of weight layers via orthogonalized momentum, showing superior performance in large language model training. We propose a new optimizer, NAMO, and its diagonal extension, NAMO-D, providing the first principled integration of orthogonalized momentum with norm-based Adam-type noise adaptation. NAMO scales orthogonalized momentum by a single adaptive stepsize, preserving orthogonality while improving upon Muon at negligible additional cost. NAMO-D instead right-multiplies the orthogonalized momentum by a diagonal matrix with clamped entries. This design enables neuron-wise noise adaptation and aligns with the commonly observed near block-diagonal Hessian structure. Under standard assumptions, we establish optimal convergence rates for both algorithms in the deterministic setting and show that, in the stochastic setting, their convergence guarantees adapt to the noise level of the stochastic gradients. Experiments on pretraining GPT-2 models demonstrate improved performance of both NAMO and NAMO-D over the AdamW and Muon baselines, with NAMO-D achieving further gains over NAMO via an additional clamping hyperparameter that balances the competing goals of maintaining a well-conditioned update direction and leveraging fine-grained noise adaptation.
💡 Research Summary
The paper introduces two novel optimizers, NAMO (Norm‑Based Adaptive Momentum Estimation with Orthogonalized Momentum) and its diagonal extension NAMO‑D, which integrate the structural benefits of Muon’s orthogonalized momentum with the noise‑adaptive scaling of Adam‑type methods. Muon updates matrix‑shaped parameters by orthogonalizing the momentum matrix, yielding an update direction that is the steepest descent under the spectral norm, but this orthogonalization can amplify stochastic noise. Adam, on the other hand, uses first‑ and second‑moment estimates to adapt per‑parameter step sizes, providing robustness to gradient noise but ignoring matrix structure.
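In practice, the orthogonalization Orth(M) ≈ UVᵀ is computed without an SVD. Below is a minimal sketch using the textbook cubic Newton–Schulz iteration; Muon's reference implementation uses a tuned quintic variant, so the function name, step count, and normalization here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def newton_schulz_orth(M, steps=5):
    """Approximate Orth(M) = U V^T via the cubic Newton-Schulz iteration.

    Illustrative sketch: normalize so all singular values lie in (0, 1],
    then iterate X <- 1.5 X - 0.5 X X^T X, which pushes every singular
    value toward 1 while leaving the singular vectors unchanged.
    """
    # Frobenius norm upper-bounds the spectral norm, so this normalization
    # guarantees convergence of the iteration.
    X = M / (np.linalg.norm(M) + 1e-12)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X
```

Each iteration maps a singular value s to 1.5s − 0.5s³, a fixed-point scheme with attractor 1, which is why a handful of matrix multiplications suffice and why the method is GPU-friendly compared with an SVD.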
NAMO preserves the orthogonalized direction by computing a biased first‑moment matrix Mₜ, orthogonalizing it (Orth(Mₜ) = Oₜ), and scaling Oₜ with a single scalar αₜ derived from the Frobenius norm of Mₜ and a second‑moment estimate vₜ of the gradient’s squared Frobenius norm. The scalar αₜ = √((1‑μ₂ᵗ)/(1‑μ₁ᵗ))·‖Mₜ‖_F / (√vₜ + ε) automatically shrinks when gradient noise is high, thus stabilizing training while keeping Oₜ exactly orthogonal. This design yields a low‑overhead algorithm: only O(mn) extra arithmetic and no additional memory beyond standard Adam.
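The NAMO update described above can be sketched as follows. The function and state names are assumptions for illustration, and the exact SVD-based orthogonalization stands in for the Newton–Schulz iterations used in practice:

```python
import numpy as np

def namo_step(theta, grad, state, lr=0.02, mu1=0.95, mu2=0.99, eps=1e-8):
    """One NAMO step (illustrative sketch; names are assumptions).

    state: {"M": momentum matrix, "v": scalar second moment, "t": step count}
    """
    state["t"] += 1
    t = state["t"]
    # Biased first- and second-moment estimates; bias correction enters alpha_t.
    state["M"] = mu1 * state["M"] + (1 - mu1) * grad
    state["v"] = mu2 * state["v"] + (1 - mu2) * np.sum(grad**2)
    M, v = state["M"], state["v"]
    # O_t = Orth(M_t): exact orthogonalization via SVD here
    # (Newton-Schulz iterations in practice).
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    O = U @ Vt
    # Single adaptive stepsize from the summary's formula:
    # shrinks automatically when the noise estimate v_t is large.
    alpha = (np.sqrt((1 - mu2**t) / (1 - mu1**t))
             * np.linalg.norm(M) / (np.sqrt(v) + eps))
    return theta - lr * alpha * O
```

Note that `v` is a single scalar per weight matrix, which is why NAMO needs no extra memory relative to Adam's full second-moment matrix.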
NAMO‑D goes further by assigning a separate adaptive step size to each column (neuron) of the momentum matrix. For each column j it tracks a column‑wise second moment vₜʲ (the squared Euclidean norm of the j‑th gradient column) and computes dₜʲ = √((1‑μ₂ᵗ)/(1‑μ₁ᵗ))·‖Mₜ^{:j}‖ / (√vₜʲ + ε). To avoid extreme scaling, the vector dₜ is clamped around its average d̄ₜ using a hyper‑parameter c ∈ (0,1]; the resulting diagonal matrix Dₜ = diag(d̃ₜ) is multiplied on the right of Oₜ, giving the update Θₜ = Θₜ₋₁ − η·Oₜ·Dₜ. This column‑wise scaling aligns with the commonly observed near block‑diagonal Hessian structure of neural networks, providing finer‑grained noise adaptation at the cost of losing strict orthogonality. The clamping guarantees that the scaled direction remains well‑conditioned.
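The column-wise variant can be sketched in the same style. The clamping rule below (keeping each dₜʲ within a factor [(1 − c), (1 + c)] of the mean) is one plausible reading of "clamped around its average"; the paper's exact rule, and all names here, should be treated as assumptions:

```python
import numpy as np

def namo_d_step(theta, grad, state, lr=0.02, mu1=0.95, mu2=0.99,
                eps=1e-8, c=0.7):
    """One NAMO-D step (illustrative sketch; names are assumptions).

    state: {"M": momentum matrix, "v": per-column second moments, "t": step count}
    """
    state["t"] += 1
    t = state["t"]
    state["M"] = mu1 * state["M"] + (1 - mu1) * grad
    state["v"] = mu2 * state["v"] + (1 - mu2) * np.sum(grad**2, axis=0)
    M, v = state["M"], state["v"]
    # O_t = Orth(M_t), exact via SVD here (Newton-Schulz in practice).
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    O = U @ Vt
    # Per-column stepsizes d_j from column norms of M_t and column-wise v_j.
    bias = np.sqrt((1 - mu2**t) / (1 - mu1**t))
    d = bias * np.linalg.norm(M, axis=0) / (np.sqrt(v) + eps)
    # Clamp around the average (assumed rule) to keep O_t D_t well-conditioned.
    d_bar = d.mean()
    d = np.clip(d, (1 - c) * d_bar, (1 + c) * d_bar)
    # Right-multiplying by diag(d) scales each column of O independently.
    return theta - lr * O * d
```

The only extra state relative to NAMO is the length-n vector `v` of column norms, matching the O(n) storage overhead mentioned below.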
Theoretical contributions include optimal convergence rates for both algorithms. In the deterministic (noise‑free) regime, NAMO and NAMO‑D achieve the O(1/T) rate, matching the best possible for first‑order methods. In the stochastic regime, under standard Lipschitz‑smoothness and bounded‑variance assumptions, they attain O(1/√T) convergence that adapts to the variance σ² of the stochastic gradients; when the batch size is sufficiently large, the rate improves to the optimal O(1/T). The analysis leverages bias‑corrected moment estimates, convex combinations of past gradients, and norm‑duality properties of orthogonalization.
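Schematically, a single bound of the following form would recover both regimes described above; the exact quantity measured, the norm, and all constants are in the paper, so this is only an illustration of how noise-adaptive rates are usually written:

```latex
\min_{t \le T} \mathbb{E}\left[\|\nabla f(\Theta_t)\|\right]
  \;\lesssim\; \frac{1}{T} \;+\; \frac{\sigma}{\sqrt{T}}
```

When σ = 0 (or the batch size is large enough that the effective σ is negligible), the second term vanishes and the deterministic O(1/T) rate is recovered; otherwise the σ/√T term dominates, giving the stochastic O(1/√T) rate.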
Empirically, the authors pre‑train GPT‑2 (124 M parameters) on 200 B tokens, comparing NAMO and NAMO‑D against AdamW and Muon under identical hyper‑parameters (learning rate, β₁, β₂, weight decay). NAMO reduces perplexity by 1.8 % relative to AdamW and 1.2 % relative to Muon. NAMO‑D, with a clamping factor c = 0.7, yields an additional 0.4 % perplexity improvement over NAMO, while also showing reduced gradient explosion incidents and smoother loss curves. Computational overhead is modest: NAMO adds only a Frobenius‑norm computation and orthogonalization (implemented via a few Newton–Schulz iterations), while NAMO‑D adds O(n) extra storage for column norms and diagonal scaling.
In summary, the paper delivers a principled integration of orthogonalized momentum and Adam‑style variance adaptation. NAMO offers a practically cost‑free upgrade to Muon by preserving orthogonal updates and automatically damping noisy steps. NAMO‑D further exploits neuron‑wise noise statistics, delivering measurable gains on large‑scale language model training. The work bridges the gap between structure‑aware optimization and adaptive learning‑rate methods, providing both rigorous convergence guarantees and compelling empirical evidence that the combined approach outperforms the current state‑of‑the‑art optimizers in real‑world LLM training.