Model Evolution Under Zeroth-Order Optimization: A Neural Tangent Kernel Perspective

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Zeroth-order (ZO) optimization enables memory-efficient training of neural networks by estimating gradients via forward passes only, eliminating the need for backpropagation. However, the stochastic nature of gradient estimation significantly obscures the training dynamics, in contrast to the well-characterized behavior of first-order methods under Neural Tangent Kernel (NTK) theory. To address this, we introduce the Neural Zeroth-order Kernel (NZK) to describe model evolution in function space under ZO updates. For linear models, we prove that the expected NZK remains constant throughout training and depends explicitly on the first and second moments of the random perturbation directions. This invariance yields a closed-form expression for model evolution under squared loss. We further extend the analysis to linearized neural networks. Interpreting ZO updates as kernel gradient descent via NZK provides a novel perspective for potentially accelerating convergence. Extensive experiments across synthetic and real-world datasets (including MNIST, CIFAR-10, and Tiny ImageNet) validate our theoretical results and demonstrate acceleration when using a single shared random vector.
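The forward-pass-only gradient estimation described above can be sketched as a single-point finite-difference estimator. This is a minimal illustration of the general ZO technique, not the paper's implementation; the function names and toy loss are assumptions.

```python
import numpy as np

_rng = np.random.default_rng(0)

def zo_gradient(loss, theta, eps=1e-3):
    """Zeroth-order gradient estimate from two forward evaluations only:
    ghat = (L(theta + eps*z) - L(theta)) / eps * z  for one random direction z.
    No backpropagation is needed -- only calls to loss()."""
    z = _rng.standard_normal(theta.shape)
    return (loss(theta + eps * z) - loss(theta)) / eps * z

# Toy quadratic loss whose true gradient is theta itself (illustrative only).
loss = lambda th: 0.5 * np.sum(th ** 2)
theta = np.ones(4)
g_hat = zo_gradient(loss, theta)  # a single noisy estimate of the gradient
```

Each estimate is noisy, but averaging over many random directions recovers the true gradient up to O(eps) smoothing bias, which is what makes ZO-SGD viable despite its stochasticity.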


💡 Research Summary

The paper tackles the largely unexplored dynamics of zeroth‑order (ZO) optimization from a function‑space perspective by introducing the Neural Zeroth‑order Kernel (NZK). While first‑order (FO) methods benefit from Neural Tangent Kernel (NTK) theory—showing that infinitely wide networks evolve linearly in function space with a constant kernel—ZO methods lack exact gradients, making their dynamics opaque. The authors bridge this gap by defining NZK as the inner product of finite‑difference approximations of the model’s Jacobian taken along two independent random directions, z (for the perturbation used in the loss difference) and ζ (for estimating the Jacobian).
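The kernel described above can be computed for a single draw of (z, ζ) as follows. This is a sketch based on the summary's wording: the estimator form ((f(θ+εv)−f(θ))/ε)·v and the lack of extra normalization are assumptions, and the paper's exact definition may differ.

```python
import numpy as np

def fd_jacobian_est(f, theta, x, v, eps=1e-3):
    """Finite-difference estimate of the model's Jacobian at x along
    direction v: ((f(x; theta + eps*v) - f(x; theta)) / eps) * v."""
    return (f(x, theta + eps * v) - f(x, theta)) / eps * v

def nzk(f, theta, x1, x2, z, zeta, eps=1e-3):
    """Single-draw NZK entry: inner product of the finite-difference
    Jacobian estimates along the two independent directions z and zeta."""
    j1 = fd_jacobian_est(f, theta, x1, z, eps)
    j2 = fd_jacobian_est(f, theta, x2, zeta, eps)
    return j1 @ j2
```

For a linear model f(x;θ)=x@θ the finite difference is exact, so the kernel entry reduces to ⟨z,x1⟩⟨ζ,x2⟩⟨z,ζ⟩ and does not involve θ at all — which is the invariance result discussed next.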

For a linear model f(x;θ)=⟨θ,x⟩, they prove that the expected NZK does not change over training iterations. Because the finite difference of a linear model along a direction is exact ((f(x;θ+εz)−f(x;θ))/ε = ⟨z,x⟩), the kernel for a single draw reduces to NZK(x,x′) = ⟨z,x⟩⟨ζ,x′⟩⟨z,ζ⟩, which does not involve θ at all. Its expectation depends explicitly on the first and second moments of the random vectors:

E[NZK(x,x′)] = x⊤ E[zz⊤] E[ζζ⊤] x′,

where each second‑moment matrix, e.g. E[zz⊤] = Σ_z + μ_z μ_z⊤, combines the covariance and the mean of the perturbation distribution.
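The claimed invariance can be checked numerically. The sketch below assumes the single-draw kernel for the linear model f(x;θ)=⟨θ,x⟩ reduces to ⟨z,x1⟩⟨ζ,x2⟩⟨z,ζ⟩ (a direct reading of the definition above, not necessarily the paper's exact normalization); its Monte-Carlo mean over independent standard-normal z and ζ should then approach x1⊤E[zz⊤]E[ζζ⊤]x2 = ⟨x1,x2⟩, with no dependence on θ or the iteration count.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)

# Draw many independent direction pairs (z, zeta) and average the
# single-draw kernel <z, x1> <zeta, x2> <z, zeta>.
n = 200_000
Z = rng.standard_normal((n, d))
Zeta = rng.standard_normal((n, d))
samples = (Z @ x1) * (Zeta @ x2) * np.sum(Z * Zeta, axis=1)

# For standard-normal directions E[zz^T] = I, so the expectation
# collapses to the plain inner product <x1, x2>.
est = samples.mean()
```

Note that θ never appears in the computation, mirroring the invariance of the expected NZK across training iterations for linear models.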

