Perceptrons and localization of attention's mean-field landscape
The forward pass of a Transformer can be seen as an interacting particle system on the unit sphere: time plays the role of layers, particles that of token embeddings, and the unit sphere idealizes layer normalization. In some weight settings the system can even be seen as a gradient flow for an explicit energy, and one can make sense of the infinite context length (mean-field) limit thanks to Wasserstein gradient flows. In this paper we study the effect of the perceptron block in this setting, and show that critical points are generically atomic and localized on subsets of the sphere.
💡 Research Summary
This paper presents a rigorous mean‑field analysis of transformer architectures that incorporates the often‑overlooked feed‑forward perceptron block. The authors begin by interpreting the forward pass of a transformer as an interacting particle system on the unit sphere (S^{d-1}), where depth plays the role of continuous time, token embeddings are particles, and layer‑normalization (or RMSNorm) keeps the particles on the sphere. In this picture, self‑attention corresponds to a state‑dependent coupling: each particle moves toward a weighted average of the others, with weights given by a softmax‑type kernel (e^{\beta x\cdot y}).
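As a concrete illustration (a minimal numerical sketch, not code from the paper), the particle system described above can be simulated with explicit Euler steps: each token embedding moves toward the softmax-weighted average of the tokens, with the update projected onto the tangent space of the sphere and the result renormalized to mimic layer normalization.

```python
import numpy as np

def attention_dynamics(X, beta=1.0, dt=0.1, steps=2000):
    """Euler simulation of dx_i/dt = P_{x_i}( sum_j softmax_j(beta x_i.x_j) x_j )."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # place particles on S^{d-1}
    for _ in range(steps):
        W = np.exp(beta * X @ X.T)                     # kernel e^{beta x.y}
        W /= W.sum(axis=1, keepdims=True)              # softmax over the context
        V = W @ X                                      # weighted average of tokens
        V -= np.sum(V * X, axis=1, keepdims=True) * X  # project onto tangent space
        X = X + dt * V
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # renormalize (layer norm)
    return X

rng = np.random.default_rng(0)
X1 = attention_dynamics(rng.standard_normal((8, 3)), beta=1.0)
# With V = I the particles synchronize: pairwise inner products approach 1.
print(np.min(X1 @ X1.T))
```

The tangential projection and renormalization play the role of the sphere constraint in the continuous model; the hyperparameters (`n = 8`, `d = 3`, `beta = 1`) are arbitrary choices for the demonstration.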
When the key‑query‑value matrices satisfy certain symmetry conditions (essentially (Q^{\top}K=\beta I) and (V=\pm I)), the finite‑(n) particle dynamics can be written as a preconditioned gradient flow of an explicit interaction energy
\[
\mathsf{E}_\beta(x_1,\dots,x_n) \;=\; \frac{1}{2\beta n^{2}} \sum_{i,j=1}^{n} e^{\beta\, x_i\cdot x_j},
\]
with (V=I) yielding ascent of this energy and (V=-I) descent.
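To make the gradient-flow claim concrete, here is a small numerical check (an illustrative sketch under the stated assumptions, not the paper's code; the energy used is the standard interaction energy associated with the kernel (e^{\beta x\cdot y})): along the spherical attention dynamics with (V=I), the energy is non-decreasing, because the softmax weights act as a positive preconditioner on the energy's gradient.

```python
import numpy as np

def energy(X, beta):
    """Interaction energy E_beta = (1 / (2 beta n^2)) * sum_{i,j} e^{beta x_i.x_j}."""
    n = X.shape[0]
    return np.exp(beta * X @ X.T).sum() / (2 * beta * n**2)

def step(X, beta, dt):
    """One Euler step of the spherical attention dynamics with V = I."""
    W = np.exp(beta * X @ X.T)
    W /= W.sum(axis=1, keepdims=True)                # softmax preconditioner
    V = W @ X
    V -= np.sum(V * X, axis=1, keepdims=True) * X    # tangential projection P_{x_i}
    X = X + dt * V
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 4))
X /= np.linalg.norm(X, axis=1, keepdims=True)

beta, dt = 2.0, 0.05
energies = [energy(X, beta)]
for _ in range(50):
    X = step(X, beta, dt)
    energies.append(energy(X, beta))

# Each small Euler step moves every particle in an ascent direction for E_beta,
# so the recorded energies should increase monotonically (up to rounding).
assert all(b >= a - 1e-12 for a, b in zip(energies, energies[1:]))
```

Replacing `V = I` with `V = -I` (i.e. flipping the sign of the update) would turn the same check into a descent test for the energy.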