Dynamical stability for dense patterns in discrete attractor neural networks
Neural networks storing multiple discrete attractors are canonical models of biological memory. Previously, the dynamical stability of such networks could only be guaranteed under highly restrictive conditions. Here, we derive a theory of the local stability of discrete fixed points in a broad class of networks with graded neural activities and in the presence of noise. By directly analyzing the bulk and the outliers of the Jacobian spectrum, we show that all fixed points are stable below a critical load that is distinct from the classical critical capacity and depends on the statistics of neural activities in the fixed points as well as on the single-neuron activation function. Our analysis highlights the computational benefits of threshold-linear activation and sparse-like patterns.
💡 Research Summary
This paper addresses a long‑standing gap in the theory of attractor neural networks: while classical results guarantee that a set of stored patterns can be realized as fixed points of the dynamics, they provide little insight into whether those fixed points are dynamically stable, especially when the patterns are dense and the neurons have graded, non‑saturating activation functions. The authors develop a comprehensive analytical framework that simultaneously treats storage capacity and local stability for a broad class of networks.
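To make the stability criterion concrete, here is a minimal numerical sketch (not the authors' code or exact model): it stores binary patterns with a simple Hebbian rule in a graded-rate network, retrieves one pattern, and checks the sign of the largest real part of the Jacobian eigenvalues at the resulting fixed point. The Hebbian rule, the tanh nonlinearity (chosen only because it gives a well-behaved toy; the paper emphasizes threshold-linear units), and the values of N, P, and the gain g are all illustrative assumptions.

```python
# Toy check of local stability at a retrieved fixed point.
# Assumptions (not from the paper): Hebbian couplings, tanh activation,
# gain g = 2, load P/N = 0.02.
import numpy as np

rng = np.random.default_rng(0)
N, P, g = 1000, 20, 2.0                    # neurons, patterns, gain
xi = rng.choice([-1.0, 1.0], size=(P, N))  # binary patterns to store
J = (g / N) * (xi.T @ xi)                  # Hebbian coupling matrix
np.fill_diagonal(J, 0.0)

# Relax dx/dt = -x + J tanh(x) from a noisy cue of pattern 0 (Euler steps)
x = 0.5 * xi[0] + 0.1 * rng.standard_normal(N)
for _ in range(5000):
    x += 0.05 * (-x + J @ np.tanh(x))

# Jacobian at the fixed point: -I + J diag(phi'(x*)), phi' = 1 - tanh^2
Jac = -np.eye(N) + J * (1.0 - np.tanh(x) ** 2)[None, :]
eig = np.linalg.eigvals(Jac)
print("overlap with pattern 0:", np.tanh(x) @ xi[0] / N)
print("max Re(lambda):", eig.real.max())   # negative => locally stable
```

With these parameters the spectrum separates, as in the paper's picture, into a bulk (here clustered near -1) and pattern-induced outliers; the fixed point is stable precisely when all of them have negative real part.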
The model considered is a continuous-time rate network; in standard form, the dynamics read
\[
\tau \, \dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{N} J_{ij} \, \phi\big(x_j(t)\big), \qquad i = 1, \dots, N,
\]
where $x_i$ is the input current to neuron $i$, $\phi$ is the single-neuron activation function, and $J_{ij}$ is the coupling matrix storing the patterns; in the noisy version of the model, an additive noise current enters the right-hand side.
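As a rough illustration of these dynamics in the presence of noise (again a toy, reusing the assumed Hebbian/tanh network from the sketch above, with an illustrative noise amplitude sigma), the following Euler-Maruyama sketch cues a degraded pattern and tracks its overlap with the stored one; retrieval persists despite the stochastic input, as expected near a stable fixed point.

```python
# Toy simulation of the noisy rate dynamics (Euler-Maruyama).
# Assumptions (not from the paper): Hebbian couplings, tanh activation,
# noise amplitude sigma = 0.3.
import numpy as np

rng = np.random.default_rng(1)
N, P, g = 1000, 20, 2.0
xi = rng.choice([-1.0, 1.0], size=(P, N))
J = (g / N) * (xi.T @ xi)
np.fill_diagonal(J, 0.0)

dt, sigma, steps = 0.05, 0.3, 4000
x = 0.3 * xi[0] + 0.2 * rng.standard_normal(N)   # degraded cue of pattern 0
for t in range(steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
    x += dt * (-x + J @ np.tanh(x)) + noise
    if t % 1000 == 0:
        print(f"t={t * dt:6.1f}  overlap={np.tanh(x) @ xi[0] / N:+.3f}")
```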