GeoDynamics: A Geometric State-Space Neural Network for Understanding Brain Dynamics on Riemannian Manifolds
State-space models (SSMs) have become a cornerstone for unraveling brain dynamics, revealing how latent neural states evolve over time and give rise to observed signals. By combining the flexibility of deep learning with the principled dynamical structure of SSMs, recent studies have achieved powerful fits to functional neuroimaging data. However, most existing approaches still view the brain as a set of loosely connected regions or impose oversimplified network priors, falling short of a truly holistic and self-organized dynamical system perspective. Brain functional connectivity (FC) at each time point naturally forms a symmetric positive definite (SPD) matrix, which resides on a curved Riemannian manifold rather than in Euclidean space. Capturing the trajectories of these SPD matrices is key to understanding how coordinated networks support cognition and behavior. To this end, we introduce GeoDynamics, a geometric state-space neural network that tracks latent brain-state trajectories directly on the high-dimensional SPD manifold. GeoDynamics embeds each connectivity matrix into a manifold-aware recurrent framework, learning smooth and geometry-respecting transitions that reveal task-driven state changes and early markers of Alzheimer’s disease, Parkinson’s disease, and autism. Beyond neuroscience, we validate GeoDynamics on human action recognition benchmarks (UTKinect, Florence, HDM05), demonstrating its scalability and robustness in modeling complex spatiotemporal dynamics across diverse domains.
💡 Research Summary
GeoDynamics introduces a novel geometric state‑space neural network that directly models brain functional connectivity (FC) matrices as points on the symmetric positive‑definite (SPD) manifold. Recognizing that each FC matrix resides on a curved Riemannian space rather than in Euclidean space, the authors replace the traditional linear operators of classical state‑space models (SSMs) with manifold‑aware operations: weighted Fréchet mean (wFM) for intrinsic averaging and orthogonal group actions for isometric state transitions.
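To make the intrinsic-averaging idea concrete, here is a minimal numpy sketch of a weighted Fréchet mean on the SPD manifold. It uses the closed-form log-Euclidean mean, exp(Σᵢ wᵢ log Sᵢ), as a stand-in; the paper's wFM under its chosen metric would in general require an iterative solver, so treat this as an illustration of the concept rather than the authors' exact operator.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def weighted_frechet_mean(mats, weights):
    """Log-Euclidean weighted Frechet mean: exp(sum_i w_i * log(S_i)).

    A closed-form surrogate for intrinsic averaging of SPD matrices;
    the result is guaranteed to stay on the SPD manifold.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize the weights
    acc = sum(w * spd_log(S) for w, S in zip(weights, mats))
    return spd_exp(acc)
```

The key property, easy to verify, is that the output is itself SPD, so a recurrence built from such averages never leaves the manifold.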
The model’s dynamics are defined as follows. At each discrete time step k, the latent state S(k) and the observation Y(k) are computed by aggregating the past τ states and inputs using wFM, then applying a translation T_X(g)=gXgᵀ where g∈O(N) is learned from the data. This translation acts as a multiplicative update on the manifold, preserving the SPD property under the Stein metric. Continuous‑time dynamics are discretized via matrix exponentials (exp(ΔA) and (ΔA)⁻¹(exp(ΔA)−I)ΔB), ensuring numerical stability while staying on the manifold.
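The group action T_X(g) = gXgᵀ can be sketched in a few lines. The snippet below parameterizes g via the exponential of a skew-symmetric matrix, which is one standard way to obtain an orthogonal matrix from unconstrained parameters (the paper may learn g in O(N) differently); because congruence by an orthogonal matrix is a similarity transform, the transition preserves symmetry, positive-definiteness, and even the eigenvalue spectrum of the state.

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_params(W):
    """Map an unconstrained square matrix to an orthogonal matrix by
    exponentiating its skew-symmetric part (a common parameterization;
    an assumption here, not necessarily the paper's)."""
    skew = (W - W.T) / 2.0
    return expm(skew)            # exp of skew-symmetric is orthogonal

def group_action(g, X):
    """Isometric state transition T_X(g) = g X g^T on the SPD manifold."""
    return g @ X @ g.T
```

Since g is orthogonal, gXgᵀ = gXg⁻¹, so the update rotates the state on the manifold without distorting its spectrum.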
To maintain SPD structure throughout the network, the authors design SPD‑preserving convolutional layers. Convolution kernels are parameterized as H=ZᵀZ+εI, guaranteeing that each layer’s output remains SPD. Building on this, a novel SPD‑preserving attention (SPA) module is introduced. SPA computes attention weights by applying an element‑wise exponential to the convolved feature map, normalizing them, and then masking the original SPD feature map. Because both the mask and the base convolution are SPD‑consistent, the attention‑enhanced representation stays on the manifold.
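The SPD-preservation argument behind these layers can be checked directly. The sketch below constructs the kernel H = ZᵀZ + εI (PSD plus a positive shift, hence strictly SPD for any Z) and illustrates why an exponential-then-mask attention step can stay on the manifold: the element-wise exponential of a PSD matrix is PSD (its Hadamard-power series), and by the Schur product theorem the Hadamard product of a PD matrix with a PSD matrix having positive diagonal remains PD. The exact normalization in the paper's SPA module may differ; this only demonstrates the geometric argument.

```python
import numpy as np

def spd_kernel(Z, eps=1e-4):
    """Kernel parameterized as H = Z^T Z + eps*I: Z^T Z is positive
    semidefinite for any Z, and the eps*I shift makes H strictly SPD."""
    return Z.T @ Z + eps * np.eye(Z.shape[1])

def spa_mask(F, X):
    """Sketch of SPD-consistent attention masking (assumed details):
    elementwise exp of a PSD feature map F gives a PSD mask with
    positive diagonal; Hadamard-masking the SPD input X with it then
    yields an SPD output by the Schur product theorem."""
    M = np.exp(F)        # elementwise exp of PSD -> PSD, positive diagonal
    M = M / M.max()      # positive scaling preserves positive-definiteness
    return X * M         # Hadamard masking of the SPD feature map
```

Both outputs can be verified to be symmetric with strictly positive eigenvalues, which is exactly the invariant the layers are designed to maintain.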
For downstream tasks, the final SPD output is mapped to the tangent space at the identity via the matrix logarithm, yielding a symmetric Euclidean matrix that can be vectorized and fed to a soft‑max classifier. The whole system is trained end‑to‑end with cross‑entropy loss.
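This readout step is straightforward to sketch: map the SPD output to the tangent space at the identity with the matrix logarithm, half-vectorize the resulting symmetric matrix (scaling off-diagonal entries by √2 so the Euclidean norm of the vector matches the Frobenius norm of the matrix), and feed the vector to a softmax classifier. The √2 scaling is a common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def log_map_identity(S):
    """Map an SPD matrix to the tangent space at the identity
    via the matrix logarithm (eigendecomposition-based)."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def vectorize_symmetric(T):
    """Half-vectorize a symmetric matrix: diagonal entries plus the
    strict upper triangle scaled by sqrt(2), preserving the norm."""
    iu = np.triu_indices(T.shape[0], k=1)
    return np.concatenate([np.diag(T), np.sqrt(2.0) * T[iu]])

def softmax(z):
    """Numerically stable softmax over classifier logits."""
    e = np.exp(z - z.max())
    return e / e.sum()
```

In a full pipeline the vector would be multiplied by a learned weight matrix to produce class logits before the softmax; here the pieces are shown individually.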
The authors evaluate GeoDynamics on two fronts. First, they apply it to large‑scale brain connectome data from the Human Connectome Project and four disease‑related resting‑state fMRI datasets (ADNI, OASIS, PPMI, ABIDE). GeoDynamics consistently outperforms baseline RNNs, LSTMs, hidden Markov models, and variational Bayesian SSMs in classification accuracy, AUC, and early‑diagnosis sensitivity. Visualization of learned orthogonal transformations and attention masks reveals disease‑specific disruptions, such as reduced default‑mode connectivity in Alzheimer’s patients. Second, the same architecture is tested on human action recognition benchmarks (UTKinect, Florence 3D Actions, HDM05). Despite the domain shift, GeoDynamics achieves higher accuracy than state‑of‑the‑art graph‑convolutional or recurrent approaches, demonstrating its scalability and robustness.
Key contributions include: (1) a rigorous extension of SSMs to the SPD manifold, (2) the integration of wFM and isometric group actions to preserve geometric fidelity during temporal evolution, (3) SPD‑preserving convolution and attention mechanisms that enable deep, non‑linear representation learning without violating positive‑definiteness, and (4) extensive empirical validation across neuroscience and computer‑vision tasks.
Limitations noted by the authors involve sensitivity to hyper‑parameters such as the temporal window τ and the regularization constant ε, and the current focus on SPD matrices only. Future work may explore automatic hyper‑parameter tuning, extensions to asymmetric or complex‑valued connectivity, and lightweight implementations for real‑time clinical deployment.
Overall, GeoDynamics offers a powerful, geometry‑aware framework for modeling spatio‑temporal brain dynamics, bridging the gap between deep learning flexibility and the rigorous mathematical structure of state‑space models, and opens new avenues for early disease detection and cross‑domain sequence analysis.