Zigzag Persistence of Neural Responses to Time-Varying Stimuli

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv source.

We use topological data analysis to study neural population activity in the Sensorium 2023 dataset, which records responses from thousands of mouse visual cortex neurons to diverse video stimuli. For each video, we build frame-by-frame cubical complexes from neuronal activity and apply zigzag persistent homology to capture how topological structure evolves over time. These dynamics are summarized with persistence landscapes, providing a compact vectorized representation of temporal features. We focus on one-dimensional topological features (loops in the data) that reflect coordinated, cyclical patterns of neural co-activation. To test their informativeness, we compare repeated trials of different videos by clustering their resulting topological neural representations. Our results show that these topological descriptors reliably distinguish neural responses to distinct stimuli. This work highlights a connection between evolving neuronal activity and interpretable topological signatures, advancing the use of topological data analysis for uncovering neural coding in complex dynamical systems.


💡 Research Summary

The authors present a novel pipeline that applies zigzag persistent homology—a topological data analysis (TDA) technique capable of handling additions and deletions of points—to time‑varying neural population activity recorded from mouse primary visual cortex (V1). Using the publicly available Sensorium 2023 dataset, they analyze calcium imaging recordings from five mice, each imaged across ten z‑planes with roughly 800 neurons per plane while the animals view grayscale video clips belonging to four stimulus categories (naturalistic, Gaussian noise, waves, moving dots).

For each video frame the authors first normalize neuronal responses across time, interpolate them onto a fixed 2‑D spatial grid via piecewise cubic splines, and then construct a super‑level set cubical complex K(t) by selecting grid cells whose activity exceeds the per‑neuron mean. To make the data amenable to existing simplicial‑homology software, each active square is “closed” by inserting an abstract 3‑simplex on its four corner vertices and retaining only its 2‑skeleton (vertices, edges, and triangular faces). This operation deliberately over‑connects neighboring squares, encouraging the formation of 1‑dimensional cycles (loops) in the resulting simplicial complex S(t).
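The per-frame construction above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function name `closed_square_complex` and the representation of simplices as sorted tuples of grid coordinates are assumptions, and the thresholds are passed in explicitly rather than derived from per-neuron means.

```python
import numpy as np

def closed_square_complex(activity, thresholds):
    """Build the 2-skeleton of a "closed" super-level set complex.

    activity:   2-D array of interpolated responses at one frame
    thresholds: 2-D array of per-cell activation thresholds

    Each active grid square contributes its 4 corner vertices, all 6
    edges between them, and all 4 triangular faces -- the 2-skeleton of
    an abstract 3-simplex on the corners. Diagonal edges deliberately
    over-connect neighbouring squares, as described in the text.
    """
    simplices = set()
    rows, cols = activity.shape
    for i in range(rows):
        for j in range(cols):
            if activity[i, j] <= thresholds[i, j]:
                continue  # this square is inactive at this frame
            corners = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            for v in corners:                       # 4 vertices
                simplices.add((v,))
            for a in range(4):                      # all 6 edges
                for b in range(a + 1, 4):
                    simplices.add(tuple(sorted((corners[a], corners[b]))))
            for a in range(4):                      # all 4 triangles
                tri = tuple(sorted(c for k, c in enumerate(corners) if k != a))
                simplices.add(tri)
    return simplices
```

Representing each complex as a plain set of hashable simplices makes the later frame-to-frame intersections trivial set operations.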

A zigzag filtration is built by interleaving each frame’s complex with the intersection of consecutive frames: S(t₁) ← S(t₁)∩S(t₂) → S(t₂) ← … . This construction mirrors the “add‑remove” nature of neural activity—neurons can become active or silent from frame to frame—allowing the homology to track the birth and death of loops over time. The authors compute 1‑dimensional zigzag barcodes using a combination of Dionysus2 and the fastzigzag library, then convert each barcode into a persistence landscape. Five landscape layers are sampled at 50 points each, yielding a 250‑dimensional vector per imaging plane; concatenating the ten planes produces a 2 500‑dimensional descriptor for each trial.
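The two steps above can be sketched as follows. This is an illustrative simplification under stated assumptions: complexes are sets of simplices, so the intersection of consecutive frames is a set intersection; the barcode itself would come from Dionysus2/fastzigzag, not from this snippet; and the function names and the [0, 1] sampling window are invented for the example.

```python
import numpy as np

def zigzag_sequence(frame_complexes):
    """Interleave per-frame complexes with pairwise intersections:
    S(t1) <- S(t1) ∩ S(t2) -> S(t2) <- ...
    Returns [S1, S1∩S2, S2, S2∩S3, S3, ...]."""
    seq = [frame_complexes[0]]
    for prev, curr in zip(frame_complexes, frame_complexes[1:]):
        seq.append(prev & curr)  # simplices alive in both frames
        seq.append(curr)
    return seq

def landscape_vector(bars, n_layers=5, n_samples=50, t_min=0.0, t_max=1.0):
    """Vectorize a barcode as a sampled persistence landscape.

    bars: list of (birth, death) intervals, e.g. from an H1 barcode.
    The k-th landscape layer at t is the k-th largest "tent" value
    max(0, min(t - b, d - t)) over all bars; sampling n_layers layers
    at n_samples points gives a (n_layers * n_samples,) vector
    (5 * 50 = 250 per imaging plane in the text)."""
    grid = np.linspace(t_min, t_max, n_samples)
    if bars:
        tents = np.array([[max(0.0, min(t - b, d - t)) for t in grid]
                          for b, d in bars])
    else:
        tents = np.zeros((1, n_samples))
    sorted_vals = -np.sort(-tents, axis=0)  # descending at each grid point
    layers = np.zeros((n_layers, n_samples))
    k = min(n_layers, sorted_vals.shape[0])
    layers[:k] = sorted_vals[:k]
    return layers.ravel()
```

Concatenating `landscape_vector` outputs across the ten imaging planes would yield the 2 500-dimensional per-trial descriptor.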

Three experimental questions are addressed: (A) Can repeated presentations of different videos within the same mouse be distinguished? (B) Can the stimulus class be predicted? (C) Can the mouse identity be inferred? For (A) the authors perform PCA (reducing to 10 dimensions) followed by Ward‑linkage agglomerative clustering, evaluating performance with the Adjusted Rand Index (ARI). Results show high ARI values (≈ 0.94 for naturalistic, 0.70 for Gaussian, 0.62 for waves), indicating that the topological signatures reliably separate individual video trials. Control manipulations—shuffling frame order or scrambling the spatial grid—drastically reduce ARI to near chance, confirming that both temporal continuity and spatial contiguity are essential for the observed discriminability.
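The clustering analysis for question (A) corresponds to a standard scikit-learn pipeline. A minimal sketch, assuming landscape descriptors stacked row-wise and one cluster per video; the function name and default of 10 principal components mirror the text but the rest is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

def cluster_trials(descriptors, video_labels, n_components=10):
    """Cluster trial descriptors and score against true video identities.

    descriptors:  (n_trials, d) array, e.g. 2500-dim landscape vectors
    video_labels: ground-truth video ID of each trial

    Mirrors the analysis in the text: PCA to 10 dimensions, Ward-linkage
    agglomerative clustering with one cluster per video, scored with the
    Adjusted Rand Index (1.0 = perfect agreement, ~0 = chance)."""
    reduced = PCA(n_components=n_components).fit_transform(descriptors)
    n_videos = len(set(video_labels))
    pred = AgglomerativeClustering(n_clusters=n_videos,
                                   linkage="ward").fit_predict(reduced)
    return adjusted_rand_score(video_labels, pred)
```

The shuffle controls from the text would be implemented upstream of this function, by permuting frame order or grid positions before the descriptors are computed.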

For (B) a linear logistic regression classifier trained on the 2 500‑dimensional descriptors achieves ≈ 69 % ± 6 % accuracy in a five‑class stimulus‑type prediction task (chance ≈ 20 %). Class‑wise F1 scores and confusion matrices reveal that naturalistic videos are most easily identified, while some confusion exists among the other categories.
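The linear read-out for questions (B) and (C) can be sketched as a cross-validated logistic-regression probe. The function name and hyperparameters below are illustrative defaults, not the paper's exact settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def linear_probe_accuracy(descriptors, labels, folds=5):
    """Cross-validated linear decoding of labels from descriptors.

    descriptors: (n_trials, d) landscape vectors
    labels:      stimulus class (question B) or mouse identity (C)

    Returns mean and standard deviation of fold accuracies, matching
    the "accuracy ± std" numbers reported in the text."""
    clf = LogisticRegression(max_iter=1000)  # plain linear classifier
    scores = cross_val_score(clf, descriptors, labels, cv=folds)
    return scores.mean(), scores.std()
```

Running the same probe with stimulus labels versus mouse labels makes the contrast in the text directly comparable: identical descriptors, identical model, different targets.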

For (C) the same classifier applied to mouse identity (five mice, each viewing a disjoint set of videos) yields only ≈ 36 % ± 3 % accuracy, modestly above chance, suggesting that the descriptors capture stimulus‑related structure far more strongly than mouse‑specific idiosyncrasies.

Methodologically, the paper highlights the choice of converting cubical complexes into abstract simplicial complexes by “closing” active squares. This over‑connection accelerates cycle filling, producing richer H₁ barcodes and more informative landscapes. The authors note that future work will compare this approach with pure triangulation or pure cubical filtrations to assess trade‑offs in sensitivity and computational cost. They also acknowledge that behavioral variables (pupil size, treadmill speed) were omitted; incorporating such covariates could explain residual variance and enable joint behavior‑neural topological analyses.

Overall, the study demonstrates that zigzag persistent homology can extract meaningful, time‑resolved topological features—specifically, coordinated cyclic co‑activations—from high‑dimensional neural recordings. These features serve as compact, interpretable signatures that discriminate between complex visual stimuli, opening a new avenue for TDA‑based neural coding investigations in dynamic, real‑world contexts.

