Adaptive compressed sensing - a new class of self-organizing coding models for neuroscience
Sparse coding networks, which utilize unsupervised learning to maximize coding efficiency, have successfully reproduced response properties found in primary visual cortex \cite{AN:OlshausenField96}. However, conventional sparse coding models require that the coding circuit can fully sample the sensory data in a one-to-one fashion, a requirement not supported by experimental data from the thalamo-cortical projection. To relieve these strict wiring requirements, we propose a sparse coding network constructed by introducing synaptic learning in the framework of compressed sensing. We demonstrate that the new model evolves biologically realistic spatially smooth receptive fields despite the fact that the feedforward connectivity subsamples the input and thus the learning has to rely on an impoverished and distorted account of the original visual data. Further, we demonstrate that the model could form a general scheme of cortical communication: it can form meaningful representations in a secondary sensory area, which receives input from the primary sensory area through a “compressing” cortico-cortical projection. Finally, we prove that our model belongs to a new class of sparse coding algorithms in which recurrent connections are essential in forming the spatial receptive fields.
💡 Research Summary
The paper introduces Adaptive Compressed Sensing (ACS), a novel learning framework that integrates sparse coding with compressed sensing to relax the stringent one‑to‑one sampling requirement of traditional sparse coding models. Conventional sparse coding assumes that each neuron receives a full, unaltered view of the sensory input (e.g., every pixel of an image patch). Neuroanatomical data, however, show that thalamo‑cortical and cortico‑cortical projections are highly sparse: only a fraction of source neurons project to a target area, and the wiring is far from the dense, precise connectivity required by classic models.
To address this mismatch, the authors replace the full‑input dictionary Ψ with a compressed‑input dictionary Θ. A fixed random projection matrix Φ ∈ ℝ^{k×m}, with k < m, first compresses the input vector x ∈ ℝ^m to Φx ∈ ℝ^k. The ACS energy function is
E(x,a,Φ,Θ) = ½‖Φx − Θa‖² + λ‖a‖₀,
where a is the sparse code and λ controls sparsity. Learning proceeds by gradient descent on Θ, which yields the Hebbian‑like update ΔΘ ∝ (Φx − Θa)aᵀ, identical in form to the rule used in standard sparse coding; the feed‑forward weights, however, are now F_F = ΦᵀΘ, i.e., they convey only a mixed, subsampled version of the original stimulus. The recurrent (competitive) weights are F_B = −ΘᵀΘ, providing lateral inhibition that shapes the code.
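The learning loop just described can be sketched numerically. This is a minimal sketch, not the authors' implementation: it substitutes an ℓ1 soft‑threshold (ISTA) inference step for the paper's ℓ0 penalty, uses random data in place of image patches, and assumes values for λ, the learning rate, and a column‑norm constraint on Θ.

```python
import numpy as np

rng = np.random.default_rng(0)

m, k, n = 144, 60, 432                 # input dim, compressed dim, neurons (paper's sizes)
Phi = rng.standard_normal((k, m)) / np.sqrt(k)   # fixed random compression Φ
Theta = rng.standard_normal((k, n)) * 0.1        # learned compressed dictionary Θ
lam, eta = 0.1, 0.01                   # sparsity weight λ and learning rate (assumed values)

def sparse_code(y, Theta, n_iter=50):
    """ISTA for min_a 0.5*||y - Theta a||^2 + lam*||a||_1 (l1 stand-in for the l0 term)."""
    L = np.linalg.norm(Theta, 2) ** 2  # step size from the gradient's Lipschitz constant
    a = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        a -= Theta.T @ (Theta @ a - y) / L                      # gradient step on the fit term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft-threshold for sparsity
    return a

def acs_step(x, Theta):
    """One ACS step: compress the patch, infer the code, gradient-descend E on Theta."""
    y = Phi @ x
    a = sparse_code(y, Theta)
    Theta += eta * np.outer(y - Theta @ a, a)     # descent on E: dE/dTheta = -(Phi x - Theta a) a^T
    Theta /= np.maximum(np.linalg.norm(Theta, axis=0), 1e-12)   # keep columns at unit norm
    return a

x = rng.standard_normal(m)             # stand-in for a whitened natural-image patch
a = acs_step(x, Theta)
```

Note that the network never sees x itself, only y = Φx; the columns of F_F = ΦᵀΘ are the effective feed‑forward weights each neuron applies to the original stimulus.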
Mathematically, the authors prove two key results. Theorem 1 shows that when Φ is the identity (no compression), receptive fields (RFs) are scalar multiples of the feed‑forward weights, reproducing the well‑known correspondence in classic sparse coding. Theorem 2 and its corollaries show that for a genuinely random Φ, a neuron's receptive field and its feed‑forward weight vector are generally not scalar multiples; instead, the recurrent connections critically determine the shape of the RF. Consequently, in the compressed regime, recurrent feedback is essential for forming biologically plausible receptive fields.
Simulation experiments use 12 × 12 natural‑image patches (m = 144). The compression matrix reduces dimensionality to k = 60, while the network contains n = 432 neurons (three‑fold over‑complete). ACS learns Θ from the compressed data and, after training, the receptive fields extracted from the network are smooth, Gabor‑like filters resembling V1 simple‑cell profiles, despite the feed‑forward weights appearing noisy and amorphous. The authors also cascade the model: the sparse code a generated in a “primary” area is again compressed (by a second Φ) and fed to a “secondary” area that runs the same ACS algorithm. The secondary area develops receptive fields similar to the primary one, confirming that the scheme can be stacked hierarchically.
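How receptive fields can be "extracted from the network" is illustrated below with reverse correlation — averaging white‑noise probe stimuli weighted by each neuron's response. This is a standard read‑out, but whether the paper uses exactly this estimator is an assumption. With the untrained random Θ used here the maps are noise; after training on natural patches, this same read‑out is what would reveal smooth, Gabor‑like RFs that differ visibly from the noisy columns of F_F = ΦᵀΘ.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 144, 60, 432                 # paper's dimensions
Phi = rng.standard_normal((k, m)) / np.sqrt(k)
Theta = rng.standard_normal((k, n)) * 0.1        # untrained stand-in dictionary

def ista_batch(Y, Theta, lam=0.1, n_iter=50):
    """Batch l1 sparse coding: each row of Y is one compressed input Phi x."""
    L = np.linalg.norm(Theta, 2) ** 2
    A = np.zeros((Y.shape[0], Theta.shape[1]))
    for _ in range(n_iter):
        A -= (A @ Theta.T - Y) @ Theta / L                      # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)   # soft threshold
    return A

# Reverse correlation: RF_i is the average probe stimulus weighted by response a_i.
N = 200
X = rng.standard_normal((N, m))        # white-noise probe stimuli
A = ista_batch(X @ Phi.T, Theta)       # responses to the compressed stimuli
RF = X.T @ A / N                       # (m, n): one m-pixel receptive field per neuron
FF = Phi.T @ Theta                     # effective feed-forward weights F_F = Phi^T Theta
```

Reshaping a column of RF (or FF) to 12 × 12 gives the spatial map for one neuron; the paper's central observation is that after learning the RF maps are smooth while the FF maps stay amorphous.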
Reconstruction quality is assessed via the signal‑to‑noise ratio (SNR). ACS achieves a mean SNR comparable to conventional compressed sensing with a fixed dictionary, but with lower variance across patches, indicating more stable performance. Importantly, ACS can reconstruct images using the learned receptive fields, even though the original dictionary Ψ is not directly available (recovering Ψ from Θ = ΦΨ is an ill‑posed inverse problem).
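A common definition of reconstruction SNR, assumed here to match the paper's metric, is ten times the base‑10 log of signal power over residual power:

```python
import numpy as np

def snr_db(x, x_hat):
    """Reconstruction SNR in decibels: signal power over residual (error) power."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])                    # toy "patch"
good = x + 0.01 * np.array([1.0, -1.0, 1.0, -1.0])    # small reconstruction error
bad = x + 0.5 * np.array([1.0, -1.0, 1.0, -1.0])      # large reconstruction error
```

Higher values mean better reconstruction; the "lower variance" claim is about the spread of this quantity across test patches, not about any single reconstruction.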
The discussion emphasizes several neuroscientific implications. First, cortical regions need not receive a full, dense representation of upstream activity; a random subsampling suffices for learning efficient sparse codes, aligning with anatomical observations of sparse thalamo‑cortical and cortico‑cortical projections. Second, recurrent inhibition is not merely a competition mechanism but a structural necessity for shaping receptive fields when feed‑forward information is incomplete. Third, the compression‑expansion cycle proposed by ACS mirrors a plausible cortical communication strategy: each area compresses its local sparse representation, transmits it, and the downstream area expands it back into a new sparse code. This aligns with ideas of hierarchical processing and could extend beyond vision to auditory and somatosensory systems.
In summary, Adaptive Compressed Sensing offers a biologically plausible, mathematically grounded model that bridges the gap between efficient coding theories and realistic cortical wiring. By demonstrating that sparse, orientation‑selective receptive fields can emerge from subsampled inputs and that the model can be stacked across processing stages, the work provides a compelling framework for understanding how the brain may achieve efficient, hierarchical representation despite severe connectivity constraints.