BioNIC: Biologically Inspired Neural Network for Image Classification Using Connectomics Principles
We present BioNIC, a multi-layer feedforward neural network for emotion classification, inspired by detailed synaptic connectivity graphs from the MICrONS dataset. At the structural level, we incorporate architectural constraints derived from a single cortical column of the mouse primary visual cortex (V1): connectivity imposed via adjacency masks, laminar organization, and graded inhibition representing inhibitory neurons. At the functional level, we implement biologically inspired learning: Hebbian synaptic plasticity with homeostatic regulation, layer normalization, data augmentation to model exposure to natural variability in sensory input, and synaptic noise to model neural stochasticity. We also include convolutional layers for spatial processing, mimicking retinotopic mapping. Model performance is evaluated on the FER-2013 facial emotion recognition task and compared with a conventional baseline. Additionally, we investigate the impact of each biological feature through a series of ablation experiments. Although connectivity was restricted to a single cortical column and to biologically relevant connections, BioNIC achieved performance comparable to that of conventional models, with an accuracy of 59.77 $\pm$ 0.27% on FER-2013. Our findings demonstrate that integrating constraints derived from connectomics is a computationally plausible approach to developing biologically inspired artificial intelligence systems. This work also highlights the potential of new-generation petascale connectomics data in advancing both neuroscience modeling and artificial intelligence.
💡 Research Summary
The paper introduces BioNIC, a biologically inspired feed‑forward neural network designed for facial emotion classification on the FER‑2013 benchmark. BioNIC’s architecture is directly constrained by real synaptic connectivity data extracted from a single cortical column of the mouse primary visual cortex (V1) in the MICrONS “minnie65_public” dataset. The authors first retrieve cell‑type information (excitatory vs. inhibitory) and spatial positions, then construct two adjacency matrices: one counting synapses between neuron pairs and another summing synapse sizes. These matrices are used to create binary masks that restrict the weight matrices of the network to only those connections observed in the biological data.
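The masking step described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the toy adjacency matrix stands in for the synapse-count matrix extracted from MICrONS, and the layer sizes are arbitrary.

```python
import numpy as np

# Hypothetical sketch: derive a binary connectivity mask from a synapse-count
# adjacency matrix and use it to constrain a weight matrix, as the summary
# describes. The toy adjacency below is illustrative, not real MICrONS data.
rng = np.random.default_rng(0)

n_pre, n_post = 5, 4
# adjacency[i, j] = number of synapses observed from presynaptic neuron i
# to postsynaptic neuron j (zero means no observed connection)
adjacency = rng.integers(0, 3, size=(n_pre, n_post))

# Binary mask: a connection is allowed only where at least one synapse exists
mask = (adjacency > 0).astype(float)

# Randomly initialized weights, restricted to the biological connectivity
weights = rng.normal(size=(n_pre, n_post))
masked_weights = weights * mask

# Forward pass through the masked linear layer
x = rng.normal(size=(1, n_pre))
y = x @ masked_weights
```

In training, the same mask would be reapplied after each weight update (or to the gradients) so that non-biological connections stay zero throughout.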
The network begins with two convolutional layers (8 and 16 filters) that convert the 48 × 48 grayscale input into a 12 × 12 × 16 feature map. After the first convolution a lateral‑inhibition module mimics early retinal and thalamic competition, suppressing a neuron’s activation based on the average activity of its spatial neighbors. Hierarchical attention (channel and spatial) following the CBAM design is then applied to highlight salient features.
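The lateral-inhibition step can be sketched as below. This is an assumed formulation: the 3 × 3 neighborhood, edge padding, and the suppression strength `beta` are illustrative choices, since the summary only states that a unit is suppressed in proportion to the average activity of its spatial neighbors.

```python
import numpy as np

# Hypothetical sketch of lateral inhibition: each unit's activation is
# reduced in proportion to the mean activity of its 8 spatial neighbors.
# `beta` (suppression strength) is an assumed hyperparameter.
def lateral_inhibition(feature_map, beta=0.5):
    h, w = feature_map.shape
    padded = np.pad(feature_map, 1, mode="edge")
    out = np.empty_like(feature_map)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            # Mean over the 8 neighbors, excluding the center unit itself
            neighbor_mean = (window.sum() - padded[i + 1, j + 1]) / 8.0
            out[i, j] = feature_map[i, j] - beta * neighbor_mean
    return out

fmap = np.ones((4, 4))
inhibited = lateral_inhibition(fmap)  # uniform input -> uniform suppression
```

A uniformly active map is suppressed everywhere equally, while isolated peaks retain more of their activation than their surroundings, which is the competitive effect the module is meant to produce.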
The core of BioNIC consists of four "biological layers" (A‑D) that correspond to V1 laminae (4, 2/3, 5, and 6). The number of neurons in each layer matches the actual count of excitatory cells reported for that lamina (e.g., 266 neurons in layer A, 349 in layer B, etc.). Inter‑layer connections are masked by the adjacency matrix (Mk), and intra‑layer connections are likewise masked (Nk) to preserve within‑layer circuitry. Graded inhibition is implemented by counting the incoming inhibitory connections of each neuron (Ik), normalizing this count, and scaling the neuron's output with a learnable factor α, i.e., s(k) = 1 − α · Ik / (max(Ik) + ε), where ε prevents division by zero. This mechanism prevents runaway excitation and introduces a biologically plausible excitatory‑inhibitory balance.
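The graded-inhibition scaling can be sketched directly from the formula above. In the paper α is learnable; here it is a fixed scalar for illustration, and the inhibitory in-degree counts are toy values.

```python
import numpy as np

# Hypothetical sketch of graded inhibition: each neuron's output is scaled by
# s_k = 1 - alpha * I_k / (max(I) + eps), where I_k counts that neuron's
# incoming inhibitory connections. `alpha` is learnable in the paper; here it
# is a fixed scalar chosen for illustration.
def graded_inhibition_scale(inhib_counts, alpha=0.3, eps=1e-8):
    inhib_counts = np.asarray(inhib_counts, dtype=float)
    return 1.0 - alpha * inhib_counts / (inhib_counts.max() + eps)

counts = np.array([0, 2, 4])              # inhibitory in-degree per neuron
scales = graded_inhibition_scale(counts)  # ~ [1.0, 0.85, 0.7]
activations = np.ones(3) * scales         # apply scaling to neuron outputs
```

Neurons receiving no inhibitory input pass through unscaled (s = 1), while the most-inhibited neuron is attenuated the most (s = 1 − α), which is the excitatory‑inhibitory balance the mechanism encodes.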
Training uses standard cross‑entropy loss and the Adam optimizer with L2 weight decay. Data augmentation (random resized crops, rotations, horizontal flips, brightness/contrast jitter) and synaptic noise (Gaussian perturbation of weights) are added to emulate natural sensory variability and neural stochasticity. The model is evaluated on FER‑2013, achieving an accuracy of 59.77 ± 0.27 %, statistically comparable to a conventional CNN baseline (≈60 %). Ablation studies show that removing the connectivity masks, graded inhibition, attention modules, or synaptic noise each degrades performance by 0.9–2.1 percentage points, confirming that each biologically motivated component contributes positively.
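The synaptic-noise component can be sketched as a Gaussian perturbation of the weights applied at each training forward pass. The noise scale `sigma` is an assumed hyperparameter; the summary specifies only that the perturbation is Gaussian.

```python
import numpy as np

# Hypothetical sketch of synaptic noise: perturb the weights with Gaussian
# noise on each forward pass during training to emulate neural stochasticity.
# The noise scale `sigma` is an assumption, not a value from the paper.
def noisy_forward(x, weights, sigma=0.01, rng=None):
    rng = rng or np.random.default_rng()
    noisy_weights = weights + rng.normal(0.0, sigma, size=weights.shape)
    return x @ noisy_weights

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 2))
x = rng.normal(size=(1, 3))
y = noisy_forward(x, w, sigma=0.01, rng=rng)  # differs slightly from x @ w
```

At evaluation time the noise would typically be disabled (i.e., the clean weights used), analogous to how dropout is switched off at inference.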
The authors discuss limitations: only a single cortical column is modeled, higher‑order visual and affective areas (e.g., insula, amygdala) are omitted, and the learning rule remains back‑propagation despite claims of Hebbian plasticity. They suggest future work to incorporate multiple columns, inter‑area connections, and genuine Hebbian or meta‑plasticity mechanisms, which could yield both higher biological fidelity and improved performance.
Overall, BioNIC demonstrates that integrating high‑resolution connectomics data as structural priors is feasible and can produce competitive performance on a real‑world computer‑vision task, opening a pathway for more tightly coupled neuroscience‑AI research.