A neuromorphic hardware framework based on population coding
In the biological nervous system, large neuronal populations work collaboratively to encode sensory stimuli. These populations are characterised by a diverse distribution of tuning curves, ensuring that the entire range of input stimuli is encoded. Based on these principles, we have designed a neuromorphic system called a Trainable Analogue Block (TAB), which encodes given input stimuli using a large population of neurons with a heterogeneous tuning-curve profile. Heterogeneity of tuning curves is achieved by exploiting random device mismatch inherent in the VLSI (Very Large Scale Integration) fabrication process and by adding a systematic offset to each hidden neuron. Here, we present measurement results from a single test cell fabricated in a 65 nm technology to verify the TAB framework. We have mimicked a large population of neurons by re-using measurement results from the test cell while varying its systematic offset, and thus demonstrate the learning capability of the system on various regression tasks. The TAB system may pave the way to improved analogue circuit design for commercial applications by rendering circuits insensitive to the random mismatch that arises from the manufacturing process.
💡 Research Summary
The paper presents a novel neuromorphic hardware architecture called the Trainable Analogue Block (TAB), which directly translates the biological principle of population coding into an analog VLSI implementation. In biological systems, large ensembles of neurons encode sensory inputs through heterogeneous tuning curves; the collective response can be decoded by linearly combining the individual firing rates. The authors mimic this strategy by constructing a three‑layer feed‑forward network (input, hidden, output) that follows the Linear Solutions of Higher Dimensional Interlayers (LSHDI) framework.
The hidden layer consists of a large number of analog “neurons” implemented as differential MOSFET pairs (M1, M2). Each pair receives the input voltage (Vin) and a fixed reference voltage (Vref). Operating in weak inversion and saturation, the differential currents follow a hyperbolic‑tangent (tanh) relationship, providing a smooth, sigmoidal non‑linearity. Crucially, Vref is deliberately varied from neuron to neuron, creating a systematic offset that shifts each neuron’s tanh curve along the input axis. In addition to this deterministic offset, the unavoidable random mismatch among transistors (threshold voltage, β‑factor, etc.) further diversifies the tuning curves. This dual source of heterogeneity reproduces the essential property of biological population coding: a broad, overlapping set of response curves that collectively span the entire input range.
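As an illustration, such a heterogeneous tanh population can be sketched in NumPy. The mismatch magnitudes, the offset range, and the `2·V_T` denominator of the weak-inversion tanh are placeholder assumptions for the sketch, not values taken from the chip:

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 100   # size of the mimicked hidden population (illustrative)
V_T = 0.026       # thermal voltage kT/q at room temperature, in volts

# Systematic offsets: Vref is swept across the input range so that each
# neuron's tanh curve is centred on a different input value.
vref = np.linspace(-0.2, 0.2, N_NEURONS)

# Random mismatch: threshold-voltage and beta variations perturb each
# neuron's offset and gain (the magnitudes here are placeholders).
vt_mismatch = rng.normal(0.0, 0.01, N_NEURONS)
gain = 1.0 + rng.normal(0.0, 0.05, N_NEURONS)

def hidden_responses(v_in):
    """tanh tuning curves of the whole population for input voltage(s) v_in."""
    v_in = np.atleast_1d(v_in)[:, None]            # shape (n_inputs, 1)
    return gain * np.tanh((v_in - vref - vt_mismatch) / (2 * V_T))
```

Evaluating `hidden_responses` over a sweep of input voltages yields one shifted sigmoid per neuron, i.e. the overlapping family of tuning curves the text describes.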
The output layer implements linear weighting of the hidden‑layer currents. Weights are encoded as a 13‑bit binary number that controls a current‑splitting network (an R‑2R ladder realized with MOSFETs). By turning on/off successive branches of the ladder, the circuit distributes a fraction of each hidden neuron’s current to the output node, effectively multiplying the hidden activation by a programmable scalar. The authors show through simulation that 11‑bit resolution is already sufficient for most regression tasks; the extra bits provide only marginal improvement.
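A minimal sketch of what 13-bit weight resolution implies numerically; the sign/magnitude split and the full-scale value `w_max` are assumptions of this sketch, and the chip's actual ladder encoding may differ in detail:

```python
import numpy as np

N_BITS = 13  # weight resolution of the current-splitting network

def quantize_weight(w, n_bits=N_BITS, w_max=1.0):
    """Map a real-valued weight onto the nearest n-bit code.

    One bit is assumed to carry the sign and w_max is an assumed full
    scale, so the weight step is w_max / 2**(n_bits - 1).
    """
    levels = 2 ** (n_bits - 1)
    code = np.clip(np.round(w / w_max * levels), -levels, levels - 1)
    return code * w_max / levels
```

Under these assumptions a trained weight is reproduced to within half a least-significant bit, which is consistent with the authors' observation that resolution beyond about 11 bits yields only marginal gains.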
Training is performed offline. For a given set of training inputs, the hidden‑layer responses are measured, forming a matrix H. The desired output vector T (e.g., samples of a target function) is known. The optimal linear weights β are obtained analytically via the Moore‑Penrose pseudoinverse: β = H⁺·T. Because the hidden‑layer mapping is fixed (random weights and offsets are not updated), learning reduces to a single matrix operation, which can be executed on a conventional computer and then programmed back into the hardware by setting the binary weight registers.
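The offline training step reduces to a single pseudoinverse, which can be sketched with a synthetic stand-in for the measured response matrix H (the tanh offsets, gains, and sharpness below are illustrative, not chip measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for measured hidden responses: tanh curves with
# heterogeneous offsets and gains (values are illustrative only).
x = np.linspace(-1.0, 1.0, 200)                    # training inputs
offsets = np.linspace(-1.0, 1.0, 50)               # systematic offsets
gains = 1.0 + 0.1 * rng.standard_normal(50)        # random mismatch
H = gains * np.tanh(4.0 * (x[:, None] - offsets))  # response matrix, (200, 50)

T = np.sin(np.pi * x)                              # target output vector

beta = np.linalg.pinv(H) @ T                       # one-shot offline learning
mse = np.mean(((H @ beta) - T) ** 2)               # training error
```

Because the hidden mapping is fixed, this one matrix operation is the entire learning procedure; the resulting `beta` would then be quantized and written into the binary weight registers.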
A prototype test chip was fabricated in a 65 nm CMOS process. The chip contains a single hidden neuron block and a corresponding output‑weight block, forming a single‑input‑single‑output (SISO) system. Measurements confirm that varying Vref indeed shifts the tanh curve, and that the current‑splitting network produces the expected linear scaling of the hidden output. Using the measured hidden responses, the authors trained the system on several regression problems, including sinusoidal, polynomial, and logarithmic functions. The reconstructed outputs closely follow the target curves, with mean‑square errors well within the range expected for analog implementations.
The authors also provide a mathematical justification for the necessity of heterogeneous tuning curves. If all hidden neurons shared identical activation functions, the hidden response matrix H would be rank‑deficient, making the pseudoinverse ill‑conditioned and preventing accurate learning. By ensuring that each neuron’s activation is offset differently (both systematically and through random mismatch), H attains full rank with high probability, guaranteeing a stable solution for β.
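This rank argument is easy to check numerically with a toy response matrix (again a sketch, not the chip's actual responses): identical activation functions collapse H to rank one, while distinct offsets restore full column rank.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 100)[:, None]

# Identical activations: every column is the same tanh curve -> rank 1,
# so the pseudoinverse is ill-conditioned and learning fails.
H_identical = np.tanh(4.0 * x) * np.ones((1, 20))

# Heterogeneous activations: distinct offsets make the columns linearly
# independent, giving full column rank.
offsets = np.linspace(-1.0, 1.0, 20)
H_diverse = np.tanh(10.0 * (x - offsets))

rank_identical = np.linalg.matrix_rank(H_identical)  # -> 1
rank_diverse = np.linalg.matrix_rank(H_diverse)      # -> 20 (full)
```

Random mismatch plays the same role as the deliberate offsets here: any perturbation that decorrelates the columns pushes H toward full rank with high probability.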
Compared with prior analog neural hardware, TAB flips the conventional design goal: instead of minimizing mismatch, it embraces it as a resource. This approach reduces the need for large device geometries (which would otherwise improve matching at the cost of area and power) and makes the architecture naturally scalable with technology nodes. Moreover, because the hidden layer is fixed after fabrication, the same silicon can be re‑trained for different tasks simply by reprogramming the binary weights, dramatically shortening design cycles and lowering cost. The low‑power, high‑density nature of the circuit makes it attractive for embedded applications such as sensor networks, aerospace, and edge AI where power and area budgets are tight.
In conclusion, the TAB framework demonstrates that stochastic variations inherent to modern nanometer CMOS processes can be harnessed to implement robust, trainable neuromorphic systems. By combining random device mismatch with a controllable systematic offset, the architecture achieves the diversity required for population coding, while retaining the simplicity of linear readout and offline training. This work opens a pathway toward “stochastic electronics,” where variability is no longer a flaw to be mitigated but a design feature that enables efficient, adaptable analog computation.