Classification and reconstruction for single-pixel imaging with classical and quantum neural networks
Single-pixel cameras are an effective solution for imaging outside the visible spectrum, where traditional CMOS/CCD cameras struggle. When combined with machine learning, they can analyze images quickly enough for practical applications. Quantum machine learning could potentially accelerate high-dimensional single-pixel imaging, expanding the range of tractable practical problems. In this work, we simulated a single-pixel imaging experiment using Hadamard basis patterns, with images from the MNIST handwritten-digit dataset and the FashionMNIST clothing dataset as objects. We selected the 64 measurements with maximum variance (6% of the number of pixels in the image). We created algorithms for classifying and reconstructing images from these measurements using classical fully connected neural networks and parameterized quantum circuits. After 6 training epochs, the best classical and quantum classifiers reached accuracies of 96% and 95% on MNIST and 84% and 81% on FashionMNIST, respectively, which is a competitive result. Where the parameter counts of the quantum and classical classifiers overlap, the quantum model performs no worse than the classical one, and even outperforms it by about 1–3%. Image reconstruction was also demonstrated using classical and quantum neural networks after 10 training epochs; the best structural similarity index measure (SSIM) values were 0.76 and 0.26 for MNIST and 0.73 and 0.22 for FashionMNIST, respectively, indicating that, in this formulation and configuration, the reconstruction task is still too difficult for quantum neural networks.
💡 Research Summary
This paper investigates the use of both classical fully‑connected neural networks (FCNNs) and parameterised quantum circuits (PQCs) for classifying and reconstructing images obtained from a simulated single‑pixel camera. The authors employ Hadamard basis patterns to probe 32 × 32 grayscale images from the MNIST and Fashion‑MNIST datasets. From the full set of 1,024 Hadamard measurements per image, they select the 64 patterns with the highest variance across the training set (approximately 6 % of the total), thereby creating a highly compressed measurement vector that serves as the input feature for both models.
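The measurement pipeline described above — probing each image with all 1,024 Hadamard patterns and keeping only the 64 highest-variance coefficients — can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; `hadamard_matrix` and `hadamard_measurements` are hypothetical helper names, and the Hadamard matrix is built by the standard Sylvester recursion.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

def hadamard_measurements(images, n_keep=64):
    """Simulate single-pixel bucket readings for a stack of images
    (shape (N, 32, 32)) and keep the n_keep Hadamard coefficients
    with the highest variance across the dataset."""
    n_pix = images.shape[1] * images.shape[2]   # 1024 for 32x32 images
    H = hadamard_matrix(n_pix)                  # each row is one illumination pattern
    flat = images.reshape(len(images), n_pix)
    coeffs = flat @ H.T                         # all 1024 readings per image
    variances = coeffs.var(axis=0)              # variance of each pattern over the set
    keep = np.argsort(variances)[-n_keep:]      # indices of the top-variance patterns
    return coeffs[:, keep], keep
```

Selecting patterns by variance over the training set is a form of data-driven compressive sensing: the retained 64 coefficients carry most of the dataset's variability while discarding 94% of the measurements.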
For the classical approach, a simple FCNN is built: a 64‑input layer, one hidden layer of variable size, and a 10‑output softmax layer for classification; a deeper network (up to four hidden layers) is used for reconstruction, outputting a 1,024‑dimensional vector reshaped to a 32 × 32 image. Training uses the Adam optimizer (learning rate 1e‑4 for classification, 1e‑2 for reconstruction) with cross‑entropy loss for classification and mean‑squared error (MSE) for reconstruction.
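A forward pass of the classification network (64 inputs, one hidden layer, 10-way softmax output) might look like the NumPy sketch below. The hidden size of 128 is an illustrative choice, since the paper varies it; the function names are hypothetical, and training (Adam, cross-entropy) is omitted.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def init_fcnn(n_in=64, n_hidden=128, n_out=10, seed=0):
    """He-initialised weights for a 64 -> n_hidden -> 10 classifier."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, np.sqrt(2.0 / n_hidden), (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def fcnn_forward(params, x):
    """x: batch of measurement vectors, shape (batch, 64) -> class probabilities."""
    h = relu(x @ params["W1"] + params["b1"])
    return softmax(h @ params["W2"] + params["b2"])
```

The reconstruction network follows the same pattern but with up to four hidden layers and a 1,024-dimensional linear output reshaped to 32 × 32.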
The quantum approach encodes the 64‑dimensional measurement vector into six qubits via amplitude embedding, normalising the vector so that ∑|x_i|² = 1. The PQC consists of alternating layers of parameterised single‑qubit rotations (Ry, Rz) and entangling CNOT gates, providing non‑linearity through quantum entanglement. Training also uses Adam (learning rate 1e‑3 for both tasks). Classification employs a margin loss with a fixed margin Δ = 0.15, while reconstruction again uses MSE.
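Two ingredients of the quantum pipeline are easy to make concrete: the amplitude-embedding normalisation (a 64-dimensional vector fits exactly into the 2⁶ amplitudes of six qubits) and a margin loss with Δ = 0.15. The sketch below shows one common multi-class margin (hinge) formulation; the paper's exact loss may differ, and the function names are hypothetical.

```python
import numpy as np

def amplitude_embed(x, eps=1e-12):
    """L2-normalise a 64-dim measurement vector so it can serve as the
    amplitudes of a 6-qubit state (2**6 = 64): sum |x_i|^2 == 1."""
    x = np.asarray(x, dtype=float)
    return x / (np.linalg.norm(x) + eps)

def margin_loss(scores, y_true, margin=0.15):
    """Multi-class margin loss: penalise any wrong-class score that comes
    within `margin` of the true-class score (zero loss otherwise)."""
    idx = np.arange(len(scores))
    correct = scores[idx, y_true][:, None]
    hinge = np.maximum(0.0, scores - correct + margin)
    hinge[idx, y_true] = 0.0          # no penalty for the true class itself
    return hinge.sum(axis=1).mean()
```

Note that amplitude embedding discards the overall scale of the measurement vector, one of the information-loss mechanisms the authors later cite when discussing reconstruction quality.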
Results show that, after only six training epochs, the classical classifier reaches 96 % accuracy on MNIST and 84 % on Fashion‑MNIST, while the quantum classifier attains 95 % and 81 % respectively. When the number of trainable parameters is matched (≈2 k parameters), the quantum model slightly outperforms the classical one by 1–3 % in accuracy, suggesting that quantum superposition and entanglement can provide a modest advantage in highly compressed feature spaces.
In contrast, image reconstruction performance is markedly lower for the quantum model. After ten epochs, the structural similarity index measure (SSIM) for the classical reconstructor is 0.76 (MNIST) and 0.73 (Fashion‑MNIST), whereas the quantum reconstructor yields SSIM values of only 0.26 and 0.22. The authors attribute this gap to several factors: (i) the shallow depth of the simulated quantum circuits (limited to ~10–15 gates), (ii) information loss inherent in amplitude embedding, and (iii) the limited expressive power of current PQCs for high‑dimensional regression tasks.
The paper situates its contributions within the broader literature on single‑pixel imaging, compressive sensing, and quantum machine learning. Prior works have demonstrated pattern optimisation and deep learning for rapid reconstruction, while recent QML studies have reported quantum advantage in classification tasks using quantum neural networks, quantum convolutional networks, and hybrid quantum‑classical architectures. By applying QML to the specific constraints of single‑pixel imaging, this study provides empirical evidence that quantum models can compete with classical counterparts in classification under severe parameter constraints, but they still lag behind for reconstruction.
The authors conclude that future research should explore more efficient data‑to‑qubit encodings (e.g., angle embedding or dimensionality‑reduction preprocessing), deeper and more expressive quantum circuits, and hybrid architectures that combine quantum layers with classical non‑linearities. Such advances could close the performance gap in reconstruction and unlock practical quantum‑enhanced processing for low‑cost, single‑pixel sensors operating outside the visible spectrum.