Experimental quantum reservoir computing with a circuit quantum electrodynamics system
Quantum reservoir computing is a machine learning framework that offers ease of training compared to other quantum neural networks, as it does not rely on gradient-based optimization. Learning is performed in a single step on the output features measured from the quantum system. Various implementations of quantum reservoir computing have been explored in simulations, with different measured features. Although simulations have shown that quantum reservoirs present advantages in performance compared to classical reservoirs, experimental implementations have remained scarce. This is due to the challenge of obtaining a large number of output features that are nonlinear transformations of the input data. In this work, we show that even with a circuit quantum electrodynamics system as simple as a single transmon coupled to a readout resonator, we can implement a proof-of-concept realization of quantum reservoir computing. We obtain a large number of nonlinear features from a single physical system by encoding the input data in the amplitude of a coherent drive and measuring the cavity state in the Fock basis. We demonstrate two classical machine learning tasks, waveform classification and chaotic time-series prediction, with significantly smaller hardware resources and fewer measured features compared to classical neural networks. Our experimental results are supported by numerical simulations that show additional Kerr nonlinearity is beneficial to reservoir performance. Our work demonstrates a hardware-efficient quantum neural network implementation that can be further scaled up and generalized to other quantum machine learning models.
💡 Research Summary
Quantum reservoir computing (QRC) offers a compelling alternative to conventional quantum neural networks by sidestepping the need for gradient‑based training; only a linear read‑out layer is optimized. While numerous theoretical studies have highlighted the potential advantages of QRC—particularly its ability to exploit quantum coherence for richer nonlinear transformations and memory—experimental demonstrations have been scarce, mainly because extracting a large set of nonlinear output features from a quantum system is technically demanding.
In this work, Carles et al. present a hardware‑efficient, proof‑of‑concept implementation of QRC using a minimal circuit‑QED platform: a single transmon qubit capacitively coupled to a λ/2 superconducting resonator that serves as the reservoir. The system Hamiltonian includes a dispersive qubit‑cavity interaction (χ ≈ 2π × 22.29 MHz), a self‑Kerr term for the cavity (K_cc ≈ ‑2π × 300 kHz), and a cross‑Kerr correction (K_cq ≈ ‑2π × 0.44 MHz). Input data are encoded in the amplitude α_in of a 200 ns coherent displacement pulse applied to the cavity at its resonance frequency. After the displacement, a photon‑number‑selective π_n pulse is applied to the transmon, followed by a high‑power readout that yields the occupation probability P_n of the cavity’s Fock states |n⟩ for n = 0…4.
Because the occupation probabilities of a coherent state follow a Poisson distribution, they provide a highly nonlinear mapping from the input amplitude to measurable quantities. The intrinsic Kerr nonlinearity perturbs the Poisson statistics, an effect that the authors model with a Lindblad master‑equation simulation, extracting an effective total decay rate κ_tot ≈ 2π × 560 kHz and confirming the Kerr coefficient. The Kerr‑induced frequency shift (≈ 1.7 MHz across the encoding range) is larger than the cavity linewidth, prompting the use of short pulses whose spectral width (≈ 5 MHz) comfortably covers the shifted resonance.
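The Poissonian mapping from drive amplitude to Fock-state populations can be made concrete with a short sketch. For an ideal coherent state |α⟩ (ignoring the Kerr perturbation described above), the occupation probabilities are P_n = e^{−|α|²} |α|^{2n} / n!, which is already a strongly nonlinear function of the encoded amplitude:

```python
import numpy as np
from math import factorial

def fock_features(alpha, n_max=4):
    """Fock-state occupation probabilities P_n of an ideal coherent
    state |alpha>: a Poisson distribution in n, and hence a nonlinear
    map from the drive amplitude to the measured features.
    Illustrative only; the experiment's Kerr terms perturb this."""
    nbar = abs(alpha) ** 2                      # mean photon number |alpha|^2
    return np.array([np.exp(-nbar) * nbar**n / factorial(n)
                     for n in range(n_max + 1)])

# Five features P_0 ... P_4, as measured in the experiment
probs = fock_features(1.2)
```

Truncating at n = 4 discards a small tail of the distribution, which is why the five probabilities sum to slightly less than one for moderate amplitudes.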
Feature extraction proceeds by measuring each of the five Fock‑state probabilities at four distinct times after the start of the displacement (t_i = i × 50 ns, i = 1…4), yielding 20 raw features. A linear read‑out layer is trained via ridge regression (minimizing ‖Y − F W‖² + β‖W‖²) on a dataset of 400 randomly generated sine and square wave periods, each discretized into eight points. With all 20 features the system attains 99.8 % classification accuracy on a held‑out test set; remarkably, retaining only the eight most informative features still yields 99.5 % accuracy, a hardware efficiency well beyond classical reservoir computers, which typically require ≥ 25 neurons for comparable performance.
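The single-step training has the familiar closed form W = (FᵀF + βI)⁻¹FᵀY. A minimal sketch with synthetic stand-in features (illustrative random data, not the experimental dataset):

```python
import numpy as np

def train_readout(F, Y, beta=1e-3):
    """Closed-form ridge regression for the linear read-out layer:
    minimizes ||Y - F W||^2 + beta ||W||^2 over the weights W."""
    n_feat = F.shape[1]
    return np.linalg.solve(F.T @ F + beta * np.eye(n_feat), F.T @ Y)

# Toy stand-in for the 20 measured features and binary class labels
# (synthetic data; the paper uses 400 sine/square waveform samples).
rng = np.random.default_rng(0)
F = rng.normal(size=(400, 20))                 # 400 samples x 20 features
w_true = rng.normal(size=(20, 1))
Y = np.sign(F @ w_true)                        # linearly separable +-1 labels
W = train_readout(F, Y)
acc = np.mean(np.sign(F @ W) == Y)             # training accuracy
```

Because the fit is a single linear solve rather than an iterative gradient descent, training cost is negligible compared to data acquisition, which is the central practical appeal of reservoir computing.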
The authors also explore the impact of experimental imperfections. Thermal noise, finite π_n‑pulse fidelity, and qubit relaxation during the waiting interval introduce an affine distortion p_s(n) = a P_n + b in the measured probabilities. Because the read‑out layer is linear, the training process can compensate for these systematic biases, though they increase the number of repetitions needed to achieve a given statistical precision. Decoherence is examined through simulations that vary the qubit dephasing rate κ_φ; accuracy degrades sharply as κ_φ grows, underscoring the importance of preserving quantum coherence for both nonlinearity and memory.
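That a linear read-out with an intercept absorbs an affine feature distortion p_s(n) = a P_n + b can be checked numerically. The sketch below uses random stand-in features and shows that fitting on ideal and distorted features yields identical predictions:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(200, 5))                  # stand-in for ideal features P_n
a, b = 0.8, 0.05                               # hypothetical affine distortion
F_dist = a * F + b                             # measured features p_s = a*P + b

def ridge_fit_predict(F, Y, beta=1e-8):
    # Append a constant column so an intercept can absorb the offset b.
    Fb = np.hstack([F, np.ones((F.shape[0], 1))])
    W = np.linalg.solve(Fb.T @ Fb + beta * np.eye(Fb.shape[1]), Fb.T @ Y)
    return Fb @ W

Y = rng.normal(size=(200, 1))                  # arbitrary targets
pred_ideal = ridge_fit_predict(F, Y)
pred_dist = ridge_fit_predict(F_dist, Y)
# The distorted features span the same affine subspace, so training
# compensates the systematic bias exactly (up to regularization):
assert np.allclose(pred_ideal, pred_dist, atol=1e-5)
```

The compensation is exact in the fit, but, as the authors note, the distortion still shrinks the signal relative to shot noise, so more measurement repetitions are needed for the same statistical precision.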
A second benchmark, Mackey‑Glass chaotic time‑series prediction, demonstrates the reservoir’s temporal processing capabilities. The authors feed the system with a sliding window of 20 past values, encode each point as a displacement amplitude, and measure five Fock‑state probabilities 100 ns after the encoding pulse. Using normalized root‑mean‑square error (NRMSE) as the metric, they find minima at prediction delays that are integer multiples of the series’ quasi‑oscillation period, reflecting the reservoir’s ability to capture periodic correlations. In the strongly chaotic regime, the NRMSE quickly converges to a constant baseline, as expected for a system with limited predictive horizon.
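The benchmark series and metric are standard in reservoir computing. A sketch using the textbook Mackey-Glass parameters (τ = 17 and a simple Euler step; the paper's exact settings may differ) together with the NRMSE definition:

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Euler integration of the Mackey-Glass delay equation
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t).
    Textbook parameters (tau = 17 gives mildly chaotic dynamics);
    illustrative, not necessarily the paper's settings."""
    hist = int(tau / dt)                       # delay buffer length
    x = np.full(n_steps + hist, x0)            # constant initial history
    for t in range(hist, n_steps + hist - 1):
        xd = x[t - hist]                       # delayed value x(t - tau)
        x[t + 1] = x[t] + dt * (beta * xd / (1 + xd**n) - gamma * x[t])
    return x[hist:]

def nrmse(y_pred, y_true):
    """Normalized root-mean-square error, the prediction metric."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2)) / np.std(y_true)

series = mackey_glass(2000)
```

Each value of such a series would be encoded as a displacement amplitude, and the read-out trained to predict the value a fixed delay ahead; the NRMSE baseline for an uninformative predictor is 1 (predicting the mean), which is the constant level the error approaches deep in the chaotic regime.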
To elucidate the role of Kerr nonlinearity, the paper presents extensive numerical sweeps of K_cc. Larger Kerr values reduce the effective drive‑cavity coupling, lowering the average photon number, yet they also enhance the nonlinearity of the output feature space. An intermediate Kerr strength yields the best trade‑off, improving both classification and time‑series prediction performance. This aligns with prior theoretical work suggesting that modest quantum nonlinearities can boost robustness and expressivity, whereas overly strong Kerr can saturate the cavity and impair read‑out fidelity.
Overall, the study demonstrates that a single transmon‑cavity module can generate a rich set of nonlinear, time‑dependent features sufficient for demanding machine‑learning tasks, using far fewer physical resources than classical reservoirs. The approach leverages measurement‑induced nonlinearity rather than relying on complex input encodings, opening a pathway toward scalable quantum neural networks. Future directions include multiplexed read‑out of multiple cavities, integration of additional qubits to increase dimensionality, and the use of deep‑learning classifiers to further reduce measurement overhead. The work thus marks a significant step toward practical, hardware‑efficient quantum machine‑learning platforms.