Scalable Tensor Network Simulation for Quantum-Classical Dual Kernel

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

This paper presents an efficient and scalable tensor network framework for quantum kernel circuit simulation, alleviating the practical costs associated with increasing qubit counts and data size. The framework enables systematic large-scale evaluation of a linearly mixed quantum-classical dual kernel with up to 784 qubits. Using Fashion-MNIST, test-set classification performance is compared among a classical kernel, a quantum kernel, and the quantum-classical dual kernel across feature dimensions from 2 to 784, with a one-to-one mapping between encoded features and qubits. Our results show that the quantum-classical dual kernel consistently outperforms both single-kernel baselines, remains stable as the dimensionality increases, and mitigates the large-scale degradation observed in the quantum kernel. Analysis of the learned mixing weights indicates that quantum contributions dominate below 128 features, while classical contributions become increasingly important beyond 128, suggesting that the classical kernel provides a stabilizing anchor against concentration effects and hardware noise while preserving quantum gains at lower dimensions.
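The linear mixing described above can be sketched as a convex combination of two precomputed Gram matrices. This is a minimal illustration, not the paper's implementation: the function name `dual_kernel` and the exact parameterization K = w·K_q + (1 − w)·K_c with a single learned scalar w are assumptions; the paper only states that the mixing weights are learned.

```python
import numpy as np

def dual_kernel(K_q: np.ndarray, K_c: np.ndarray, w: float) -> np.ndarray:
    """Convex combination of a quantum and a classical kernel matrix.

    K_q, K_c : precomputed Gram matrices of identical shape.
    w        : mixing weight in [0, 1] (assumed single-scalar form; the
               paper learns the weights, the parameterization may differ).
    """
    assert K_q.shape == K_c.shape and 0.0 <= w <= 1.0
    return w * K_q + (1.0 - w) * K_c

# Toy example with two 2x2 Gram matrices.
K_q = np.array([[1.0, 0.2], [0.2, 1.0]])
K_c = np.array([[1.0, 0.8], [0.8, 1.0]])
K = dual_kernel(K_q, K_c, w=0.75)  # off-diagonal: 0.75*0.2 + 0.25*0.8 = 0.35
```

Because a convex combination of positive semidefinite matrices is itself positive semidefinite, the mixed matrix remains a valid kernel for downstream classifiers such as a kernel SVM.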


💡 Research Summary

The paper introduces a scalable tensor‑network (TN) simulation framework combined with multi‑GPU parallelism to evaluate quantum kernel circuits at sizes far beyond the limits of conventional state‑vector simulators. By representing each gate as a tensor and optimizing contraction paths and slicing, the authors reduce both memory and computational complexity, enabling near‑noiseless computation of kernel matrix entries for up to 784 qubits. The quantum circuit architecture is deliberately chosen to be “block‑product‑state” (BPS): single‑qubit data‑encoding layers (Hadamards and data‑dependent rotations) are interleaved with locally entangling blocks. This structure preserves expressive power for image‑like data while remaining highly amenable to TN contraction.
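The gate-as-tensor idea can be illustrated on a deliberately tiny case: a single-qubit encoding layer (Hadamard followed by a data-dependent rotation), with the kernel entry obtained by contracting tensors along shared indices rather than building the full unitary. This is a sketch under stated assumptions: the H-plus-RY circuit and the fidelity kernel k(x, y) = |⟨φ(y)|φ(x)⟩|² are illustrative choices, and a real multi-qubit TN simulator would additionally optimize the contraction path and slice large contractions across GPUs.

```python
import numpy as np

# Single-qubit gates as rank-2 tensors; in a TN simulator every gate in
# the circuit is such a tensor and the contraction order is optimized.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def ry(theta: float) -> np.ndarray:
    """Data-dependent rotation RY(theta) as a tensor (illustrative encoding)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(x: float) -> np.ndarray:
    """|phi(x)> = RY(x) H |0>, computed by contracting gate tensors."""
    zero = np.array([1.0, 0.0])
    # einsum contracts along shared indices; for n qubits this generalizes
    # to a full tensor network instead of a 2^n-dimensional state vector.
    return np.einsum("ab,bc,c->a", ry(x), H, zero)

def fidelity_kernel(x: float, y: float) -> float:
    """One kernel matrix entry: k(x, y) = |<phi(y)|phi(x)>|^2."""
    return float(np.abs(encode(y).conj() @ encode(x)) ** 2)
```

The locally entangling blocks of the BPS architecture keep the network's bond dimensions small, which is what makes contractions of this kind tractable at 784 qubits.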

The experimental pipeline proceeds as follows. Fashion‑MNIST images (28×28) are flattened to 784‑dimensional vectors, standardized, decorrelated with PCA, and finally min‑max scaled to a fixed range.
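The preprocessing steps above can be sketched with plain numpy. This is a minimal sketch, not the paper's code: the function name `preprocess` is hypothetical, and the target range `[0, pi]` is an assumed placeholder since the excerpt does not state the actual min-max interval.

```python
import numpy as np

def preprocess(X_train, X_test, d, lo=0.0, hi=np.pi):
    """Flatten -> standardize -> PCA to d components -> min-max scale.

    The range [lo, hi] is an assumed placeholder; the paper's exact
    scaling interval is not given in the excerpt.
    """
    # Flatten 28x28 images to 784-dimensional vectors.
    Xtr = X_train.reshape(len(X_train), -1).astype(float)
    Xte = X_test.reshape(len(X_test), -1).astype(float)

    # Standardize with training-set statistics only.
    mu, sigma = Xtr.mean(axis=0), Xtr.std(axis=0) + 1e-12
    Xtr, Xte = (Xtr - mu) / sigma, (Xte - mu) / sigma

    # PCA via SVD of the (already centered) training matrix; keep the
    # top-d principal directions to decorrelate the features.
    _, _, Vt = np.linalg.svd(Xtr, full_matrices=False)
    Xtr, Xte = Xtr @ Vt[:d].T, Xte @ Vt[:d].T

    # Min-max scale each retained feature using training extrema, so each
    # feature maps one-to-one onto a qubit's rotation angle.
    mn, mx = Xtr.min(axis=0), Xtr.max(axis=0)
    scale = (hi - lo) / (mx - mn + 1e-12)
    return lo + (Xtr - mn) * scale, lo + (Xte - mn) * scale

# Usage on synthetic image-shaped data.
rng = np.random.default_rng(0)
A, B = preprocess(rng.random((20, 28, 28)), rng.random((5, 28, 28)), d=4)
```

Fitting the statistics, PCA basis, and extrema on the training split alone avoids leaking test-set information into the kernel features.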

