Approximating under the Influence of Quantum Noise and Compute Power
The quantum approximate optimisation algorithm (QAOA) is at the core of many scenarios that aim to combine the power of quantum computers and classical high-performance computing appliances for combinatorial optimisation. Several obstacles challenge concrete benefits now and in the foreseeable future: imperfections quickly degrade algorithmic performance below practical utility; overheads arising from alternating between classical and quantum primitives can counter any advantage; and the choice of parameters or algorithmic variant can substantially influence runtime and result quality. Selecting the optimal combination is a non-trivial issue, as it depends not only on user requirements, but also on details of the hardware and software stack. Appropriate automation can lift the burden of choosing optimal combinations from end-users: they should not be required to understand technicalities like the differences between QAOA variants, the required number of QAOA layers, or the necessary number of measurement samples. Yet they should receive the best possible satisfaction of their non-functional requirements, be it performance or otherwise. We determine factors that affect solution quality and temporal behaviour of four QAOA variants using comprehensive density-matrix-based simulations targeting three widely studied optimisation problems. Our simulations consider ideal quantum computation and a continuum of scenarios troubled by realistic imperfections. Our quantitative results, accompanied by a comprehensive reproduction package, show strong differences between QAOA variants that can be pinpointed to narrow and specific effects. We identify influential co-variables and relevant non-functional quality goals that, we argue, mark the essential ingredients for designing appropriate software engineering abstraction mechanisms and automated tool chains for devising quantum solutions from high-level problem specifications.
💡 Research Summary
This paper presents a systematic, large‑scale evaluation of the Quantum Approximate Optimisation Algorithm (QAOA) and three of its most widely studied variants—Warm‑Start Init‑QAOA, Warm‑Start QAOA, and Recursive QAOA (RQAOA)—under both ideal and noisy conditions. The authors aim to answer three practical questions that arise when integrating NISQ‑era quantum processors into high‑performance computing (HPC) environments: (1) Which QAOA variant delivers the best solution quality for a given problem and hardware profile? (2) How do the number of QAOA layers (circuit depth) and the strength of quantum noise interact to affect performance? (3) Which parameters should be exposed to an automated tool chain that can select and tune the algorithm on behalf of end‑users?
To address these questions, the study focuses on three canonical NP‑hard optimisation problems—Max‑Cut, Partition, and Vertex‑Cover—each of which admits a compact QUBO encoding requiring only one qubit per graph vertex (or per number in a Partition instance). For each problem size n ∈ {5, 6, 7, 8, 9, 10}, the authors generate 100 random instances, yielding a total of 600 instances per problem class. For every instance they run QAOA circuits with p ∈ {1, 2, 3, 4} layers, using the same classical optimiser (SciPy’s COBYLA with 1 % tolerance and a maximum of 150 iterations) and the same measurement budget (1000 circuit evaluations per optimisation iteration).
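The one-qubit-per-vertex encoding can be made concrete for Max‑Cut. The sketch below is a generic illustration, not the authors' code: it scores a spin assignment z ∈ {−1, +1}ⁿ with the standard Ising objective Σ₍ᵢ,ⱼ₎∈E (1 − zᵢzⱼ)/2 and finds the optimum by brute force, which is feasible at the instance sizes studied (n ≤ 10):

```python
import itertools

def maxcut_value(edges, z):
    """Cut value of a spin assignment z (z[i] in {-1, +1}):
    an edge (i, j) is cut exactly when z[i] != z[j]."""
    return sum((1 - z[i] * z[j]) // 2 for i, j in edges)

def brute_force_maxcut(n, edges):
    """Optimal cut by exhaustive search over all 2^n assignments,
    used only to normalise approximation quality on small instances."""
    return max(maxcut_value(edges, bits)
               for bits in itertools.product((-1, 1), repeat=n))

# 4-cycle: an alternating assignment cuts all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(brute_force_maxcut(4, edges))  # -> 4
```

The approximation quality reported in the paper is then the QAOA objective divided by this brute-force optimum.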
The quantum simulations are performed with the Eviden Qaptiva 800 platform and its QLM library, which provides a high‑performance density‑matrix simulator capable of modelling realistic noise channels. The noise model is based on the widely used Qiskit noise model for IBM superconducting devices. It combines single‑qubit depolarising channels, two‑qubit depolarising channels, and thermal‑relaxation (T₁/T₂) channels. Two scaling factors, d_D (depolarising strength) and d_TR (thermal‑relaxation strength), allow the authors to sweep continuously from a noiseless scenario (d_D = d_TR = 0) to a realistic baseline (d_D = d_TR = 1) derived from median parameters of 46 IBM Q back‑ends (gate errors ≈0.03 % for single‑qubit gates, 1 % for C‑X, T₁ ≈ T₂ ≈ 5 µs, gate times of 5 ns for single‑qubit and 35 ns for C‑X).
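How the scaling factor d_D enters can be illustrated directly on a density matrix. The following is a minimal sketch of a single-qubit depolarising channel with tunable strength; the actual QLM/Qiskit noise model additionally includes two-qubit depolarising and T₁/T₂ thermal-relaxation channels, which are omitted here:

```python
import numpy as np

def depolarize(rho, p_base, d_D):
    """Single-qubit depolarising channel with strength p_base * d_D:
    rho -> (1 - p) rho + p I/2.  d_D = 0 recovers the ideal channel,
    d_D = 1 the baseline noise level."""
    p = p_base * d_D
    dim = rho.shape[0]
    return (1 - p) * rho + p * np.eye(dim) / dim

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
ideal = depolarize(rho, 0.01, 0.0)   # unchanged
noisy = depolarize(rho, 0.01, 1.0)   # slightly mixed
print(np.trace(noisy).real)          # trace is preserved: 1.0
```

Sweeping d_D (and analogously d_TR) between 0 and 1 produces the continuum of noise scenarios studied in the paper.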
Performance is measured in two dimensions. Approximation quality is defined as the ratio of the obtained objective value to the optimal value (or a problem‑specific normalisation, e.g., reciprocal cover size for Vertex‑Cover). Execution time is estimated by summing gate durations (using the baseline timings) and a median measurement time of 4.09 µs, multiplied by the number of circuit evaluations required per optimisation iteration.
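Under these assumptions the runtime estimate reduces to simple arithmetic. The sketch below is our reconstruction of the stated model, not the authors' code, using the baseline timings from the paper (5 ns single-qubit gates, 35 ns C‑X, 4.09 µs per measurement):

```python
def circuit_time_us(n_1q, n_cx, shots,
                    t_1q=0.005, t_cx=0.035, t_meas=4.09):
    """Estimated time (in microseconds) for one optimisation iteration:
    gate durations plus one measurement per shot, multiplied by the
    number of circuit evaluations (shots)."""
    return (n_1q * t_1q + n_cx * t_cx + t_meas) * shots

# A shallow circuit with an illustrative gate count:
print(circuit_time_us(n_1q=20, n_cx=10, shots=1000))  # about 4540 µs
```

With these numbers the 4.09 µs readout accounts for roughly 90 % of each shot, consistent with the paper's observation that measurement latency is a non-trivial overhead for shallow circuits.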
Key findings can be summarised as follows:
- Depth‑Noise Trade‑off – In the ideal (noiseless) regime, increasing the number of layers monotonically improves approximation quality, confirming the theoretical expressive power of deeper QAOA circuits. Once realistic noise is introduced, however, the benefit of additional layers quickly vanishes. For d_D·d_TR ≥ 0.5, the standard QAOA’s quality drops below 0.7 as early as p = 3, and further depth leads to a steep decline. This reflects the exponential degradation of state fidelity with circuit depth under depolarising and thermal‑relaxation noise.
- Warm‑Start Variants Mitigate Noise – Both Warm‑Start Init‑QAOA (which prepares a biased initial state based on a classical approximate solution) and Warm‑Start QAOA (which additionally modifies the mixer Hamiltonian) achieve markedly higher robustness. Even with moderate noise (d_D·d_TR ≈ 0.6), they retain quality above 0.85 for p = 2, outperforming the standard QAOA by roughly 15 percentage points. The advantage stems from a reduced search space: the initial state already encodes a good approximation, so the quantum circuit only needs to refine, rather than discover, the solution.
- Recursive QAOA Shows Distinct Behaviour – RQAOA iteratively extracts the most “conclusive” term of the problem Hamiltonian by sampling the quantum state, fixes the corresponding variable, and reduces the problem size. Because the algorithm relies heavily on classical post‑processing, its performance is less sensitive to circuit depth. With a fixed sample size of ten measurements per iteration, RQAOA attains an average quality of ≈0.78 on the larger instances (n = 9, 10) even when d_D·d_TR = 0.8. This suggests that RQAOA can be a viable strategy when hardware noise is high and deep circuits are infeasible.
- Execution‑Time Bottlenecks – The dominant contributor to wall‑clock time is the two‑qubit C‑X gate, which accounts for roughly 60 % of total execution time in the simulated circuits. Since C‑X error rates are an order of magnitude larger than single‑qubit error rates, any hardware improvement that reduces C‑X duration or error would have a disproportionate impact on overall throughput. The authors also note that measurement latency (≈4 µs) and the fixed 1000‑evaluation budget per optimisation step constitute a non‑trivial overhead, especially for shallow circuits where the quantum portion is fast.
- Guidelines for Automated Tool Chains – Building on the quantitative data, the authors propose a “quality‑time‑noise‑resistance” triangle as a decision‑making framework. End‑users specify a priority (e.g., minimise time, maximise quality, or tolerate higher noise), and a lightweight regression model trained on the simulation data predicts, for a given noise level, the optimal combination of QAOA variant and number of layers. This model can be embedded in a compiler or runtime system that automatically selects the most appropriate configuration, without requiring the user to understand the underlying quantum‑hardware intricacies.
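The depth-noise trade-off has a simple back-of-the-envelope explanation: fidelity decays roughly geometrically in the total gate count, so each added layer multiplies it by a constant factor below one. The following toy model is ours, not the paper's, and the gate count and error rate are purely illustrative:

```python
def fidelity_estimate(p_layers, gates_per_layer=30, err_per_gate=0.01):
    """First-order model: each gate succeeds with probability
    (1 - err), so state fidelity shrinks geometrically with depth."""
    return (1 - err_per_gate) ** (p_layers * gates_per_layer)

# Fidelity falls from roughly 0.74 at p = 1 to roughly 0.30 at p = 4.
for p in (1, 2, 3, 4):
    print(p, round(fidelity_estimate(p), 3))
```

Any expressiveness gained from an extra layer must outweigh this decay, which is why additional depth stops paying off once noise is moderate.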
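The warm-start idea of encoding a classical approximation into the initial state can be sketched as a biased product state. The helper and the regularisation parameter `eps` below are illustrative, not the paper's implementation; they follow the common recipe of rotating each qubit towards the relaxed solution value cᵢ while keeping it away from the poles so that the mixer can still act:

```python
import numpy as np

def warm_start_state(c_star, eps=0.25):
    """Biased product state: qubit i carries amplitude sqrt(c_i) on |1>,
    where c_i is the (regularised) classical relaxation value.  The
    clip keeps every rotation angle away from 0 and pi, a common
    heuristic so the circuit can still move the state."""
    c = np.clip(np.asarray(c_star, dtype=float), eps, 1 - eps)
    thetas = 2 * np.arcsin(np.sqrt(c))
    state = np.array([1.0])
    for th in thetas:  # tensor product of single-qubit states
        state = np.kron(state, np.array([np.cos(th / 2), np.sin(th / 2)]))
    return state

psi = warm_start_state([0.0, 1.0, 0.5])
print(round(float(np.sum(psi ** 2)), 6))  # normalised: 1.0
```

Because the state already concentrates probability near the classical solution, shallow circuits suffice to refine it, which matches the observed robustness at p = 2.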
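The RQAOA reduction loop is easy to state classically; only the sampler is quantum. The sketch below is a generic reconstruction, not the authors' code, and performs one variable-elimination step from a batch of measurement samples (the quantum sampler producing them is assumed, not implemented):

```python
import numpy as np

def rqaoa_step(edges, samples):
    """One RQAOA reduction step (classical part only): estimate each
    edge correlation <z_i z_j> from measurement samples, pick the most
    conclusive edge, and impose z_j = sign(corr) * z_i, shrinking the
    problem by one variable.  `samples` holds rows of {-1, +1} values."""
    corrs = {e: np.mean(samples[:, e[0]] * samples[:, e[1]]) for e in edges}
    (i, j), c = max(corrs.items(), key=lambda kv: abs(kv[1]))
    return (i, j, 1 if c >= 0 else -1)

# Toy samples in which qubits 0 and 1 are perfectly anti-correlated:
samples = np.array([[1, -1, 1], [-1, 1, 1], [1, -1, -1], [-1, 1, -1]])
print(rqaoa_step([(0, 1), (1, 2)], samples))  # -> (0, 1, -1)
```

Since each step only needs a correlation estimate, a small sample size (ten measurements per iteration in the paper) can already be informative, which helps explain RQAOA's noise tolerance.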
The paper contributes a reproducible research package (code, data, and scripts) that allows other researchers to repeat the experiments on different hardware models or problem families. By systematically quantifying how QAOA variants react to realistic noise and by linking these observations to non‑functional requirements, the work bridges a gap between quantum algorithm theory and practical software‑engineering concerns in hybrid quantum‑classical HPC systems.
In conclusion, the study demonstrates that (i) deeper QAOA circuits are not universally better in the presence of noise, (ii) warm‑starting techniques provide a practical path to maintaining high solution quality with shallow circuits, and (iii) recursive, classically‑driven variants can offer robustness when hardware error rates are high. These insights inform the design of future quantum‑aware compilers, autotuners, and runtime environments, moving the field closer to realising tangible benefits from NISQ devices in real‑world optimisation workloads.