On LLR Mismatch in Belief Propagation Decoding of Overcomplete QLDPC Codes
Belief propagation (BP) decoding of quantum low-density parity-check (QLDPC) codes is often implemented using overcomplete stabilizer (OS) representations, where redundant parity checks are introduced to improve finite-length performance. Decoder behavior for such representations is governed primarily by finite-iteration dynamics rather than asymptotic code properties, and these dynamics are known to depend critically on the initialization of the decoder. In this paper, we investigate the impact of mismatched log-likelihood ratios (LLRs) used for BP initialization on the performance of QLDPC codes with OS representations. Our results demonstrate that the initial LLR mismatch strongly influences the frame error rate (FER), particularly in the low-noise regime. We also show that the optimal performance is not sharply localized: the FER remains largely insensitive over an extended region of mismatched LLRs. This behavior motivates interpreting the LLR mismatch as a regularization control parameter rather than a quantity that must be precisely matched to the quantum channel.
💡 Research Summary
This paper investigates how mismatched log‑likelihood ratios (LLRs) used to initialise belief‑propagation (BP) decoders affect the finite‑iteration performance of quantum low‑density parity‑check (QLDPC) codes when the codes are represented with overcomplete stabilisers (OS). Overcomplete representations add redundant parity checks, increasing the number of short cycles—especially length‑4 cycles—in the Tanner graph. Consequently, the decoder’s behaviour is governed more by the dynamics of a limited number of iterations than by asymptotic convergence properties, and the assumed channel parameter (the depolarising probability ε) directly determines the magnitude of the initial LLRs injected into the graph.
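The link between redundant checks and length-4 cycles can be made concrete with a standard counting argument: every pair of check rows that shares t ≥ 2 variable nodes contributes C(t, 2) four-cycles to the Tanner graph. The sketch below (illustrative, not from the paper) shows how appending a redundant check can create such cycles where none existed.

```python
import numpy as np

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H.

    Each pair of checks (rows) sharing t >= 2 variable nodes
    contributes C(t, 2) four-cycles.
    """
    H = np.asarray(H, dtype=np.int64)
    overlap = H @ H.T                      # overlap[i, j] = variables shared by checks i, j
    iu = np.triu_indices_from(overlap, k=1)
    t = overlap[iu]
    return int(np.sum(t * (t - 1) // 2))

# Toy example: the base graph is 4-cycle-free, but adding the redundant
# check r0 + r2 (mod 2) overlaps every original row in two positions.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
H_oc = np.vstack([H, (H[0] + H[2]) % 2])   # overcomplete representation
print(count_4cycles(H), count_4cycles(H_oc))   # -> 0 3
```

This is why overcomplete representations trade asymptotic guarantees for richer (but more loop-prone) finite-iteration message passing.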
The authors consider two BP variants: BP4, which operates on the full quaternary GF(4) parity‑check matrix, and BP2, which works on binary projections (separate X‑ and Z‑syndrome graphs). Both decoders are tested on the same overcomplete Generalised Bicycle (GB) code with parameters (n=126, k=28, m=126). Physical errors are generated according to an independent depolarising channel with probability ε, and the syndrome is measured. The decoder is initialised with LLRs computed from an assumed depolarising probability ε₀. “Matched” operation uses ε₀=ε, while “mismatched” operation fixes ε₀=0.10 regardless of the true ε.
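The initial LLRs follow directly from the depolarising model: each qubit suffers I with probability 1 − ε₀ and X, Y, Z each with probability ε₀/3, so the binary projection sees a flip probability of 2ε₀/3. The sketch below computes these standard channel LLRs; the function names are illustrative, and the paper's internal BP4 message parameterisation may differ.

```python
import math

def bp2_llr(eps0):
    """Binary-projection LLR for BP2 under assumed depolarising probability
    eps0: an X-type (or Z-type) flip occurs with probability 2*eps0/3."""
    p = 2.0 * eps0 / 3.0
    return math.log((1.0 - p) / p)

def bp4_llrs(eps0):
    """GF(4) channel LLRs for BP4: lambda_W = ln(P(I)/P(W)), identical for
    W in {X, Y, Z} since the depolarising channel is symmetric."""
    lam = math.log((1.0 - eps0) / (eps0 / 3.0))
    return {"X": lam, "Y": lam, "Z": lam}

# Matched vs. mismatched initialisation, using the paper's setup:
eps_true, eps_mis = 1e-3, 0.10
# The fixed mismatched prior injects much weaker (smaller-magnitude) LLRs
# than the matched one, i.e. the decoder starts less "confident".
print(bp2_llr(eps_true), bp2_llr(eps_mis))
```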
Monte‑Carlo simulations are performed for ε ranging from 10⁻³ to 10⁻¹, with maximum iteration counts ℓₘₐₓ=4 and ℓₘₐₓ=8. The frame error rate (FER) is estimated from 10⁵–10⁶ trials per point. The key empirical findings are:
- For both BP2 and BP4, using a mismatched LLR (ε₀=0.10) dramatically reduces FER in the low‑noise regime (ε≈10⁻³). Gains of up to two orders of magnitude are observed when ℓₘₐₓ=4.
- When the iteration budget is increased to ℓₘₐₓ=8, the benefit of mismatch diminishes but remains noticeable for very low FER values. This indicates that LLR mismatch primarily influences the transient evolution of messages rather than the ultimate decoding capability.
- The optimal ε₀ is not a sharply defined point; a broad interval around ε₀≈0.08–0.12 yields comparable FER improvements. Hence, precise channel matching is unnecessary.
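The Monte-Carlo procedure behind these FER estimates can be outlined as follows. The depolarising sampler and CSS syndrome map below are standard; the decoder itself is omitted (any BP2/BP4 implementation would slot into `decode`), the function names are illustrative rather than taken from the paper, and the exact-match failure criterion is a simplification that ignores degenerate errors.

```python
import numpy as np

def sample_depolarising(n, eps, rng):
    """Each qubit independently suffers I w.p. 1-eps and X, Y, Z each
    w.p. eps/3. Returned in symplectic form: Y sets both x and z bits."""
    pauli = rng.choice(4, size=n, p=[1 - eps, eps / 3, eps / 3, eps / 3])
    x = ((pauli == 1) | (pauli == 2)).astype(np.int64)   # X or Y component
    z = ((pauli == 2) | (pauli == 3)).astype(np.int64)   # Y or Z component
    return x, z

def css_syndrome(Hx, Hz, x, z):
    """For a CSS code, Z-type checks (Hz) detect X errors and vice versa."""
    return (Hz @ x) % 2, (Hx @ z) % 2

def estimate_fer(Hx, Hz, eps, trials, decode, rng):
    """FER skeleton: plug a BP2/BP4 decoder (with its chosen LLRs) into
    `decode`. Simplified criterion: a trial fails unless the estimate
    matches the error exactly; a full check would test the residual
    against the stabiliser group and logical operators."""
    failures = 0
    for _ in range(trials):
        x, z = sample_depolarising(Hx.shape[1], eps, rng)
        sx, sz = css_syndrome(Hx, Hz, x, z)
        x_hat, z_hat = decode(sx, sz)
        if np.any(x_hat ^ x) or np.any(z_hat ^ z):
            failures += 1
    return failures / trials
```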
To quantify the sensitivity of performance to LLR mismatch, the authors introduce an Aggregated Objective (AO) function. For a discrete set G of channel probabilities {ε₁,…,ε_T}, the AO is defined as a weighted geometric mean of the FERs obtained with a given LLR vector L₀:

AO(L₀) = ∏_{t=1}^{T} FER(ε_t; L₀)^{w_t},  with weights w_t ≥ 0 and ∑_{t=1}^{T} w_t = 1.
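A weighted geometric mean of this form takes a few lines to compute; the uniform weighting below is an assumption for illustration, since the summary only states that the AO is a weighted geometric mean.

```python
import math

def aggregated_objective(fers, weights=None):
    """Weighted geometric mean of FERs over a grid of channel probabilities
    (computed in log-domain for numerical stability at small FERs)."""
    if weights is None:
        weights = [1.0 / len(fers)] * len(fers)   # assumed: uniform weights
    assert abs(sum(weights) - 1.0) < 1e-12
    return math.exp(sum(w * math.log(f) for w, f in zip(weights, fers)))

# Example: FERs measured at three grid points; the geometric mean
# rewards an initialisation that is good across the whole grid.
print(aggregated_objective([1e-4, 1e-3, 1e-2]))   # geometric mean, ≈ 1e-3
```

Because the geometric mean multiplies relative (not absolute) errors, a single bad grid point penalises the AO even when the other FERs are tiny, which matches its use as a robustness measure across operating points.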