Decoding square-free Goppa codes over $\mathbb{F}_p$
We propose a new, efficient non-deterministic decoding algorithm for square-free Goppa codes over $\mathbb{F}_p$ for any prime $p$. If the code in question has degree $t$ and the average distance to the closest codeword is at least $(4/p)t + 1$, the proposed decoder can uniquely correct up to $(2/p)t$ errors with high probability. The correction capability is higher if the distribution of error magnitudes is not uniform, approaching or reaching $t$ errors when any particular error value occurs much more often than others or exclusively. This makes the method interesting for (semantically secure) cryptosystems based on the decoding problem for permuted and punctured Goppa codes.
💡 Research Summary
The paper introduces a novel non‑deterministic decoding algorithm for square‑free Goppa codes defined over the prime field Fₚ, where p can be any prime. The authors generalize Patterson’s classic binary decoding method to arbitrary characteristic by defining a φ‑scaled error‑locator polynomial σ₍φ₎(x) = ∏ᵢ (x − Lᵢ)^{eᵢ/φ}, with the exponents eᵢ·φ⁻¹ taken mod p, and deriving the corresponding key equation φ·σ′₍φ₎(x) ≡ σ₍φ₎(x)·sₑ(x) (mod g(x)). Since φ can take any non‑zero value in Fₚ, the decoder tries all p − 1 possibilities and selects the one that yields the smallest degree for σ₍φ₎.
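The role of the φ scan can be illustrated with a toy computation: deg σ₍φ₎ is the sum of the exponents eᵢ·φ⁻¹ mod p, so the degree is smallest when φ matches a dominant error magnitude. A minimal sketch (the field size p and the error magnitudes below are made-up illustration values, not parameters from the paper):

```python
# Toy illustration: deg(sigma_phi) = sum over errors of (e_i * phi^{-1} mod p).
# p and the error magnitudes are arbitrary illustration values.
p = 5
errors = [3, 3, 3, 3]          # all four error magnitudes equal to 3

def sigma_phi_degree(errors, phi, p):
    """Degree of sigma_phi(x) = prod_i (x - L_i)^(e_i * phi^{-1} mod p)."""
    inv = pow(phi, p - 2, p)   # phi^{-1} in F_p (p prime, Fermat's little theorem)
    return sum((e * inv) % p for e in errors)

degrees = {phi: sigma_phi_degree(errors, phi, p) for phi in range(1, p)}
best_phi = min(degrees, key=degrees.get)
print(degrees)    # prints {1: 12, 2: 16, 3: 4, 4: 8}
print(best_phi)   # phi = 3 minimizes the degree, which then equals the error weight
```

When all magnitudes equal φ, every exponent reduces to 1 and deg σ₍φ₎ equals the number of errors, which is the smallest degree any candidate can achieve.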
The core technical contribution lies in casting the problem of finding σ₍φ₎ into a shortest‑vector problem in a polynomial lattice. Using the Mulders‑Storjohann algorithm, the lattice basis is transformed into weak Popov form, allowing the shortest non‑zero vector to be found in O(p³ t²) field operations, where t is the degree of the Goppa polynomial g(x). This vector directly provides the coefficients of σ₍φ₎, and if deg σ₍φ₎ ≤ t the decoder can recover both error locations and magnitudes.
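The weak Popov reduction itself is straightforward to sketch. Below is a hedged toy implementation of the Mulders–Storjohann "simple transformation" loop for a matrix over Fₚ[x] (polynomials as coefficient lists, lowest degree first); the example matrix is illustrative and is not the decoder's actual lattice basis:

```python
# Toy Mulders-Storjohann reduction to weak Popov form over F_p[x].
# Polynomials are coefficient lists, lowest degree first; p is illustrative.
p = 7

def deg(f):                           # degree; -1 for the zero polynomial
    d = len(f) - 1
    while d >= 0 and f[d] % p == 0:
        d -= 1
    return d

def sub_shifted(f, c, k, g):          # f - c * x^k * g  (mod p)
    h = [0] * k + [(c * a) % p for a in g]
    n = max(len(f), len(h))
    f = f + [0] * (n - len(f))
    h = h + [0] * (n - len(h))
    return [(a - b) % p for a, b in zip(f, h)]

def pivot(row):                       # rightmost column attaining the max degree
    best, idx = -1, -1
    for j, f in enumerate(row):
        if deg(f) >= best and deg(f) >= 0:
            best, idx = deg(f), j
    return idx                        # -1 for an all-zero row

def weak_popov(M):
    M = [[list(f) for f in row] for row in M]
    while True:
        seen, clash = {}, None
        for i, row in enumerate(M):
            j = pivot(row)
            if j < 0:
                continue
            if j in seen:
                clash = (seen[j], i, j)
                break
            seen[j] = i
        if clash is None:
            return M                  # all pivots distinct: weak Popov form
        a, b, j = clash
        if deg(M[a][j]) > deg(M[b][j]):       # make row a the lower-degree one
            a, b = b, a
        da, db = deg(M[a][j]), deg(M[b][j])
        c = (M[b][j][db] * pow(M[a][j][da], p - 2, p)) % p
        M[b] = [sub_shifted(M[b][k], c, db - da, M[a][k]) for k in range(len(M[b]))]

# Example: rows (1, x) and (1, x^2) clash in column 1; one transformation fixes it.
W = weak_popov([[[1], [0, 1]], [[1], [0, 0, 1]]])
```

Each transformation cancels the leading coefficient of one clashing pivot entry, so the pivot degrees strictly decrease and the loop terminates; in the reduced basis, the row of smallest degree yields the short vector the decoder needs.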
The authors prove that, regardless of the distribution of error magnitudes, the algorithm can correct up to w = (2/p)·t errors with high probability, provided the average distance to the closest codeword is at least (4/p)·t + 1. In the special case where all error values are equal (a situation common in cryptographic schemes that fix error magnitudes for semantic security), the degree condition is satisfied for w = t, meaning the decoder can uniquely correct the full designed error capacity of the code. For p = 2 the result coincides with Patterson’s ability to correct t errors; for p = 3 the bound becomes (2/3)·t, surpassing the classical t/2 limit of previously known algorithms for odd‑characteristic Goppa codes.
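A quick numeric comparison of the uniform-magnitude bound (2/p)·t against the classical t/2 limit, with purely illustrative (p, t) pairs:

```python
from math import floor

# Guaranteed unique-correction radius (2/p)*t vs. the classical t/2 bound.
# The (p, t) pairs are arbitrary illustration values, not paper benchmarks.
for p, t in [(2, 50), (3, 60), (5, 100)]:
    new_bound = floor(2 * t / p)   # proposed decoder, uniform error magnitudes
    classical = floor(t / 2)       # generic alternant decoding bound
    print(p, t, new_bound, classical)
```

For p = 2 the new bound equals t and for p = 3 it exceeds t/2; for p ≥ 5 the uniform-magnitude guarantee falls below t/2, and the decoder's advantage comes instead from skewed magnitude distributions, where the capability approaches or reaches t.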
Experimental evaluation on randomly generated irreducible binary Goppa codes shows that the decoder’s output is unique with overwhelming probability, suggesting that most codes have an average distance well above the guaranteed lower bound. Additional simulations confirm that when error magnitudes are heavily skewed (e.g., all errors equal), the decoder successfully corrects up to t errors, outperforming the existing decoders for Sugiyama‑Kasahara‑Hirasawa‑Namekawa (SKHN) codes and “wild” Goppa codes.
From a cryptographic perspective, the method is particularly attractive for McEliece‑type public‑key schemes that employ a semantic‑security transformation (e.g., Fujisaki‑Okamoto). By fixing error magnitudes, the private decoder can correct t errors while any generic alternant decoder available to an attacker can only correct roughly t/2 errors, forcing the attacker to guess the remaining error pattern with a work factor on the order of (p − 1)·C(n, ⌈t/2⌉) guesses. This dramatically raises the security margin for Goppa codes in odd characteristic, which have already been shown to possess certain advantages over binary codes.
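Assuming the attacker is left to guess the positions of ⌈t/2⌉ residual errors among n coordinates plus the common magnitude (one of p − 1 values), the guessing effort can be estimated as follows. The counting model and the parameters below are illustrative assumptions, not figures from the paper:

```python
from math import comb, ceil, log2

# Rough guessing work factor under the stated assumption:
# choose ceil(t/2) remaining error positions among n coordinates,
# times p - 1 candidate common magnitudes. Parameters are illustrative.
def guess_work_factor(n, t, p):
    return (p - 1) * comb(n, ceil(t / 2))

wf = guess_work_factor(1024, 50, 3)
print(f"~2^{log2(wf):.1f} guesses")   # combinatorial margin added by fixed magnitudes
```

Because the binomial term grows combinatorially in n and t, even moderate parameters push the guessing effort far beyond the linear-looking cost of the generic decoding step alone.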
The paper concludes by highlighting open problems: (1) reducing the need to exhaustively test all φ values, (2) improving the success probability of the lattice‑based shortest‑vector step, and (3) extending the approach to non‑square‑free Goppa polynomials. Nonetheless, the presented algorithm offers a substantial improvement in error‑correction capability for a broad class of Goppa codes and opens new avenues for post‑quantum cryptographic constructions.