Learning-augmented smooth integer programs with PAC-learnable oracles
This paper investigates learning-augmented algorithms for smooth integer programs, covering canonical problems such as MAX-CUT and MAX-k-SAT. We introduce a framework that incorporates a predictive oracle to construct a linear surrogate of the objective, which is solved via linear programming and then rounded. Crucially, our framework ensures that the solution quality is both consistent and smooth with respect to prediction errors. We demonstrate that this approach extends tractable approximations from the classical dense regime to the near-dense regime. Furthermore, we go beyond assuming the oracle exists by establishing its PAC-learnability: we prove that the induced algorithm class has bounded pseudo-dimension, ensuring that an oracle with near-optimal expected performance can be learned from polynomially many samples.
💡 Research Summary
This paper studies learning‑augmented algorithms for smooth integer programs (d‑IP), a broad class of NP‑hard combinatorial optimization problems that includes canonical tasks such as MAX‑CUT and MAX‑k‑SAT. A smooth integer program is defined as the maximization of an n‑variate degree‑d polynomial p(x) over binary variables, where the polynomial is β‑smooth: every coefficient of a monomial of degree ℓ is bounded by β·n^{d−ℓ}. The authors propose a three‑stage framework that leverages a predictive oracle providing a full‑information guess x̂ ∈ {0,1}ⁿ for the optimal solution.
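To make the β‑smoothness condition concrete, here is a small sketch (the representation and function name are our own illustration, not the paper's code) that checks the coefficient bound for a polynomial given as a dictionary mapping monomials to coefficients:

```python
def is_beta_smooth(coeffs, n, d, beta):
    """Check beta-smoothness: every monomial of degree l must have a
    coefficient bounded in magnitude by beta * n^(d - l).

    coeffs maps a monomial, given as a tuple of variable indices,
    to its coefficient; the empty tuple is the constant term.
    """
    return all(abs(c) <= beta * n ** (d - len(mono))
               for mono, c in coeffs.items())

# MAX-CUT on a triangle: cut(x) = sum over edges (i,j) of x_i + x_j - 2*x_i*x_j.
# Collecting terms: each vertex lies on 2 edges, so each linear coefficient
# is 2, and each edge contributes a pairwise coefficient of -2.
triangle_cut = {(0,): 2, (1,): 2, (2,): 2,
                (0, 1): -2, (0, 2): -2, (1, 2): -2}
print(is_beta_smooth(triangle_cut, n=3, d=2, beta=2))  # True
```

With β = 2, the degree‑2 bound β·n⁰ = 2 is met exactly by the pairwise terms, which is why MAX‑CUT fits this class.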
Stage 1 – Oracle prediction. The oracle, trained from data, outputs a binary vector x̂. The quality of this prediction is measured by the ℓ₁ distance ε = ‖x̂ − x*‖₁, where x* is an optimal solution.
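Since both vectors are binary, the ℓ₁ error is simply the number of mispredicted coordinates; a one‑line sketch (our own illustration, not from the paper):

```python
import numpy as np

def prediction_error(x_hat, x_star):
    """l1 distance between the predicted and optimal binary vectors;
    for 0/1 vectors this equals the Hamming distance."""
    return int(np.abs(np.asarray(x_hat) - np.asarray(x_star)).sum())

print(prediction_error([1, 0, 1, 1], [1, 1, 1, 0]))  # 2
```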
Stage 2 – Linearization and LP relaxation. Using β‑smoothness, the polynomial can be recursively decomposed as
p(x) = c + Σ_{i=1}^{n} x_i·p_i(x)
where each p_i is a lower‑degree smooth polynomial. The algorithm linearizes p around x̂ by fixing the coefficients ρ_i = p_i(x̂). To guarantee feasibility of the true optimum in the relaxed program, a tolerance δ is set to dominate the deviation between p_i(x̂) and p_i(x*). Lemma 2.7 shows that for quadratic objectives |p_i(x̂) − p_i(x*)| ≤ β·√(n·ε). Consequently, setting δ = β·√(n·ε) ensures that the optimal integer solution remains feasible for the linear program (2‑LP). Solving this LP yields a fractional solution y.
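A sketch of what Stage 2 might look like for a homogeneous quadratic p(x) = xᵀAx with A upper triangular, so that p(x) = Σ_i x_i·p_i(x) with p_i(x) = (Ax)_i. The scipy-based formulation and all names here are our own illustration under these assumptions, not the paper's code:

```python
import numpy as np
from scipy.optimize import linprog

def surrogate_lp(A, x_hat, delta):
    """Linearize p(x) = x^T A x (A upper triangular) around the
    prediction x_hat and solve the resulting LP.

    rho_i = p_i(x_hat) freezes the inner polynomials; the constraints
    |p_i(y) - rho_i| <= delta keep the true optimum feasible whenever
    delta dominates the prediction-induced deviation (cf. Lemma 2.7).
    """
    rho = A @ x_hat                    # rho_i = p_i(x_hat) = (A x_hat)_i
    n = len(rho)
    # maximize sum_i rho_i * y_i  <=>  minimize -rho^T y
    A_ub = np.vstack([A, -A])          # A y <= rho + delta,  -A y <= delta - rho
    b_ub = np.concatenate([rho + delta, delta - rho])
    res = linprog(-rho, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    return res.x                       # fractional solution y

A = np.array([[0.0, 3.0, -1.0],
              [0.0, 0.0,  2.0],
              [0.0, 0.0,  0.0]])
y = surrogate_lp(A, np.array([1.0, 0.0, 1.0]), delta=1.0)
```

Note that y = x̂ itself satisfies every constraint (the deviations are zero), so the LP is always feasible and its value is at least the surrogate value at the prediction.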
Stage 3 – Rounding. The fractional solution y is converted to an integral binary vector z via independent randomized rounding (each coordinate set to 1 with probability y_i). Theorem 2.9 proves that the rounding error is bounded by Õ(n^{3/2}) for quadratic objectives, independent of ε. For multilinear objectives a deterministic greedy rounding eliminates this error entirely (Theorem 2.15).
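The randomized rounding itself is straightforward; a minimal sketch (our own illustration):

```python
import numpy as np

def randomized_round(y, rng=None):
    """Round a fractional vector y in [0,1]^n to a binary vector z by
    setting z_i = 1 independently with probability y_i."""
    rng = rng or np.random.default_rng()
    return (rng.random(len(y)) < np.asarray(y)).astype(int)

z = randomized_round([0.9, 0.1, 0.5], np.random.default_rng(0))
```

In expectation a linear objective is preserved exactly; concentration over the quadratic cross terms is what yields the Õ(n^{3/2}) bound of Theorem 2.9.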
Performance guarantees. Combining the LP relaxation gap (which contributes O(β·n^{3/2}·√ε)) with the rounding error yields, with high probability,
p(z) ≥ p(x*) − O(β·n^{3/2}·√ε) − Õ(n^{3/2}).
When the instance is “near‑dense”, i.e., the optimal value scales as Ω(n^{d‑1/2+ξ}) for some ξ∈(0,½], the additive error translates into a multiplicative approximation ratio of
1 − Õ(√ε / n^ξ).
Thus the algorithm is consistent (it converges to optimality as ε→0) and smooth (its degradation is proportional to √ε). In the multilinear case the rounding term disappears, giving an even tighter bound.
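A hedged numeric illustration of this additive-to-multiplicative translation for d = 2 and β = 1, with all hidden constants suppressed (our own arithmetic, not the paper's):

```python
def approx_ratio(n, xi, eps):
    """1 - additive_error / OPT in the near-dense regime (d = 2, beta = 1):
    OPT = n^(3/2 + xi) and the additive error is n^(3/2) * sqrt(eps),
    so the ratio collapses to 1 - sqrt(eps) / n^xi."""
    opt = n ** (1.5 + xi)
    additive = n ** 1.5 * eps ** 0.5
    return 1.0 - additive / opt

# As eps -> 0 the ratio tends to 1 (consistency); degradation scales as sqrt(eps).
for eps in (1.0, 0.01, 0.0001):
    print(f"eps={eps}: ratio ~ {approx_ratio(10_000, 0.25, eps):.4f}")
```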
Learnability of the oracle. The paper does not assume the oracle exists a priori. Instead, it studies the PAC‑learnability of the oracle within a data‑driven algorithm‑selection framework: the class of algorithms induced by the oracle's parameters is shown to have bounded pseudo‑dimension, so an oracle with near‑optimal expected performance can be learned from polynomially many samples.
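For context, the standard uniform-convergence guarantee behind such pseudo-dimension arguments (a textbook bound, not quoted from the paper) states that if the performance functions take values in [0, H] and the class has pseudo-dimension d_P, then

```latex
m \;=\; O\!\left( \left(\tfrac{H}{\varepsilon}\right)^{2} \left( d_{P} \log \tfrac{H}{\varepsilon} + \log \tfrac{1}{\delta} \right) \right)
```

samples suffice for every empirical average to be within ε of its expectation with probability 1 − δ, which is what makes empirical risk minimization over candidate oracles near-optimal.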