Mind the Gap. Doubling Constant Parametrization of Weighted Problems: TSP, Max-Cut, and More
Despite much research, hard weighted problems still resist super-polynomial improvements over their textbook solution. On the other hand, the unweighted versions of these problems have recently witnessed the sought-after speedups. Currently, the only way to repurpose the algorithm of the unweighted version for the weighted version is to employ a polynomial embedding of the input weights. This, however, introduces a pseudo-polynomial factor into the running time, which becomes impractical for arbitrarily weighted instances. In this paper, we introduce a new way to repurpose the algorithm of the unweighted problem. Specifically, we show that the time complexity of several well-known NP-hard problems operating over the $(\min, +)$ and $(\max, +)$ semirings, such as TSP, Weighted Max-Cut, and Edge-Weighted $k$-Clique, is proportional to that of their unweighted versions when the set of input weights has small doubling. We achieve this by a meta-algorithm that converts the input weights into polynomially bounded integers using the recent constructive Freiman’s theorem by Randolph and Węgrzycki [ESA 2024] before applying the polynomial embedding.
💡 Research Summary
The paper tackles a long‑standing gap between the running times of weighted NP‑hard problems and their unweighted counterparts. While many unweighted problems (e.g., Hamiltonian Cycle, Max‑Cut, k‑Clique) have seen exponential‑time speed‑ups beyond the textbook O*(2ⁿ) bound, their weighted versions typically rely on the “polynomial embedding” technique: encode each weight w as a monomial x^w, run an algebraic algorithm in the (+,·) ring, and then extract the optimal solution. This approach inevitably introduces a pseudo‑polynomial factor O*(W), where W is the largest weight, making the algorithm impractical when W is large.
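The polynomial-embedding technique described above can be made concrete with a small sketch (mine, not the paper's code): each weight w becomes the monomial x^w, i.e. index w of a coefficient vector, and combining two partial solutions additively becomes polynomial multiplication. The vectors have length proportional to the largest weight W, which is exactly the pseudo-polynomial factor the paper seeks to avoid.

```python
def embed(weights, W):
    """Encode a set of weights as a 0/1 coefficient vector of length W + 1."""
    poly = [0] * (W + 1)
    for w in weights:
        poly[w] = 1
    return poly

def combine(p, q):
    """Schoolbook polynomial product over the Boolean semiring: a nonzero
    coefficient at degree d means some weight encoded in p plus some weight
    encoded in q sums to d."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b:
                    r[i + j] = 1
    return r

# Two edge-weight sets {2, 5} and {1, 7}: the achievable two-edge totals.
p = embed({2, 5}, 7)
q = embed({1, 7}, 7)
totals = [d for d, c in enumerate(combine(p, q)) if c]
print(totals)  # [3, 6, 9, 12]
```

Note that the vector length, and hence the running time, scales with W rather than with the number of distinct weights, which is what makes the approach impractical for arbitrarily weighted instances.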
The authors propose a different parameterization based on the doubling constant of the set of input weights A. The doubling constant C(A)=|A+A|/|A| measures how much the sumset of A expands; sets with small C are called “small‑doubling” sets. Recent work by Randolph and Węgrzycki (ESA 2024) gave a constructive version of Freiman’s theorem, which guarantees that any small‑doubling set A is contained in a generalized arithmetic progression (GAP) G of bounded dimension d (independent of |A|) and that G can be found efficiently.
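The doubling constant is easy to compute directly from its definition; the following illustration (my own, not from the paper) contrasts an arithmetic progression, whose doubling is close to the minimum of 2, with a generic random set, whose sumset expands nearly quadratically.

```python
import random

def doubling_constant(A):
    """C(A) = |A + A| / |A| for a finite set of integers A."""
    A = set(A)
    sumset = {a + b for a in A for b in A}
    return len(sumset) / len(A)

# An arithmetic progression of n elements has |A+A| = 2n - 1:
ap = [5 * i for i in range(1, 11)]        # {5, 10, ..., 50}
print(doubling_constant(ap))              # 1.9

# A "generic" set of large random weights expands much more:
random.seed(0)
generic = random.sample(range(10**9), 10)
print(doubling_constant(generic))
```

Small-doubling weight sets are thus the structured regime in which Freiman's theorem guarantees a low-dimensional GAP containing A, and the paper's parametrization pays off.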
Leveraging this, the paper introduces a meta‑algorithm that works for any problem P_w satisfying a natural property φ: the objective value is an additive combination of the input weights, and there exists an algebraic algorithm A that computes a solution polynomial (the generating function of all feasible solution values). The meta‑algorithm proceeds as follows:
- Structure Extraction – Apply the constructive Freiman algorithm to embed A into a GAP G = {ℓ₁x₁+…+ℓ_dx_d | 0≤ℓ_i≤L_i}. Each original weight w is represented by a coefficient tuple (ℓ₁,…,ℓ_d).
- Order‑Preserving Monomorphism – Design a monotone injection f from the coefficient‑tuple space into a contiguous integer interval.