Fast Order Statistics with Group Inequality Testing
Suppose that a group test operation is available for checking order relations in a set. Can this speed up problems such as finding the minimum or maximum element, determining the rank of an element, and computing order statistics? We assume a one-sided group inequality test is available, where queries are of the form $u \le_Q V$ or $V \le_Q u$, and the answer is `yes' if and only if there is some $v \in V$ such that $u \le v$ or $v \le u$, respectively. We restrict attention to total orders and focus on query complexity. For minimum or maximum finding, we give a Las Vegas algorithm that makes $\mathcal{O}(\log^2 n)$ expected queries. We observe that rank determination can be solved with existing ``defect-counting'' algorithms, but we also give a simple Monte Carlo approximation algorithm with expected query complexity $\tilde{\mathcal{O}}(\frac{1}{\delta^2} \log \frac{1}{\varepsilon})$, where $1-\varepsilon$ is the probability that the algorithm succeeds and we allow a relative error of $1 \pm \delta$ for $\delta > 0$ in the estimated rank. We then give a Monte Carlo algorithm for approximate selection with expected query complexity $\tilde{\mathcal{O}}(\frac{1}{\delta^4}\log \frac{1}{\varepsilon\delta^2})$; it outputs an element $x$ with probability at least $\frac{1}{2}$, and if so, $x$ has the desired approximate rank with probability $1-\varepsilon$.

Keywords: Order statistics, Group inequality testing, Randomized algorithms
💡 Research Summary
The paper investigates how a one‑sided group inequality test—queries of the form u ≤₍Q₎ V or V ≤₍Q₎ u that answer “yes” iff there exists a v ∈ V with u ≤ v (or v ≤ u)—can accelerate classic order‑statistics problems. The authors focus on total orders and measure performance solely by the number of group‑test queries.
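To make the query model concrete, here is a minimal Python sketch of such an oracle; the class and method names (`GroupTestOracle`, `leq`, `geq`) are our own, not the paper's, and the oracle simply counts queries since the paper measures cost purely in group tests.

```python
class GroupTestOracle:
    """One-sided group inequality tests over a total order.

    leq(u, V) answers u <=_Q V: True iff some v in V satisfies u <= v.
    geq(u, V) answers V <=_Q u: True iff some v in V satisfies v <= u.
    Each call counts as one query, matching the paper's cost measure.
    """

    def __init__(self):
        self.queries = 0

    def leq(self, u, V):
        self.queries += 1
        return any(u <= v for v in V)

    def geq(self, u, V):
        self.queries += 1
        return any(v <= u for v in V)
```

An algorithm in this model may only interact with the set through these two calls; individual pairwise comparisons are not assumed to be free.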
Minimum/Maximum Finding.
A Las Vegas algorithm (MinFind) repeatedly refines a candidate element x. Starting from a random element, it checks whether any element smaller than x exists using the test S \ {x} ≤₍Q₎ x. If so, a recursive procedure Swap randomly partitions the current set into two almost equal halves and recurses on the half that must contain a smaller element. Lemma 6 shows that the rank of the new candidate is uniformly distributed among the ranks ≤ rank(x), which yields an expected halving of the rank each iteration. Consequently the expected number of iterations is O(log n). Each Swap call performs at most ⌈log₂ (n − 1)⌉ group tests, so the total expected query complexity is O(log² n). By reversing the order relation, the same algorithm finds the maximum.
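The refinement loop can be sketched in self-contained Python as follows. The helper names (`group_le`, `swap`, `min_find`) and the exact halving details are our own rendition of the MinFind/Swap scheme described above, under the assumption of distinct comparable elements; the paper's procedures may differ in particulars.

```python
import random


def group_le(V, u, counter):
    """One-sided group test V <=_Q u: True iff some v in V has v <= u.
    counter[0] tracks the number of group tests performed."""
    counter[0] += 1
    return any(v <= u for v in V)


def swap(S, x, counter):
    """Given that S contains an element below x, isolate one such element
    by random halving, using about log2 |S| group tests."""
    while len(S) > 1:
        random.shuffle(S)
        half = S[:len(S) // 2]
        if group_le(half, x, counter):   # smaller element in this half?
            S = half
        else:                            # otherwise it must be in the rest
            S = S[len(S) // 2:]
    return S[0]


def min_find(S):
    """Las Vegas minimum finding: refine the candidate x until no element
    below it remains. Returns (minimum, number of group tests used)."""
    counter = [0]
    x = random.choice(S)
    while True:
        rest = [y for y in S if y != x]
        if not rest or not group_le(rest, x, counter):
            return x, counter[0]         # nothing below x: x is the minimum
        x = swap(rest, x, counter)       # replace x by some smaller element
```

Since elements are distinct, `group_le(rest, x, counter)` being true means some element strictly below `x` exists, so the candidate's rank strictly decreases each iteration; the expected-halving argument from Lemma 6 is what brings the iteration count down to O(log n).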
Rank Determination.
The rank of an element x can be viewed as the number of “defective” items in a standard group‑testing setting, where the defect set is {y | y ≤ x}. Existing adaptive defect‑counting algorithms (e.g., the one from Bshouty & Minsky 2013) give a (1 ± δ) multiplicative approximation using 2·log log n + O(1/δ²·log 1/ε) queries with failure probability ε. The authors also present a simpler binary‑search based method. They introduce TestLE, a randomized routine that decides whether rank(x) ≤ r by sampling N = n/r elements with replacement and checking the group test S′ ≤₍Q₎ x repeatedly. Using Hoeffding’s inequality, O(1/δ²·log 1/ε) repetitions suffice to distinguish the cases rank(x) ≤ r − δr and rank(x) ≥ r + δr with error ≤ ε. Embedding TestLE in a binary search over the interval of candidate ranks then yields a (1 ± δ) multiplicative estimate of rank(x), for an overall expected query complexity of Õ(1/δ²·log 1/ε), as stated in the abstract.