Beyond Variance: Prompt-Efficient RLVR via Rare-Event Amplification and Bidirectional Pairing

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Reinforcement learning with verifiable rewards (RLVR) is effective for training large language models on deterministic outcome reasoning tasks. Prior work shows that RLVR works with few prompts, but prompt selection is often based only on training-accuracy variance, leading to unstable optimization directions and weaker transfer. We revisit prompt selection from a mechanism-level view and argue that an effective minibatch should provide both (i) a reliable positive anchor and (ii) explicit negative learning signals from rare failures. Based on this principle, we propose \emph{positive–negative pairing}: at each update, we sample a hard-but-solvable prompt $q^{+}$ and an easy-but-brittle prompt $q^{-}$ (high but imperfect success rate), characterized by low and high empirical success rates under multiple rollouts, respectively. We further introduce Weighted GRPO, which reweights binary outcomes at the pair level and uses group-normalized advantages to amplify rare successes on $q^{+}$ into sharp positive guidance while turning rare failures on $q^{-}$ into strong negative penalties. This bidirectional signal provides informative learning feedback for both successes and failures, improving sample efficiency without suppressing exploration. On Qwen2.5-Math-7B, a single paired minibatch per update consistently outperforms a GRPO baseline that selects two prompts via commonly used variance-based selection heuristics: AIME~2025 Pass@8 improves from 16.8 to 22.2, and AMC23 Pass@64 from 94.0 to 97.0, while remaining competitive with large-scale RLVR trained on a pool of 1209 training prompts. Similar gains are observed on Qwen2.5-Math-7B-Instruct.


💡 Research Summary

This paper tackles a largely overlooked problem in Reinforcement Learning with Verifiable Rewards (RLVR): how to select training prompts when only a handful can be afforded. Prior work either uses large prompt pools (hundreds to thousands of prompts) or relies on the variance of training accuracy as a heuristic for prompt selection. Both approaches suffer from high sampling noise in low‑data regimes, leading to unstable gradient directions and weaker transfer performance.

The authors propose a mechanism‑level view that each update should contain two complementary learning signals: (i) a “hard‑but‑solvable” prompt (q⁺) that yields rare successes, providing a strong positive advantage, and (ii) an “easy‑but‑brittle” prompt (q⁻) that yields rare failures, providing a strong negative advantage. By pairing these two prompts, each minibatch supplies both a reliable positive anchor and an explicit “do‑not” warning, amplifying the informational content of tail events.

To realize this idea, the authors introduce Positive‑Negative Pairing and Weighted GRPO (WGRPO). Prompt pairing is performed by first estimating success rates over a candidate pool (e.g., AIME 2025 for hard prompts, DeepScaleR‑sub for easy prompts) using multiple rollouts of the current policy. The hard prompt is chosen so that its empirical success probability satisfies p ≈ 1/G (with G = 8), and the easy prompt so that p ≈ 1 − 1/G. Prompts with p ≈ 0 or p ≈ 1 are discarded to avoid degenerate groups.
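The selection step described above can be sketched as a simple lookup over estimated success rates. This is a minimal illustration, not the paper's implementation: the function name, the `tol` cutoff for discarding near-degenerate prompts, and the dictionary-based candidate pool are all assumptions.

```python
def select_pair(success_rates, G=8, tol=0.05):
    """Pick a hard-but-solvable prompt q+ (p near 1/G) and an
    easy-but-brittle prompt q- (p near 1 - 1/G).

    success_rates: dict mapping prompt id -> empirical success rate p,
    estimated from multiple rollouts of the current policy.
    Prompts with p near 0 or 1 are discarded (degenerate groups).
    """
    usable = {q: p for q, p in success_rates.items() if tol < p < 1 - tol}
    target_hard, target_easy = 1 / G, 1 - 1 / G
    # Choose the candidates whose empirical rates are closest to the targets.
    q_pos = min(usable, key=lambda q: abs(usable[q] - target_hard))
    q_neg = min(usable, key=lambda q: abs(usable[q] - target_easy))
    return q_pos, q_neg
```

For example, with G = 8 the hard target is p ≈ 0.125 and the easy target is p ≈ 0.875, while prompts the policy always or never solves are filtered out before matching.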

WGRPO extends the existing GRPO algorithm in two ways. First, binary outcomes are re‑weighted: a correct trajectory receives +1, an incorrect one receives −λ_neg (λ_neg > 0). Second, for each prompt, G samples are collected, and the group mean μ and standard deviation σ of the weighted outcomes are computed. Normalized advantages are then Aᵢ = (yᵢ − μ)/(σ + ε_std), where yᵢ is the weighted outcome. Analytically, the advantage for correct samples becomes A⁺ ≈ (1 − p)/√(p(1 − p)) = √((1 − p)/p), which for a hard prompt with p ≈ 1/G amplifies a rare success into a sharp positive signal of magnitude ≈ √(G − 1). Symmetrically, a rare failure on an easy prompt with p ≈ 1 − 1/G receives A⁻ ≈ −√(p/(1 − p)) ≈ −√(G − 1), a strong negative penalty.
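The re-weighting and group normalization above can be sketched in a few lines. This is a toy illustration under the summary's definitions, not the authors' code; the function name, the default λ_neg = 1, and ε_std = 1e-4 are assumptions.

```python
import math

def wgrpo_advantages(outcomes, lam_neg=1.0, eps_std=1e-4):
    """Group-normalized advantages over weighted binary outcomes.

    outcomes: list of G verifier results (1 = correct, 0 = incorrect).
    Correct trajectories are weighted +1, incorrect ones -lam_neg,
    then A_i = (y_i - mu) / (sigma + eps_std) within the group.
    """
    y = [1.0 if r == 1 else -lam_neg for r in outcomes]
    G = len(y)
    mu = sum(y) / G
    sigma = math.sqrt(sum((yi - mu) ** 2 for yi in y) / G)
    return [(yi - mu) / (sigma + eps_std) for yi in y]
```

With one success in a group of G = 8 (p = 1/8), the lone correct sample gets an advantage near √7 ≈ 2.65, matching the √((1 − p)/p) analysis, while the failures receive negative advantages.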

