Entropy-Gated Selective Policy Optimization: Token-Level Gradient Allocation for Hybrid Training of Large Language Models
Hybrid training methods for large language models combine supervised fine-tuning (SFT) on expert demonstrations with reinforcement learning (RL) on model rollouts, typically at the sample level. We propose Entropy-Gated Selective Policy Optimization (EGSPO), a three-stage framework that extends sample-level mixing with token-level gradient modulation. Stage 1, SFT expert learning, establishes a reliable warm-up policy using expert demonstrations with a pure SFT loss. Stage 2, RL rollout generation, samples trajectories from the current policy and computes per-token predictive entropy. Stage 3, the EGSPO mechanism, applies entropy-gated gradient allocation: a predictive-entropy module routes high-entropy tokens to full PPO updates to encourage exploration, and low-entropy tokens to attenuated PPO updates to reduce variance and preserve knowledge. Critically, both branches incorporate the advantage function A_t, ensuring that incorrect trajectories receive consistent negative learning signals and preventing reinforcement of confident errors. EGSPO achieves consistent improvements on mathematical reasoning benchmarks, with gains of 3.8 percent on AIME and 2.9 percent on MATH over the CHORD‑ϕ baseline, while incurring only 3.4 percent additional computational overhead.
💡 Research Summary
The paper introduces Entropy‑Gated Selective Policy Optimization (EGSPO), a three‑stage framework that refines hybrid training of large language models (LLMs) by modulating gradients at the token level based on predictive entropy. Traditional hybrid methods such as CHORD‑ϕ mix supervised fine‑tuning (SFT) and reinforcement learning (RL) only at the sample granularity, treating every token in a rollout identically. Recent observations (e.g., WEST) show that a small fraction of high‑entropy tokens carries most of the learning signal in pure RL, suggesting that token‑wise differentiation could be beneficial. However, naïvely applying token‑level loss mixing raises two challenges: (1) trajectory mismatch when SFT loss is computed on model‑generated contexts, and (2) the risk of reinforcing confident but incorrect predictions if low‑entropy tokens receive attenuated updates without preserving the advantage signal.
EGSPO addresses these issues with the following pipeline:
- Stage 1 – SFT Expert Learning: The model is first warm‑started on expert demonstrations (≈20 % of the data) using a pure SFT loss. This stabilizes the policy before any RL updates, mitigating the instability commonly observed when RL is started from scratch.
- Stage 2 – RL Rollout Generation: The current policy generates multiple rollouts per prompt. For each token \(y_t\) in a rollout, the predictive entropy \(H(y_t) = -\sum_{v\in V} p_\theta(v\mid x, y_{<t})\log p_\theta(v\mid x, y_{<t})\) is computed. Entropy quantifies the model's uncertainty about the next token.
- Stage 3 – Entropy‑Gated Gradient Allocation: Tokens are split into high‑entropy and low‑entropy groups using a per‑sequence adaptive threshold (top‑ρ % tokens, ρ = 10 % in experiments).
  - High‑entropy tokens receive the standard clipped PPO loss, \(\mathcal{L}_{\text{high}}(\theta) = -\min\big(r_t(\theta)A_t,\ \operatorname{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)A_t\big)\), where \(r_t(\theta)\) is the importance ratio between the current and behavior policies.
  - Low‑entropy tokens receive an attenuated PPO loss that down‑scales the same clipped objective while retaining the advantage \(A_t\), so confident but incorrect tokens still receive negative updates rather than being silently reinforced.
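The gating pipeline above (entropy computation, top‑ρ thresholding, and the two PPO branches) can be sketched as follows. This is a hypothetical implementation, not the authors' code: the attenuation factor `lam` and the exact clipped-surrogate form are assumptions, since the text specifies only "attenuated PPO updates" that preserve A_t.

```python
import torch
import torch.nn.functional as F

def egspo_token_loss(logits, actions, old_logp, advantages,
                     rho=0.10, eps_clip=0.2, lam=0.1):
    """Entropy-gated token loss for one rollout (sketch).

    logits:     (T, V) current-policy logits
    actions:    (T,)   sampled token ids
    old_logp:   (T,)   log-probs under the behavior policy
    advantages: (T,)   per-token advantage estimates A_t
    lam:        assumed attenuation factor for low-entropy tokens
    """
    logp_all = F.log_softmax(logits, dim=-1)            # (T, V)

    # Stage 2: predictive entropy H(y_t) = -sum_v p log p
    entropy = -(logp_all.exp() * logp_all).sum(dim=-1)  # (T,)

    # Per-sequence adaptive threshold: top-rho fraction by entropy
    k = max(1, int(rho * entropy.numel()))
    thresh = torch.topk(entropy, k).values.min()
    high = entropy >= thresh                            # gate mask

    # Clipped PPO surrogate; the advantage appears in BOTH branches
    logp = logp_all.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    ratio = (logp - old_logp).exp()
    surr = torch.min(ratio * advantages,
                     ratio.clamp(1 - eps_clip, 1 + eps_clip) * advantages)

    # Stage 3: full update on high-entropy tokens, attenuated elsewhere
    weights = torch.where(high,
                          torch.ones_like(surr),
                          lam * torch.ones_like(surr))
    return -(weights * surr).mean()
```

Because both branches multiply the same signed surrogate, a negative advantage on a confidently wrong token still produces a (smaller) corrective gradient instead of none at all.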