Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models

Large language models (LLMs) achieve state-of-the-art accuracy on complex reasoning tasks by generating multiple chain-of-thought (CoT) traces, but a fixed token budget per query leads to over-computation on easy inputs and under-computation on hard ones. We introduce Predictive Scheduling, a plug-and-play framework that, before any full generation, runs a lightweight predictor (an MLP over intermediate transformer hidden states, or a LoRA-fine-tuned classifier over the raw question text) to estimate each query's optimal reasoning length or difficulty. A greedy batch allocator then dynamically distributes a fixed total token budget across queries to maximize expected accuracy. On the GSM8K arithmetic benchmark, predictive scheduling yields up to 7.9 percentage points of absolute accuracy gain over uniform budgeting at identical token cost, closing over 50% of the gap to an oracle with perfect foresight. A systematic layer-wise study reveals that the transformer's middle layers (12-17) carry the richest signals for reasoning-length estimation. These results demonstrate that pre-run budget prediction enables fine-grained control of the compute-accuracy trade-off, offering a concrete path toward latency-sensitive, cost-efficient LLM deployments.
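To make the allocation step concrete, below is a minimal Python sketch of a greedy budget allocator of the kind the abstract describes. It assumes each query comes with a predicted accuracy-vs-budget curve; the chunk size, curve shapes, and all function names here are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a greedy batch allocator, assuming each query has a
# predicted accuracy-vs-budget curve (the hypothetical `acc_curves` below).
import heapq
from typing import Callable, List

def greedy_allocate(
    acc_curves: List[Callable[[int], float]],  # predicted P(correct | budget) per query
    total_budget: int,                         # total token budget for the batch
    chunk: int = 64,                           # allocation granularity in tokens
) -> List[int]:
    """Distribute `total_budget` tokens to maximize the sum of predicted accuracies."""
    budgets = [0] * len(acc_curves)
    # Max-heap keyed by the marginal accuracy gain of giving a query one more chunk.
    heap = [(-(c(chunk) - c(0)), i) for i, c in enumerate(acc_curves)]
    heapq.heapify(heap)
    remaining = total_budget
    while remaining >= chunk and heap:
        neg_gain, i = heapq.heappop(heap)
        if -neg_gain <= 0:  # no query benefits from more tokens
            break
        budgets[i] += chunk
        remaining -= chunk
        c = acc_curves[i]
        next_gain = c(budgets[i] + chunk) - c(budgets[i])
        heapq.heappush(heap, (-next_gain, i))
    return budgets

# Toy example: three queries with saturating curves of increasing difficulty k.
if __name__ == "__main__":
    curves = [lambda b, k=k: 1.0 - 0.5 ** (1 + b / (256 * k)) for k in (1, 2, 4)]
    print(greedy_allocate(curves, total_budget=1024))
```

For concave (diminishing-returns) accuracy curves like these, allocating chunk by chunk to the query with the highest marginal gain maximizes the predicted total accuracy under the budget.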


💡 Research Summary

The paper tackles the inefficiency of using a fixed token budget for every query when deploying large language models (LLMs) for chain-of-thought (CoT) reasoning: a uniform budget wastes compute on easy questions and starves hard ones, driving up both latency and cost. The authors propose “Predictive Scheduling,” a plug-and-play framework that first runs lightweight predictors to estimate, before any generation, how many tokens a query will need to reach a desired correctness probability, or how difficult the query is. Two predictor families are explored: (1) a multilayer perceptron (MLP) that consumes hidden-state vectors extracted from each transformer layer, and (2) a LoRA-fine-tuned classifier that operates directly on the raw question text. A greedy allocator then uses these predictions to distribute a fixed total token budget across the batch.
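As a rough illustration of the first predictor family, here is a hedged sketch of an MLP probe over pooled hidden states, assuming a standard HuggingFace-style causal LM. The probe width, pooling choice, and layer index are assumptions, though the paper's layer-wise study points to middle layers (12-17) as the most informative.

```python
# A minimal sketch of the hidden-state probe; architecture details are assumed.
import torch
import torch.nn as nn

class BudgetProbe(nn.Module):
    """MLP mapping a pooled transformer hidden state to a predicted token budget."""
    def __init__(self, hidden_dim: int, probe_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, probe_dim),
            nn.ReLU(),
            nn.Linear(probe_dim, 1),  # regress a budget (or difficulty score)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) from one transformer layer;
        # mean-pool over the prompt tokens before the MLP.
        return self.net(hidden.mean(dim=1)).squeeze(-1)

# Usage sketch: extract one middle layer's states for the prompt alone,
# before any generation. `model`/`tokenizer` are an assumed HF causal LM pair.
# outputs = model(**tokenizer(question, return_tensors="pt"),
#                 output_hidden_states=True)
# h = outputs.hidden_states[14]  # a middle layer, per the paper's findings
# predicted_budget = BudgetProbe(h.shape[-1])(h)
```

Because the probe reads states the model produces anyway while encoding the prompt, the prediction adds negligible overhead relative to full CoT generation.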

