Calibrated Multi-Level Quantile Forecasting


We develop an online method that guarantees calibration of quantile forecasts at multiple quantile levels simultaneously. In this work, a sequence of quantile forecasts is said to be calibrated provided that its $α$-level predictions are greater than or equal to the target value at an $α$ fraction of time steps, for each level $α$. Our procedure, called the multi-level quantile tracker (MultiQT), is lightweight and wraps around any point or quantile forecaster to produce adjusted quantile forecasts that are guaranteed to be calibrated, even against adversarial distribution shifts. Critically, it does so while ensuring that the quantiles remain ordered, e.g., the 0.5-level quantile forecast will never be larger than the 0.6-level forecast. Moreover, the method has a no-regret guarantee, implying it will not degrade the performance of the existing forecaster (asymptotically), with respect to the quantile loss. In our experiments, we find that MultiQT significantly improves the calibration of real forecasters in epidemic and energy forecasting problems, while leaving the quantile loss largely unchanged or slightly improved.


💡 Research Summary

The paper addresses a fundamental challenge in probabilistic forecasting: delivering multiple quantile predictions that are both calibrated and internally consistent (i.e., ordered) in an online setting where the data distribution may shift arbitrarily. Calibration means that for each quantile level α the long‑run proportion of outcomes falling below the forecast equals α. Consistency requires that at every time step the vector of quantile forecasts be monotone non‑decreasing, so that they represent a valid cumulative distribution function. Existing online methods, such as the Quantile Tracker (QT) from Angelopoulos et al. (2023), guarantee calibration for a single quantile but produce severe crossing when applied independently to several levels. Simple post‑processing tricks like sorting or isotonic regression also break calibration in general.

To solve this, the authors propose the Multi‑level Quantile Tracker (MultiQT), a lightweight wrapper that can be placed around any base forecaster (point or quantile). The algorithm maintains an offset vector θₜ that is added to the base forecasts bₜ to obtain adjusted forecasts qₜ = bₜ + θₜ. For each quantile level α, the offset is updated exactly as in QT:
θ_{α,t+1} = θ_{α,t} − η (cov_{α,t} − α),
where cov_{α,t}=1{yₜ ≤ q_{α,t}} is a binary coverage indicator and η is a learning rate. After performing these independent updates, the algorithm projects the resulting vector qₜ onto the isotonic cone K = { x ∈ ℝᵐ | x₁ ≤ x₂ ≤ … ≤ xₘ } using isotonic regression. This projection enforces monotonicity while altering the forecasts as little as possible in Euclidean distance.
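The update-then-project step described above can be sketched in a few lines. This is a minimal illustration, not the authors' reference implementation: the function names and the exact point at which the projection is interleaved with the offset update are our own assumptions; the projection itself uses the standard pool-adjacent-violators algorithm for isotonic regression.

```python
def pava(x):
    """Euclidean projection of x onto the isotonic cone
    K = {x : x_1 <= x_2 <= ... <= x_m}, via pool-adjacent-violators."""
    blocks = []  # each block is [sum, count]; block mean is sum / count
    for v in x:
        blocks.append([v, 1])
        # Merge adjacent blocks while their means violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

def multiqt_step(theta, q_t, b_next, y_t, alphas, eta):
    """One MultiQT round (sketch): score the issued forecasts q_t against
    the observed y_t, take a QT gradient step per quantile level, then
    project the next adjusted forecasts onto the isotonic cone."""
    cov = [1.0 if y_t <= q else 0.0 for q in q_t]          # coverage indicators
    theta = [th - eta * (c - a) for th, c, a in zip(theta, cov, alphas)]
    q_next = pava([b + th for b, th in zip(b_next, theta)])  # enforce ordering
    return theta, q_next
```

Note the direction of the update: if the α-level forecast covers too often (cov exceeds α on average), its offset is pushed down, and vice versa; the projection then repairs any crossings introduced by the independent per-level steps.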

The authors cast the problem as a constrained gradient‑equilibrium task. They extend the notion of “gradient equilibrium” (Angelopoulos et al., 2025) to settings where iterates must satisfy convex constraints. By employing a “lazy” stochastic gradient descent—gradient step followed by projection—they prove that if the loss function and constraint set satisfy an “inward‑flow” condition, the average gradient converges to zero, guaranteeing equilibrium. They verify that the calibration loss (the absolute deviation between empirical coverage and target α) together with the monotonicity constraint fulfills this condition, thus establishing a theoretical guarantee that MultiQT achieves the desired long‑run coverage for every quantile level.

Beyond calibration, the paper provides a no‑regret bound with respect to the standard quantile (pinball) loss
L(q, y) = ∑_{α∈A} ρ_α(y − q_α),  where ρ_α(u) = max(αu, (α − 1)u),
implying that, asymptotically, the average quantile loss of the adjusted forecasts is no worse than that of the base forecaster.
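For concreteness, the standard pinball form of this loss can be written out directly (a short sketch; the summed-over-levels structure follows the paper, while the function names are ours):

```python
def pinball(q, y, alpha):
    """Quantile (pinball) loss rho_alpha(y - q) for a single level alpha:
    max(alpha * u, (alpha - 1) * u) with u = y - q."""
    u = y - q
    return max(alpha * u, (alpha - 1.0) * u)

def quantile_loss(qs, y, alphas):
    """Total quantile loss L(q, y) = sum over alpha in A of rho_alpha(y - q_alpha)."""
    return sum(pinball(q, y, a) for q, a in zip(qs, alphas))
```

Under-prediction at a high level (say α = 0.9) is penalized by a factor α, while over-prediction is penalized by 1 − α, which is what makes the α-quantile the minimizer in expectation.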

