Safety Beyond the Training Data: Robust Out-of-Distribution MPC via Conformalized System Level Synthesis


We present a novel framework for robust out-of-distribution planning and control using conformal prediction (CP) and system level synthesis (SLS), addressing the challenge of ensuring safety and robustness when using learned dynamics models beyond the training data distribution. We first derive high-confidence model error bounds using weighted CP with a learned, state-control-dependent covariance model. These bounds are integrated into an SLS-based robust nonlinear model predictive control (MPC) formulation, which performs constraint tightening over the prediction horizon via volume-optimized forward reachable sets. We provide theoretical guarantees on coverage and robustness under distributional drift, and analyze the impact of data density and trajectory tube size on prediction coverage. Empirically, we demonstrate our method on nonlinear systems of increasing complexity, including a 4D car and a 12D quadcopter, improving safety and robustness compared to fixed-bound and non-robust baselines, especially outside of the data distribution.


💡 Research Summary

The paper introduces CP‑SLS‑MPC, a novel robust model predictive control framework that guarantees high‑probability safety when a learned nonlinear dynamics model is used outside its training distribution. The method combines weighted conformal prediction (WCP) with system‑level synthesis (SLS) to produce state‑and‑control‑dependent error bounds and to embed these bounds directly into a closed‑loop MPC formulation.

First, the authors train a dynamics model $\hat f$ and a separate neural network $\Sigma(x,u)$ that predicts a local covariance of the model error. Trained with a multivariate Gaussian negative-log-likelihood loss, $\Sigma$ captures the shape and magnitude of the error dispersion but is not calibrated. To obtain calibrated high-confidence error sets, they apply WCP: for each nominal pair $(z_k, v_k)$ they compute non-conformity scores $s_{i,k} = (f(x_i,u_i)-\hat f(x_i,u_i))^\top \Sigma(z_k,v_k)^{-1} (f(x_i,u_i)-\hat f(x_i,u_i))$, weight each score by a distance-based factor $w_{k,i} = \rho\,\|(z_k,v_k)-(x_i,u_i)\|^2$, normalize the weights, and form a weighted empirical distribution. The $(1-\alpha_k)$ quantile of this distribution yields a scalar $q_{1-\alpha_k}(z_k,v_k)$. The calibrated error ellipsoid is then $E(z_k,v_k)=\{\epsilon : \epsilon^\top \Sigma(z_k,v_k)^{-1}\epsilon \le q_{1-\alpha_k}(z_k,v_k)\}$, or equivalently $E(z_k,v_k)= \sqrt{q_{1-\alpha_k}(z_k,v_k)}\, L(z_k,v_k)\, B^{n_x}$, where $L$ is the Cholesky factor of $\Sigma$ and $B^{n_x}$ is the unit ball. WCP guarantees $\Pr\!\left[\epsilon_k \in E(z_k,v_k)\right] \ge 1-\alpha_k$ for each prediction step $k$.
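The calibration step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name `weighted_cp_radius`, the exact weighting form, and the tie-breaking at the quantile are assumptions; the summary itself only specifies Mahalanobis-type scores, distance-based weights, and a weighted empirical quantile.

```python
import numpy as np

def weighted_cp_radius(errors, Sigma_inv, dists, alpha, rho=1.0):
    """Weighted-CP quantile q_{1-alpha}(z_k, v_k) for one query point.

    errors    : (N, n_x) calibration residuals f(x_i,u_i) - fhat(x_i,u_i)
    Sigma_inv : (n_x, n_x) inverse of the learned covariance Sigma(z_k, v_k)
    dists     : (N,) distances ||(z_k,v_k) - (x_i,u_i)|| to calibration points
    alpha     : miscoverage level (hypothetical fixed value; the paper's
                alpha_k may vary over the horizon)
    """
    # Mahalanobis-type non-conformity scores s_{i,k}
    scores = np.einsum("ni,ij,nj->n", errors, Sigma_inv, errors)
    # Distance-based weights w_{k,i} = rho * ||.||^2, then normalize
    w = rho * dists ** 2
    w = w / w.sum()
    # (1 - alpha) quantile of the weighted empirical distribution:
    # smallest score whose cumulative normalized weight reaches 1 - alpha
    order = np.argsort(scores)
    cum = np.cumsum(w[order])
    idx = np.searchsorted(cum, 1.0 - alpha)
    return scores[order][min(idx, len(scores) - 1)]
```

Membership in the calibrated ellipsoid is then the check `eps @ Sigma_inv @ eps <= q`, which is what the SLS-MPC layer would tighten constraints against.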

