Regularized Model Predictive Control


In model predictive control (MPC), the choice of cost-weighting matrices and the design of the Hessian matrix directly affect the trade-off between rapid state regulation and low control effort. Traditional quadratic-programming formulations of MPC, however, rely on fixed design matrices across the entire horizon, which can lead to suboptimal performance. This study presents a Riccati-equation-based method for adjusting the design matrix within the MPC framework, which enhances real-time performance. We employ a penalized least-squares (PLS) approach to derive a quadratic cost function for a discrete-time linear system over a finite prediction horizon. Using the method of weighting, which enforces the equality constraint by introducing a large penalty parameter, we solve the constrained optimization problem and generate control inputs for forward-shifted horizons. This process yields a recursive PLS-based Riccati equation that updates the design matrix as a regularization term at each shift, forming the foundation of the regularized MPC (Re-MPC) algorithm. We also provide a convergence and stability analysis of the developed algorithm. Numerical analysis demonstrates its superiority over traditional methods by allowing Riccati-equation-based adjustments.


💡 Research Summary

The paper addresses a fundamental limitation of conventional Model Predictive Control (MPC): the use of fixed weighting matrices (Q, R, and terminal weight P) throughout the prediction horizon, which hampers adaptability to changing dynamics or disturbances. To overcome this, the authors propose a Regularized MPC (Re‑MPC) framework that dynamically updates the design (Hessian) matrix at each time step using a Riccati‑equation‑based recursion derived from a penalized least‑squares (PLS) formulation.

The authors first formulate the constrained optimal control problem for a discrete‑time linear time‑invariant (LTI) system with linear state and input inequality constraints. Under standard controllability and detectability assumptions, the cost function is a quadratic sum of state and input penalties. They then introduce a PLS approach: an equality constraint (F\eta = \phi) is enforced by augmenting the least‑squares objective with a large penalty parameter (\mu). Lemma 1 guarantees a unique solution for the unconstrained weighted LS problem, while Lemma 2 shows that as (\mu \to \infty) the solution converges to the exact constrained optimum. This transformation enables the problem to be treated as an unconstrained quadratic program without sacrificing constraint satisfaction.
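The method-of-weighting idea above can be sketched numerically: the equality constraint (F\eta = \phi) is folded into the objective with a penalty (\mu), and as (\mu) grows the unconstrained solution approaches the exact constrained optimum (Lemma 2). The matrices below are random illustrative stand-ins, not the paper's MPC matrices:

```python
import numpy as np

# Penalized least-squares (PLS) sketch: enforce F @ eta = phi by adding a
# large penalty mu to an ordinary least-squares objective (method of weighting).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))   # LS design matrix (illustrative)
b = rng.standard_normal(8)        # LS target vector
F = rng.standard_normal((2, 4))   # equality-constraint matrix
phi = rng.standard_normal(2)      # constraint right-hand side

def pls_solution(mu):
    """Minimize ||A eta - b||^2 + mu * ||F eta - phi||^2 (unconstrained)."""
    H = A.T @ A + mu * F.T @ F     # penalty-augmented Hessian
    g = A.T @ b + mu * F.T @ phi
    return np.linalg.solve(H, g)

# Exact equality-constrained optimum via the KKT system, for comparison.
n, m = A.shape[1], F.shape[0]
KKT = np.block([[A.T @ A, F.T], [F, np.zeros((m, m))]])
rhs = np.concatenate([A.T @ b, phi])
eta_exact = np.linalg.solve(KKT, rhs)[:n]

# As mu grows, the PLS solution converges to the constrained optimum.
for mu in (1e2, 1e6, 1e10):
    err = np.linalg.norm(pls_solution(mu) - eta_exact)
    print(f"mu = {mu:.0e}  ->  ||eta_mu - eta*|| = {err:.2e}")
```

The printed errors shrink monotonically with (\mu), which is exactly the convergence behavior the summary attributes to Lemma 2.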

Next, the authors construct augmented system matrices (\bar A) and (\bar B) and block‑diagonal weighting matrices (\bar Q) and (\bar R). By applying the PLS solution to the MPC optimization, they derive a closed‑form optimal control law (\bar U_k^\ast = K_k x_k), where the gain matrix (K_k) involves a matrix inverse that resembles the classic LQR solution but incorporates the penalty‑augmented weighting matrix (H_1). Crucially, after each horizon shift the terminal cost matrix (P_{k+l-1}) is updated via a recursive Riccati‑type equation, so the design matrix acts as a regularization term that is refreshed at every shift.

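The gain computation and per-shift Riccati-style update described above can be illustrated with a minimal sketch. The exact Re‑MPC recursion (with the penalty-augmented matrix (H_1)) is not reproduced in this summary, so the sketch uses the classic discrete-time LQR Riccati form that the gain is said to resemble; the system matrices are hypothetical:

```python
import numpy as np

# Hedged sketch: a receding-horizon loop whose terminal-weight matrix P is
# refreshed at every horizon shift by the classic discrete Riccati recursion.
# (The paper's Re-MPC uses penalty-augmented matrices; this is a stand-in.)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics
B = np.array([[0.005], [0.1]])
Q = np.eye(2)                            # state weight
R = np.array([[0.1]])                    # input weight

def riccati_step(P):
    """One update: P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA."""
    S = R + B.T @ P @ B
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

P = Q.copy()                              # initial terminal-cost matrix
x = np.array([1.0, 0.0])                  # initial state
for k in range(50):                       # forward-shifted horizons
    P = riccati_step(P)                   # refresh design matrix at this shift
    S = R + B.T @ P @ B
    K = np.linalg.solve(S, B.T @ P @ A)   # state-feedback gain for this shift
    u = -K @ x                            # control input u_k = -K_k x_k
    x = A @ x + B @ u                     # apply input to the plant

print("final state norm:", np.linalg.norm(x))
```

The loop mirrors the summary's structure: at each shift the terminal weight is updated recursively, a gain is recomputed from it, and only the first input is applied before the horizon moves forward.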
