Adaptive Behavioral Predictive Control: State-Free Regulation Without Hankel Weights

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

This paper presents adaptive behavioral predictive control (ABPC), an indirect adaptive predictive control framework operating on streaming data. An LPV–ARX predictor is identified online via kernel–recursive least squares and used to compute closed-form predictive control sequences over a finite horizon, avoiding batch Hankel constructions and iterative optimization. Nonlinear kernel dictionaries extend model expressiveness within a behavioral formulation. Numerical studies on Hammerstein and NARX systems demonstrate effective performance when the dictionary aligns with the plant class and highlight conditioning and feature-selection effects. The paper emphasizes numerical simulation, computational feasibility, and reproducibility.


💡 Research Summary

The paper introduces Adaptive Behavioral Predictive Control (ABPC), an indirect adaptive predictive control scheme that operates entirely on streaming input‑output data without the need for batch Hankel matrix construction or iterative quadratic programming. The authors build on the behavioral view of dynamical systems, which treats admissible trajectories as the primary object, and on earlier adaptive predictive controllers such as Retrospective Cost Adaptive Control (RCAC) and Predictive Cost Adaptive Control (PCAC).

ABPC’s core consists of two tightly coupled components. First, a kernel‑based recursive least‑squares (RLS) algorithm identifies a linear‑parameter‑varying autoregressive model with exogenous inputs (LPV‑ARX) in real time. A dictionary of kernel functions {γj} is evaluated on a lagged window of past inputs and outputs, producing a feature vector zk; the resulting predictor remains linear in the unknown parameter vector θ. Because of this linearity, standard RLS with a forgetting factor λ and a Tikhonov regularization term updates θ at each sampling instant, using each new data pair exactly once and without storing past trajectories.
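The identification step can be sketched as a standard exponentially weighted RLS recursion. This is a minimal scalar-output illustration, not the paper's exact implementation; here the Tikhonov-style regularization enters only through the initialization of the covariance matrix P.

```python
import numpy as np

def rls_update(theta, P, z, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam.

    theta : current parameter estimate, shape (n,)
    P     : current covariance-like matrix, shape (n, n)
    z     : feature (regressor) vector for this sample, shape (n,)
    y     : measured scalar output
    """
    Pz = P @ z
    gain = Pz / (lam + z @ Pz)          # Kalman-style gain vector
    err = y - z @ theta                 # a priori prediction error
    theta = theta + gain * err          # parameter correction
    P = (P - np.outer(gain, Pz)) / lam  # covariance update with forgetting
    return theta, P

# Regularization via initialization: P0 = I / alpha (alpha is a tuning knob).
n = 4
alpha = 1e-2
theta = np.zeros(n)
P = np.eye(n) / alpha

# Drive the recursion with synthetic noiseless data from a known model.
rng = np.random.default_rng(0)
true_theta = np.array([0.5, -0.3, 1.0, 0.2])
for _ in range(200):
    z = rng.standard_normal(n)
    y = z @ true_theta
    theta, P = rls_update(theta, P, z, y)
```

With persistently exciting, noiseless data the estimate converges to the generating parameters; each data pair is used exactly once, matching the streaming setting described above.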

Second, the freshly identified LPV‑ARX coefficients (Ck, Ak,i, Bk,i) are frozen over a finite prediction horizon N. By stacking the one‑step predictor repeatedly, the authors derive a Toeplitz‑structured, block‑lower‑triangular operator Ty and a corresponding input operator Tu such that the stacked future outputs satisfy (I – Ty)Y = σk + TuU; since I – Ty is unit lower triangular and hence invertible, this can be rewritten as Y = Sk + GkU with Sk = (I – Ty)⁻¹σk and Gk = (I – Ty)⁻¹Tu. This formulation mirrors the classic finite‑horizon structure used in Generalized Predictive Control (GPC) but avoids the explicit construction of large Hankel matrices.
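The stacking can be sketched for a frozen SISO ARX model, a simplification of the paper's LPV‑ARX case; the function name and indexing conventions below are illustrative, not the paper's.

```python
import numpy as np

def prediction_matrices(a, b, N):
    """Stacked prediction maps for a frozen SISO ARX model
        y[k] = sum_i a[i] * y[k-1-i] + sum_i b[i] * u[k-1-i]
    over a horizon N.  Returns Ty, Tu with (I - Ty) Y = sigma + Tu U,
    so G = inv(I - Ty) @ Tu maps future inputs to future outputs.
    """
    na, nb = len(a), len(b)
    Ty = np.zeros((N, N))
    Tu = np.zeros((N, N))
    for j in range(N):            # row j predicts y[k+1+j]
        for i in range(na):       # AR terms that refer to *future* outputs
            col = j - 1 - i
            if col >= 0:
                Ty[j, col] = a[i]
        for i in range(nb):       # input terms that refer to *future* inputs
            col = j - i
            if col >= 0:
                Tu[j, col] = b[i]
    G = np.linalg.solve(np.eye(N) - Ty, Tu)  # I - Ty is unit lower triangular
    return Ty, Tu, G

# Example: first-order plant y[k] = 0.5 y[k-1] + u[k-1]
Ty, Tu, G = prediction_matrices([0.5], [1.0], N=4)
Y_step = G @ np.ones(4)  # step response from zero initial conditions
```

Because Ty is strictly lower triangular, I − Ty is always invertible, which is what makes the rewrite Y = Sk + GkU possible.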

A quadratic performance index is defined as

J(U) = ½ (Sk + GkU – R)ᵀ Qy (Sk + GkU – R) + ½ ΔUᵀ Ru ΔU,

where Qy ≥ 0 penalizes tracking error, Ru ≻ 0 penalizes input increments, and ΔU = D U – dk applies a first‑difference operator D with offset dk. Expanding the cost yields a symmetric Hessian H = Gkᵀ Qy Gk + Dᵀ Ru D and a linear term h = Gkᵀ Qy (Sk – R) – Dᵀ Ru dk. Because Ru is positive definite and the first‑difference operator D has full rank, H is positive definite and hence invertible, and the unique minimizer satisfies H U* = –h. The authors solve this linear system by a Cholesky factorization H = LLᵀ, followed by forward and backward substitution, which is computationally cheap and numerically robust. The first element of the optimal input sequence is applied to the plant in a receding‑horizon fashion.
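This closed-form solve is easy to sketch in NumPy. The helper name below is illustrative, and the offset dk is carried explicitly in the linear term so the gradient is exact for nonzero dk.

```python
import numpy as np

def receding_horizon_input(G, S, R, Qy, Ru, D, d):
    """Closed-form minimizer of
        J(U) = 1/2 (S + G U - R)^T Qy (S + G U - R)
             + 1/2 (D U - d)^T Ru (D U - d),
    via a Cholesky solve of the normal equations H U = -h.
    """
    H = G.T @ Qy @ G + D.T @ Ru @ D        # Hessian: PD when Ru > 0, D full rank
    h = G.T @ Qy @ (S - R) - D.T @ Ru @ d  # gradient of J evaluated at U = 0
    L = np.linalg.cholesky(H)              # H = L L^T
    y = np.linalg.solve(L, -h)             # forward substitution
    U = np.linalg.solve(L.T, y)            # backward substitution
    return U                               # apply U[0]; re-solve next sample

# Tiny illustrative instance: identity prediction map, unit reference.
N = 2
U_opt = receding_horizon_input(np.eye(N), np.zeros(N), np.ones(N),
                               np.eye(N), 0.1 * np.eye(N),
                               np.eye(N), np.zeros(N))
```

No iterative QP solver appears anywhere: one factorization and two triangular solves per sampling instant, which is the computational point the authors emphasize.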

The paper presents extensive numerical experiments on three families of systems: (i) Hammerstein models with polynomial and cross‑term nonlinearities, (ii) nonlinear ARX (NARX) models with various kernel choices (unit, polynomial, radial‑basis‑function), and (iii) a set of higher‑order polynomial and quaternion‑based dynamics. Results show that when the kernel dictionary matches the underlying plant class, ABPC achieves fast convergence, low tracking error, and well‑conditioned normal matrices. Polynomial kernels excel for systems whose nonlinearity is itself polynomial, while RBF kernels sometimes outperform the unit dictionary in mildly nonlinear regimes. Conversely, for plants exhibiting strong sinusoidal or oscillatory behavior that cannot be captured by the chosen dictionary, the condition number of the stacked prediction matrix deteriorates, leading to degraded performance; in these cases the unit (constant) dictionary still guarantees stability by implicitly representing the internal model.
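The kernel dictionaries compared in these experiments can be illustrated with a small feature-map sketch. The function name, option names, and parameter defaults below are assumptions for illustration, not the paper's exact dictionary definitions.

```python
import numpy as np

def features(window, kind="unit", degree=2, centers=None, width=1.0):
    """Map a lagged window of past I/O data to a feature vector z_k.

    'unit' reproduces the plain linear ARX regressor (the PCAC-like
    special case); 'poly' appends elementwise monomials up to `degree`;
    'rbf' appends Gaussian kernels evaluated at a set of centers.
    """
    w = np.asarray(window, dtype=float)
    if kind == "unit":
        return w
    if kind == "poly":
        return np.concatenate([w**p for p in range(1, degree + 1)])
    if kind == "rbf":
        centers = np.atleast_2d(centers)
        dists = np.linalg.norm(w - centers, axis=1)  # distance to each center
        return np.concatenate([w, np.exp(-(dists / width) ** 2)])
    raise ValueError(kind)
```

Richer dictionaries lengthen z_k, which is where the conditioning trade-off discussed above comes from: more expressive features can make the normal matrix nearly rank-deficient when they do not match the plant class.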

Key contributions highlighted by the authors are:

  1. A fully online adaptive predictive controller that eliminates batch Hankel‑matrix construction and iterative QP solving, thereby reducing memory footprint and computational latency.
  2. Integration of kernel‑RLS identification with Toeplitz stacking to obtain a closed‑form Cholesky‑based optimal control law, preserving the familiar GPC‑style prediction‑control architecture while extending it to nonlinear, LPV‑ARX models.
  3. Demonstration that the proposed method nests PCAC as a special case (unit dictionary) and that expressive kernel dictionaries can broaden the class of representable dynamics without sacrificing convexity of the control problem.
  4. A systematic numerical study mapping kernel choice, matrix conditioning, and closed‑loop performance, offering practical guidance for feature‑selection in real‑time applications.

The authors acknowledge that formal stability, robustness, and convergence proofs are absent; the analysis relies on numerical evidence and the positive definiteness of the Hessian under standard weighting choices. They suggest future work on theoretical guarantees, adaptive forgetting strategies, and experimental validation on hardware platforms. Overall, ABPC represents a promising step toward truly real‑time, data‑driven predictive control for both linear and nonlinear systems, leveraging kernel methods to balance model expressiveness with computational tractability.

