Periodic Chandrasekhar recursions

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

This paper extends the Chandrasekhar-type recursions due to Morf, Sidhu, and Kailath ("Some new algorithms for recursive estimation in constant, linear, discrete-time systems," IEEE Trans. Autom. Control 19 (1974) 315–323) to periodic time-varying state-space models. We show that the S-lagged increments of the one-step prediction error covariance satisfy certain recursions, from which we derive algorithms for linear least squares estimation in periodic state-space models. The proposed recursions offer potential computational advantages over the Kalman filter and, in particular, over the periodic Riccati difference equation.


💡 Research Summary

The paper extends the classic Chandrasekhar‑type recursions originally proposed by Morf, Sidhu and Kailath (1974) to periodic time‑varying state‑space models. After recalling the standard linear periodic state‑space representation

 xₜ₊₁ = Fₜ xₜ + Gₜ εₜ ,  yₜ = Hₜ′ xₜ + eₜ ,

with periodic system matrices of period S, the authors review the conventional Kalman filter equations (2a–2e). The one‑step prediction error covariance Σₜ obeys a periodic Riccati difference equation (PRDE) that requires O(r³) operations per step (r = state dimension) and suffers from numerical difficulties in preserving positive definiteness.
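For orientation, one PRDE step can be sketched as follows. This is a minimal sketch, not the paper's exact equations: the noise-covariance names Q and R and the gain convention are our assumptions.

```python
import numpy as np

def prde_step(Sigma, F, G, H, Q, R):
    """One step of the periodic Riccati difference equation (sketch).

    Propagates the one-step prediction error covariance Sigma_t to
    Sigma_{t+1} for x_{t+1} = F x_t + G eps_t, y_t = H' x_t + e_t,
    with Cov(eps_t) = Q and Cov(e_t) = R (names are our assumptions).
    """
    Omega = H.T @ Sigma @ H + R   # innovation covariance Omega_t
    K = F @ Sigma @ H             # (un-normalized) Kalman gain K_t
    # Riccati update: quadratic in Sigma, O(r^3) work per step
    Sigma_next = F @ Sigma @ F.T + G @ Q @ G.T - K @ np.linalg.solve(Omega, K.T)
    return Sigma_next, K, Omega
```

In the periodic setting one would iterate this with `F[t % S]`, `H[t % S]`, and so on; the cubic cost per step is what the Chandrasekhar recursions below are designed to avoid.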

To overcome these drawbacks the authors introduce the S‑lagged increment Δ_S Σₜ = Σₜ₊ₛ – Σₜ and derive two matrix difference equations (3) and (4) governing its evolution. The key insight is that Δ_S Σₜ can be factorized as

 Δ_S Σₜ = Yₜ Mₜ Yₜ′ ,

where Yₜ ∈ ℝ^{r×p} and Mₜ ∈ ℝ^{p×p} are, respectively, a rectangular matrix and a symmetric matrix of rank p ≤ r. Because p is bounded by the rank of the initial increment Δ_S Σ₁, which is often much smaller than r (especially when the output dimension m ≪ r), the factorization yields a low‑dimensional representation of the covariance dynamics.
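Since the factorization is not unique, one admissible choice (a sketch of ours, not the paper's prescription) is a symmetric eigendecomposition of the increment that discards numerically zero eigenvalues:

```python
import numpy as np

def factorize_increment(delta_sigma, tol=1e-10):
    """Factor a symmetric increment as Y @ M @ Y.T with M of rank p.

    Uses an eigendecomposition and keeps only eigenvalues whose magnitude
    exceeds tol; this is one admissible choice among many, since the
    factorization Delta_S Sigma = Y M Y' is not unique.
    """
    w, V = np.linalg.eigh(delta_sigma)   # delta_sigma = V diag(w) V'
    keep = np.abs(w) > tol               # p = number of retained eigenvalues
    Y = V[:, keep]                       # r x p factor with orthonormal columns
    M = np.diag(w[keep])                 # p x p symmetric (possibly indefinite)
    return Y, M
```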

Theorem 3.1 establishes the exact recursions for Δ_S Σₜ by manipulating the normalized Kalman gain K̃ₜ = Kₜ Ωₜ⁻¹ in both forward and backward forms and by exploiting the relation Δ_S Ωₜ = Hₜ′ Δ_S Σₜ Hₜ. Substituting the factorization into (3) and (4) leads to linear updates for Yₜ and Mₜ, eliminating the quadratic term in Σₜ.

Algorithm 3.1 replaces the Kalman filter (2c)–(2d) with the following set of recursions:

 (a) Ωₜ₊ₛ = Ωₜ + Hₜ′ Yₜ Mₜ Yₜ′ Hₜ,
 (b) Kₜ₊ₛ = Kₜ + Fₜ Yₜ Mₜ Yₜ′ Hₜ,
 (c) Yₜ₊₁ = (Fₜ – Kₜ₊ₛ Ωₜ₊ₛ⁻¹ Hₜ′) Yₜ,
 (d) Mₜ₊₁ = Mₜ + Mₜ Yₜ′ Hₜ Ωₜ⁻¹ Hₜ′ Yₜ Mₜ.
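One pass of recursions (a)–(d) can be sketched as below. This is a hedged sketch, not the paper's code: the function and variable names are ours, and the initial quantities are assumed to be supplied by a preliminary Riccati run as described next.

```python
import numpy as np

def chandrasekhar_step(F, H, Omega, K, Y, M):
    """One pass of recursions (a)-(d) (sketch): given Omega_t, K_t, Y_t, M_t
    and the period-t system matrices, return Omega_{t+S}, K_{t+S},
    Y_{t+1}, M_{t+1}."""
    HY = H.T @ Y                                    # H_t' Y_t, an m x p block
    Omega_next = Omega + HY @ M @ HY.T              # (a)
    K_next = K + F @ Y @ M @ HY.T                   # (b)
    Y_next = (F - K_next @ np.linalg.solve(Omega_next, H.T)) @ Y   # (c)
    M_next = M + M @ HY.T @ np.linalg.solve(Omega, HY) @ M         # (d)
    return Omega_next, K_next, Y_next, M_next
```

All products involve only r×p, p×p, and m×m blocks, which is where the complexity gain discussed below comes from.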

The initial values Ωₛ, Kₛ (s = 1,…,S) are obtained from the standard Riccati equation, while Y₁ and M₁ are derived by factorizing the first increment Δ_S Σ₁ = Σ₁₊ₛ – Σ₁. Because the factorization is not unique, the authors discuss two practical choices. When M₁ is negative definite (which occurs for periodically stationary systems), Algorithm 3.2 provides an alternative formulation:

 Yₜ₊₁ = (Fₜ – Kₜ Ωₜ⁻¹ Hₜ′) Yₜ,
 Mₜ₊₁ = Mₜ – Mₜ Yₜ′ Hₜ Ωₜ₊ₛ⁻¹ Hₜ′ Yₜ Mₜ,

which can be linearized further by working with Mₜ⁻¹.
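One way to see this linearization: combining the Mₜ update above with Ωₜ₊ₛ = Ωₜ + Hₜ′ Yₜ Mₜ Yₜ′ Hₜ and applying the matrix inversion lemma gives Mₜ₊₁⁻¹ = Mₜ⁻¹ + Yₜ′ Hₜ Ωₜ⁻¹ Hₜ′ Yₜ, which is linear in Mₜ⁻¹. A quick numerical check of this identity (the dimensions and scalings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r, m, p = 5, 2, 2                          # illustrative dimensions

H = rng.standard_normal((r, m))
Y = rng.standard_normal((r, p))
A = rng.standard_normal((p, p))
M = -0.01 * (A @ A.T + np.eye(p))          # negative definite, as assumed here
B = rng.standard_normal((m, m))
Omega = B @ B.T + 10.0 * np.eye(m)         # innovation covariance Omega_t

HY = H.T @ Y                               # H_t' Y_t
Omega_next = Omega + HY @ M @ HY.T         # Omega_{t+S}, from recursion (a)
M_next = M - M @ HY.T @ np.linalg.solve(Omega_next, HY) @ M  # Algorithm 3.2

# Matrix inversion lemma: M_{t+1}^{-1} = M_t^{-1} + Y_t' H_t Omega_t^{-1} H_t' Y_t
lhs = np.linalg.inv(M_next)
rhs = np.linalg.inv(M) + HY.T @ np.linalg.solve(Omega, HY)
print(np.allclose(lhs, rhs))               # True: the update is linear in M^{-1}
```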

A substantial portion of the paper is devoted to the initialization problem. The authors distinguish two regimes. If S·m < r, the rank of Δ_S Σ₁ is small, and Y₁, M₁ can be obtained directly from the explicit expression (12) that expands Δ_S Σ₁ in terms of the periodic system matrices, Kalman gains, and noise covariances. If S·m ≥ r, the authors suggest running the standard Kalman filter for a few periods to compute Σ₁,…,Σ_S and then extracting Δ_S Σ₁. The choice of factorization critically influences the computational advantage of the proposed method.

The computational benefit is clear: when r ≫ m, the dominant cost of the Kalman filter (matrix multiplications of size r×r) is replaced by operations involving Yₜ (size r×p) and Mₜ (size p×p), where p = rank(Δ_S Σ₁) is often comparable to m rather than r. Consequently, the per‑step complexity drops from O(r³) to roughly O(r² p) or O(r p²), which can be a dramatic reduction for high‑dimensional state vectors. Moreover, because the recursions operate on the increment Δ_S Σₜ, the positive‑definiteness of Σₜ is automatically preserved as long as the factorization is performed correctly, improving numerical stability.
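A back-of-the-envelope illustration of these leading-order costs (the dimensions are hypothetical, constants are ignored, and p is taken at the rank bound discussed above):

```python
# Leading-order per-step cost (flops up to constants), for illustration only.
r, m, S = 50, 1, 4            # assumed state dim, output dim, and period
p = min(S * m, r)             # bound on rank(Delta_S Sigma_1), cf. the text

riccati = r**3                          # PRDE: r x r matrix products dominate
chandrasekhar = r**2 * p + r * p**2     # updates of Y (r x p) and M (p x p)

print(riccati // chandrasekhar)         # prints 11: roughly an r/p speed-up
```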

Finally, the authors note that the periodic Chandrasekhar recursions are directly applicable to maximum‑likelihood estimation of periodic ARMA (PARMA) models, computation of the exact Fisher information matrix, and fast recursive least‑squares algorithms for periodic systems. By avoiding the full periodic Riccati difference equation, the proposed algorithms open the way to efficient real‑time processing of large‑scale periodic models that arise in signal processing, econometrics, and control engineering.

