Low-rank approximation of Rippa method for RBF interpolation


We study the problem of selecting the shape parameter in radial basis function (RBF) interpolation using leave-one-out cross-validation (LOOCV). Since the classical LOOCV formula requires repeated solves with a dense $N \times N$ kernel matrix, we combine a Nyström approximation with the Woodbury identity to obtain an efficient surrogate objective that avoids large matrix inversions. Based on this reduced form, we compare a grid-based search with a gradient descent strategy and examine their behavior across different dimensions. Numerical experiments are performed in 1D, 2D, and 3D using the inverse multiquadric RBF to illustrate the computational advantages of the approximation as well as the situations in which it may introduce additional sensitivity. These results show that the proposed acceleration makes LOOCV-based parameter tuning practical for larger datasets while preserving the qualitative behavior of the full method.


💡 Research Summary

This paper addresses the computational bottleneck inherent in Rippa’s leave‑one‑out cross‑validation (LOOCV) method for selecting the shape parameter ϵ in radial basis function (RBF) interpolation. The classical LOOCV requires solving a dense N × N interpolation system for each candidate ϵ, leading to an O(N³) cost that quickly becomes prohibitive for moderate‑size data sets. To overcome this, the authors combine two well‑known linear‑algebraic tools: a Nyström low‑rank approximation of the kernel matrix and the Woodbury matrix identity.
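Rippa's identity is what makes the classical method tractable at all: with interpolation coefficients $c = A^{-1} f$, the $k$-th leave-one-out residual equals $c_k / (A^{-1})_{kk}$, so one factorization of $A$ replaces $N$ separate refits. A minimal NumPy sketch of this identity (the inverse multiquadric kernel matches the paper's experiments, but the function names and toy setup are illustrative, not the authors' code):

```python
import numpy as np

def imq_kernel(X, Y, eps):
    # Inverse multiquadric RBF: phi(r) = 1 / sqrt(1 + (eps * r)^2)
    r2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return 1.0 / np.sqrt(1.0 + eps**2 * r2)

def rippa_loocv(X, f, eps):
    # Rippa's identity: the k-th leave-one-out error is c_k / (A^{-1})_{kk},
    # where c = A^{-1} f, so a single O(N^3) factorization replaces N refits.
    A = imq_kernel(X, X, eps)
    Ainv = np.linalg.inv(A)
    c = Ainv @ f
    errors = c / np.diag(Ainv)
    return np.linalg.norm(errors)
```

The surrogate objective described below keeps this formula but replaces the exact $A^{-1}$ with a low-rank-plus-regularization inverse.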

First, the full interpolation matrix A with entries A_{ij} = ϕ(‖x_i − x_j‖; ϵ) is approximated by Ã_Nys = C W⁻¹ Cᵀ, where C ∈ ℝ^{N×m} contains kernel evaluations between all data points and a set of m landmark points, and W ∈ ℝ^{m×m} contains kernel evaluations among the landmarks themselves. The landmarks are obtained via k‑means++ clustering, which provides a data‑driven selection of representative centers. With m fixed, constructing C costs O(Nm) and inverting the small matrix W costs O(m³).
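The Nyström construction can be sketched as follows (the helper names are illustrative; the paper selects landmarks by k-means++, while this sketch accepts any landmark set):

```python
import numpy as np

def imq(X, Y, eps):
    # Inverse multiquadric RBF: phi(r) = 1 / sqrt(1 + (eps * r)^2)
    r2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return 1.0 / np.sqrt(1.0 + eps**2 * r2)

def nystrom(X, landmarks, eps):
    # C: kernel between all N points and the m landmarks, O(N m) entries.
    # W: kernel among the m landmarks; the only matrix ever factorized is m x m.
    C = imq(X, landmarks, eps)
    W = imq(landmarks, landmarks, eps)
    return C @ np.linalg.solve(W, C.T)   # rank-m approximation C W^{-1} C^T
```

When the landmarks coincide with the full point set, the approximation reproduces A exactly; with m ≪ N landmarks it yields a rank-m surrogate.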

Second, a small regularization term λ_reg I is added to obtain Ã_reg = Ã_Nys + λ_reg I. Applying the Woodbury identity yields an explicit formula for the inverse of Ã_reg that involves only the m × m matrix (W + λ_reg⁻¹ CᵀC)⁻¹. Consequently, each evaluation of the LOOCV residual vector E(ϵ) can be performed in O(Nm² + m³) time, i.e., linear in N once m is fixed. This dramatically reduces the cost of the LOOCV objective without ever forming or factorizing an N × N matrix.

Having an efficient surrogate for the LOOCV objective, the authors explore two strategies for locating the optimal ϵ. The first is a conventional two‑stage grid search: a coarse logarithmic sweep over 30 candidates in
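A generic coarse log-grid sweep of this kind might look as follows (the search interval is cut off in the text above, so the bounds here are placeholders, not the paper's values):

```python
import numpy as np

def grid_search_eps(objective, lo=1e-2, hi=1e2, n=30):
    # Coarse stage: evaluate the surrogate LOOCV objective on a logarithmic
    # grid of n candidate shape parameters and keep the minimizer.
    # The interval [lo, hi] is an illustrative placeholder.
    grid = np.logspace(np.log10(lo), np.log10(hi), n)
    vals = np.array([objective(e) for e in grid])
    return grid[np.argmin(vals)]
```

A second, finer sweep (or the gradient-based strategy the authors compare against) would then refine the search around the returned candidate.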

