Physics-informed Polynomial Chaos Expansion with Enhanced Constrained Optimization Solver and D-optimal Sampling


Physics-informed polynomial chaos expansions (PC$^2$) provide an efficient, physically constrained surrogate modeling framework by embedding governing equations and other physical constraints into standard data-driven polynomial chaos expansions (PCE) and solving via the Karush-Kuhn-Tucker (KKT) conditions. This approach improves the physical interpretability of surrogate models while achieving high computational efficiency and accuracy. However, the performance and efficiency of PC$^2$ can still degrade with high-dimensional parameter spaces, limited data availability, or unrepresentative training data. To address this problem, this study explores two complementary enhancements to the PC$^2$ framework. First, a numerically efficient constrained optimization solver, straightforward updating of Lagrange multipliers (SULM), is adopted as an alternative to the conventional KKT solver. The SULM method significantly reduces computational cost when solving physically constrained problems with high dimensionality and derivative boundary conditions that require a large number of virtual points. Second, a D-optimal sampling strategy is used to select informative virtual points, improving the stability of PC$^2$ and balancing its accuracy and efficiency. The proposed methods are integrated into the PC$^2$ framework and evaluated through numerical examples of representative physical systems governed by ordinary or partial differential equations. The results demonstrate that the enhanced PC$^2$ has better overall capability than the standard PC$^2$ and is well suited for high-dimensional uncertainty quantification tasks.


💡 Research Summary

The paper presents two complementary enhancements to the physics‑informed polynomial chaos expansion (PC²) framework, aiming to improve its scalability and robustness for high‑dimensional uncertainty quantification (UQ) problems. The first enhancement replaces the conventional Karush‑Kuhn‑Tucker (KKT) solver, which solves a large augmented linear system containing both the regression matrix and the constraint matrix, with a more efficient algorithm called Straightforward Updating of Lagrange Multipliers (SULM). SULM first computes an unconstrained least‑squares solution β̃ = (ΨᵀΨ)⁻¹ΨᵀY, then forms an updating operator J = −(ΨᵀΨ)⁻¹Aᵀ. The residual of the physical constraints r = c – Aβ̃ is projected onto a reduced constraint matrix Y_c = A J, and the Lagrange multipliers λ are obtained by solving the small linear system Y_c λ = r. The final physically consistent coefficients are β = β̃ + Jλ. By decoupling the constraints from the full KKT matrix, SULM reduces the dominant cubic cost from O((p+n_c)³) to a combination of O(p³), O(p²n_c), O(p n_c²) and O(n_c³), dramatically lowering memory usage and enabling reuse of β̃ in active‑learning scenarios.
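The SULM update described above amounts to solving an equality-constrained least-squares problem (minimize ||Ψβ − Y||² subject to Aβ = c) without assembling the full augmented KKT matrix. A minimal NumPy sketch of that sequence of steps, using the symbols from the summary (Ψ regression matrix, A constraint matrix, c constraint values), might look like the following; dense `solve` calls stand in for whatever factorization an actual implementation would reuse:

```python
import numpy as np

def sulm_fit(Psi, Y, A, c):
    """SULM sketch: constrained least squares
    min ||Psi @ beta - Y||^2  s.t.  A @ beta = c,
    solved without forming the (p + n_c) x (p + n_c) KKT system."""
    # Unconstrained least-squares solution via the normal equations.
    G = Psi.T @ Psi                          # p x p Gram matrix
    beta_tilde = np.linalg.solve(G, Psi.T @ Y)
    # Updating operator J = -(Psi^T Psi)^{-1} A^T  (p x n_c).
    J = -np.linalg.solve(G, A.T)
    # Residual of the physical constraints at the unconstrained fit.
    r = c - A @ beta_tilde
    # Small n_c x n_c reduced system for the Lagrange multipliers.
    Y_c = A @ J
    lam = np.linalg.solve(Y_c, r)
    # Physically consistent coefficients.
    return beta_tilde + J @ lam
```

One can verify that this reproduces the solution of the full KKT block system [[ΨᵀΨ, Aᵀ], [A, 0]] [β; λ] = [ΨᵀY; c], while the largest factorization involved is only p × p, which is the source of the cost reduction the paper reports.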

The second enhancement addresses the selection of virtual points that are used to enforce the governing differential equations and boundary conditions. Rather than relying on crude Monte‑Carlo or Latin‑Hypercube sampling, the authors adopt a D‑optimal design strategy. A large candidate pool (typically three times the desired number of virtual points) is generated, and its transposed matrix Ψ_Vᵀ is factorized by singular value decomposition. The right singular vectors are then ordered using QR factorization with column pivoting, which ranks candidate points by their contribution to the determinant of Ψ_VᵀΨ_V. The top n_V points are retained, maximizing the information content (the D‑optimal criterion) and improving the conditioning of the regression problem.
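The SVD-plus-pivoted-QR ranking described above is a standard subset-selection device, and a small sketch (assuming a candidate matrix `Psi_V` with one row per candidate virtual point, and using SciPy's pivoted QR) could read:

```python
import numpy as np
from scipy.linalg import qr

def d_optimal_select(Psi_V, n_V):
    """Sketch of the D-optimal virtual-point selection: rank the rows of
    the candidate basis matrix Psi_V (candidates x basis terms) by their
    contribution to det(Psi_V^T Psi_V) and keep the top n_V of them."""
    # SVD of the transposed candidate matrix Psi_V^T (terms x candidates).
    _, _, Vt = np.linalg.svd(Psi_V.T, full_matrices=False)
    # QR with column pivoting on the right singular vectors: the pivot
    # order ranks candidate points by information content.
    _, _, piv = qr(Vt, pivoting=True, mode='economic')
    # Indices of the n_V most informative candidate points.
    return np.sort(piv[:n_V])
```

With the three-times-oversized candidate pool mentioned above, `Psi_V` would have 3·n_V rows, and the retained rows form the virtual-point design on which the physical constraints are enforced.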

The combined methodology is evaluated on six benchmark problems ranging from a 1‑D Euler equation with a random constant load, through 2‑D heat conduction PDEs, nonlinear ODEs, to high‑dimensional stochastic models built via Karhunen‑Loève expansions (up to 15 random variables). Four algorithmic configurations are compared: KKT, KKT‑D (KKT with D‑optimal sampling), SULM, and SULM‑D. Results show that SULM consistently reduces wall‑clock time by a factor of 2–5 relative to KKT, especially when the number of virtual points exceeds several hundred. Accuracy metrics (mean‑squared error, PDE residual error, and boundary‑condition error) are markedly improved by D‑optimal sampling; SULM‑D achieves the lowest errors across all test cases, often reducing MSE by 15–30 % compared with KKT‑D. Moreover, SULM‑D remains stable as the stochastic dimension grows, whereas the KKT solver frequently encounters memory failures or ill‑conditioning in high‑dimensional settings.

The study concludes that (i) SULM provides a lightweight, numerically stable alternative to the traditional KKT approach for physics‑constrained regression, (ii) D‑optimal virtual‑point selection substantially enhances surrogate stability and predictive accuracy, and (iii) the two techniques are synergistic, delivering a surrogate modeling pipeline that is both computationally efficient and physically faithful. Limitations include the current focus on linear constraints; extending SULM to nonlinear or non‑convex constraints remains an open research direction, as does the development of more sophisticated candidate‑generation schemes for D‑optimal designs in very high‑dimensional spaces.

