A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder
Research Summary
This paper addresses the well-known limitation of traditional linear-subspace reduced-order models (LS-ROMs) when applied to advection-dominated or sharp-gradient phenomena, which typically exhibit a large Kolmogorov n-width and therefore cannot be captured accurately with a low-dimensional linear basis. To overcome this obstacle, the authors propose a nonlinear-manifold reduced-order model (NM-ROM) that leverages a shallow masked autoencoder to learn a compact latent representation of the full-order model (FOM) solution manifold. The autoencoder is deliberately shallow and incorporates a masking operation that selectively disables input dimensions, thereby reducing the number of trainable parameters and facilitating an efficient hyper-reduction (HR) stage.
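To make the architecture concrete, here is a minimal NumPy sketch of a shallow masked autoencoder. This is not the authors' implementation: the layer sizes, the banded mask pattern, and the swish activation are illustrative assumptions. The key point it shows is that masking the decoder's output layer leaves each full-order output node connected to only a few hidden units, which is what later makes sampling individual outputs cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
n_full, n_latent, n_hidden = 64, 4, 32  # illustrative sizes

# Shallow encoder: full state -> latent, one hidden layer.
W_e1 = rng.standard_normal((n_hidden, n_full)) * 0.1
W_e2 = rng.standard_normal((n_latent, n_hidden)) * 0.1

# Shallow decoder: latent -> full state, with a sparsity mask on the output layer.
W_d1 = rng.standard_normal((n_hidden, n_latent)) * 0.1
W_d2 = rng.standard_normal((n_full, n_hidden)) * 0.1

# Hypothetical banded mask: each output node sees at most 3 hidden nodes,
# so evaluating one sampled output touches only a handful of weights.
mask = np.zeros((n_full, n_hidden))
for i in range(n_full):
    lo = (i * n_hidden) // n_full
    mask[i, lo : lo + 3] = 1.0

def swish(x):
    return x / (1.0 + np.exp(-x))

def encode(u):
    return W_e2 @ swish(W_e1 @ u)

def decode(z):
    # The element-wise mask zeroes out the disabled connections.
    return (mask * W_d2) @ swish(W_d1 @ z)

u = rng.standard_normal(n_full)
z = encode(u)        # latent coordinates, shape (n_latent,)
u_rec = decode(z)    # reconstructed full state, shape (n_full,)
```

In training one would fit the weights so that `decode(encode(u)) ≈ u` over FOM snapshots; the sketch only exercises the masked forward pass.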
The NM-ROM framework integrates physics-informed neural networks (PINNs) with conventional numerical solvers. Instead of treating neural-network weights as the sole unknowns in a global residual minimization (as in classic PINN approaches), the proposed method retains the governing equations' discretization (e.g., backward Euler, Newton-Raphson) and uses the trained decoder as a nonlinear trial manifold. This "physics-informed" loss enforces the governing PDE/ODE residuals while allowing the latent coordinates to evolve according to reduced-order dynamics.
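The idea of keeping the time discretization and solving only for latent coordinates can be sketched as follows. This is a toy, not the paper's solver: a hypothetical decoder `g` stands in for the trained masked decoder, the dynamics are a simple linear diffusion-like system, and one backward-Euler step is taken by Gauss-Newton minimization of the full-order residual over the latent variable (an LSPG-style projection).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, dt = 20, 2, 0.01  # full dim, latent dim, time step (all illustrative)

# Hypothetical toy decoder g(z) standing in for the trained masked decoder.
Wd = rng.standard_normal((n, k))
def g(z):
    return np.tanh(Wd @ z)

# FOM discretization we keep: backward-Euler residual of du/dt = -A u.
A = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
def residual(u, u_prev):
    return u - u_prev + dt * (A @ u)

def dg_dz(z, eps=1e-6):
    # Finite-difference Jacobian of the decoder w.r.t. latent coordinates.
    J = np.zeros((n, k))
    base = g(z)
    for j in range(k):
        zp = z.copy()
        zp[j] += eps
        J[:, j] = (g(zp) - base) / eps
    return J

# One time step: Gauss-Newton on  min_z || residual(g(z), u_prev) ||_2.
z = rng.standard_normal(k) * 0.1
u_prev = g(z)
r0 = np.linalg.norm(residual(g(z), u_prev))  # initial residual norm
for _ in range(20):
    J_g = dg_dz(z)
    J_r = J_g + dt * (A @ J_g)               # chain rule: d residual / d z
    r = residual(g(z), u_prev)
    dz = np.linalg.lstsq(J_r, -r, rcond=None)[0]
    z = z + dz
    if np.linalg.norm(dz) < 1e-10:
        break
```

The unknowns per time step are the `k` latent coordinates rather than the `n` full-order values; the FOM residual and its discretization are reused unchanged.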
A central contribution is the development of a hyper-reduction technique tailored to the nonlinear-manifold setting. By extending ideas from the Discrete Empirical Interpolation Method (DEIM), the authors select a small set of spatial points at which the nonlinear terms are evaluated. The masked autoencoder's structure makes this sampling inexpensive, and the resulting reduced nonlinear operators are assembled only on the selected points, dramatically lowering the per-time-step computational cost.
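A minimal sketch of the DEIM-style point selection that this kind of HR scheme builds on: greedily pick one interpolation point per basis vector, then reconstruct a nonlinear term from only those sampled entries. The random orthonormal basis here is a stand-in for a basis of nonlinear-term snapshots; the paper's scheme adapts this idea to the masked decoder.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 5  # full dimension, number of basis vectors (illustrative)

# Stand-in basis U for the nonlinear term (orthonormal columns).
U, _ = np.linalg.qr(rng.standard_normal((n, m)))

# Greedy DEIM point selection: each new point is where the current
# interpolation error for the next basis vector is largest.
p = [int(np.argmax(np.abs(U[:, 0])))]
for j in range(1, m):
    c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
    r = U[:, j] - U[:, :j] @ c
    p.append(int(np.argmax(np.abs(r))))

# Reconstruct a vector lying in span(U) from only len(p) sampled entries.
f = U @ rng.standard_normal(m)
f_hat = U @ np.linalg.solve(U[p, :], f[p])  # exact when f is in span(U)
```

Only `m` entries of `f` are ever evaluated, which is the source of the per-time-step savings when the nonlinear term is expensive.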
The paper systematically derives two reduced-order formulations: NM-Galerkin and NM-Least-Squares Petrov-Galerkin (NM-LSPG), each equipped with hyper-reduction (NM-Galerkin-HR, NM-LSPG-HR). It also provides a rigorous a posteriori error bound that accounts for three sources of approximation error: manifold truncation, hyper-reduction, and temporal discretization. This bound offers theoretical insight into how the different components of the algorithm contribute to the overall accuracy.
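Schematically, and as a sketch of the bound's structure rather than the paper's exact statement, a decomposition into those three sources follows from the triangle inequality. Writing $\mathbf{u}^n$ for the exact solution, $\tilde{\mathbf{u}}^n$ for the time-discrete FOM solution, $g$ for the decoder, and $\mathbf{z}^n$, $\hat{\mathbf{z}}^n$ for the latent solutions without and with hyper-reduction at step $n$:

```latex
\|\mathbf{u}^n - g(\hat{\mathbf{z}}^n)\|
\;\le\;
\underbrace{\|\mathbf{u}^n - \tilde{\mathbf{u}}^n\|}_{\text{temporal discretization}}
\;+\;
\underbrace{\|\tilde{\mathbf{u}}^n - g(\mathbf{z}^n)\|}_{\text{manifold truncation}}
\;+\;
\underbrace{\|g(\mathbf{z}^n) - g(\hat{\mathbf{z}}^n)\|}_{\text{hyper-reduction}}
```

The paper's bound then controls each term; the value of the decomposition is that it shows which knob (time step, latent dimension, or number of sample points) governs which contribution.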
Numerical experiments focus on the one-dimensional and two-dimensional Burgers equations, canonical test cases for advection-dominated dynamics. Compared with LS-Galerkin and LS-LSPG of the same reduced dimension, the NM-ROM variants achieve substantially lower relative errors. Moreover, the hyper-reduced NM-ROMs deliver significant speed-ups: a factor of 2.6 for the 1-D case and 11.7 for the 2-D case. These results demonstrate that the nonlinear manifold can capture the solution space with far fewer degrees of freedom than a linear subspace, and that the proposed HR scheme successfully mitigates the cost of evaluating nonlinear terms.
In summary, the authors present a fast, accurate, and physically consistent reduced-order modeling strategy that combines a shallow masked autoencoder, physics-informed training, and tailored hyper-reduction. The approach extends the applicability of ROMs to problems where traditional linear-subspace methods fail, and it opens avenues for real-time simulation, optimization, and control of complex nonlinear dynamical systems.