Conditional Denoising Model as a Physical Surrogate Model
Surrogate modeling for complex physical systems typically faces a trade-off between data-fitting accuracy and physical consistency. Physics-consistent approaches either treat physical laws as soft constraints within the loss function, a strategy that often fails to guarantee strict adherence to the governing equations, or rely on post-processing corrections that do not intrinsically learn the underlying solution geometry. To address these limitations, we introduce the Conditional Denoising Model (CDM), a generative model designed to learn the geometry of the physical manifold itself. By training the network to restore clean states from noisy ones, the model learns a vector field that points continuously toward the valid solution subspace. We introduce a time-independent formulation that turns inference into a deterministic fixed-point iteration, effectively projecting noisy approximations onto the equilibrium manifold. Validated on a low-temperature plasma physics and chemistry benchmark, the CDM achieves higher parameter and data efficiency than physics-consistent baselines. Crucially, the denoising objective acts as a powerful implicit regularizer: although the model never sees the governing equations during training, it adheres to physical constraints more strictly than baselines trained with explicit physics losses.
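The time-independent fixed-point inference described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the denoiser `g`, the iteration cap, and the stopping tolerance are all assumed for the example, and the toy `g` used below simply contracts toward a known target to stand in for a trained network.

```python
import numpy as np

def fixed_point_denoise(g, y0, x, max_iter=100, tol=1e-8):
    """Deterministic fixed-point inference with a time-independent
    denoiser g(y, x): repeatedly map the current estimate back toward
    the solution manifold until the update stalls.

    g, max_iter, and tol are illustrative assumptions, not the
    paper's actual network or hyperparameters.
    """
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        y_next = g(y, x)
        # Stop once the iteration has effectively reached a fixed point.
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y

# Toy stand-in for a trained denoiser: contract halfway toward the
# (hypothetical) manifold point x on every call.
def toy_denoiser(y, x):
    return y + 0.5 * (x - y)
```

With this contraction, the iteration converges to `x` from any starting point, mirroring the idea of projecting a noisy approximation onto the equilibrium manifold.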
💡 Research Summary
The paper tackles the longstanding challenge of building fast, accurate surrogate models for complex physical systems when the governing equations are only available through a black-box steady-state solver. Traditional physics-informed neural networks (PINNs) and loss-based physics regularization either require direct access to the residual function $F(x, y; \theta)$ or suffer from conflicting gradients that hinder convergence and physical fidelity. To bypass these limitations, the authors propose the Conditional Denoising Model (CDM), a generative framework that learns the geometry of the solution manifold $\mathcal{M}$ by training a neural network to denoise corrupted physical states.
Core Idea
Given a clean physical state $y \in \mathcal{M}$ and a conditioning vector $x$ (e.g., experimental parameters), the method adds isotropic Gaussian noise of variance $\sigma^2(t)$ to obtain $\tilde y$. A network $g_\phi(\tilde y, x, t)$ is trained to reconstruct the original $y$ across a continuous range of noise levels indexed by $t$.
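The training objective above can be sketched as a single Monte-Carlo denoising step. This is a hedged illustration: the signature of `g`, the use of a plain mean-squared-error reconstruction loss, and the way the noise level is passed to the network are assumptions, not the paper's exact formulation.

```python
import numpy as np

def denoising_loss(g, y, x, sigma, rng):
    """One Monte-Carlo sample of the conditional denoising objective:
    corrupt the clean state y with isotropic Gaussian noise of
    standard deviation sigma, ask the network g to reconstruct y,
    and score the reconstruction with MSE.

    The call convention g(y_tilde, x, sigma) and the MSE loss are
    illustrative assumptions.
    """
    noise = rng.normal(scale=sigma, size=np.shape(y))
    y_tilde = y + noise              # corrupted state (tilde y)
    y_hat = g(y_tilde, x, sigma)     # network's reconstruction of y
    return float(np.mean((y_hat - y) ** 2))
```

An oracle denoiser that always returns the clean state achieves zero loss, while the identity map leaves the injected noise in place and pays roughly $\sigma^2$ on average; training drives $g_\phi$ toward the former behavior.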