On the Role of Consistency Between Physics and Data in Physics-Informed Neural Networks
Physics-informed neural networks (PINNs) have gained significant attention as a surrogate modeling strategy for partial differential equations (PDEs), particularly in regimes where labeled data are scarce and physical constraints can be leveraged to regularize the learning process. In practice, however, PINNs are frequently trained using experimental or numerical data that are not fully consistent with the governing equations due to measurement noise, discretization errors, or modeling assumptions. The implications of such data-to-PDE inconsistencies on the accuracy and convergence of PINNs remain insufficiently understood. In this work, we systematically analyze how data inconsistency fundamentally limits the attainable accuracy of PINNs. We introduce the concept of a consistency barrier, defined as an intrinsic lower bound on the error that arises from mismatches between the fidelity of the data and the exact enforcement of the PDE residual. To isolate and quantify this effect, we consider the 1D viscous Burgers equation with a manufactured analytical solution, which enables full control over data fidelity and residual errors. PINNs are trained using datasets of progressively increasing numerical accuracy, as well as perfectly consistent analytical data. Results show that while the inclusion of the PDE residual allows PINNs to partially compensate for low-fidelity data and recover the dominant physical structure, the training process ultimately saturates at an error level dictated by the data inconsistency. When high-fidelity numerical data are employed, PINN solutions become indistinguishable from those trained on analytical data, indicating that the consistency barrier is effectively removed. These findings clarify the interplay between data quality and physics enforcement in PINNs, providing practical guidance for the construction and interpretation of physics-informed surrogate models.
💡 Research Summary
The paper investigates a fundamental limitation of Physics‑Informed Neural Networks (PINNs) that arises when the training data are not perfectly consistent with the governing partial differential equation (PDE). The authors introduce the notion of a “consistency barrier,” an intrinsic lower bound on the achievable error that is imposed by mismatches between the fidelity of the data and the exact enforcement of the PDE residual in the loss function.
To formalize this concept, they denote the true analytical solution of the PDE as u(x) and the possibly noisy or discretized data as ũ(x). The discrepancy ε(x) = ũ(x) − u(x) propagates into the data term of the effective loss:

L_eff(θ) = L_PDE(θ) + L_D(θ), with L_D(θ) = (1/N_data) ∑_j (u_θ(x_j) − ũ(x_j))² = (1/N_data) ∑_j (u_θ(x_j) − u(x_j) − ε(x_j))²,

so that even a network that matches the true solution exactly (u_θ = u) retains a residual data loss of (1/N_data) ∑_j ε(x_j)². This irreducible floor is what the paper terms the consistency barrier.
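The loss-floor argument above can be sketched numerically. The snippet below is a minimal illustration (not the authors' code); the manufactured solution −sin(πx) and the noise level are arbitrary assumptions chosen only to show that a perfect predictor still incurs a data loss equal to the mean squared inconsistency.

```python
# Illustrative sketch of the consistency barrier: with inconsistent data
# ũ = u + ε, the data loss of even a perfect model saturates at mean(ε²).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 101)

u_exact = -np.sin(np.pi * x)                # assumed manufactured solution sample
eps = 1e-2 * rng.standard_normal(x.shape)   # data inconsistency ε(x_j)
u_data = u_exact + eps                      # noisy/discretized training data ũ(x_j)

def data_loss(u_pred, u_obs):
    """Mean-squared data misfit L_D = (1/N) Σ_j (u_pred(x_j) − ũ(x_j))²."""
    return np.mean((u_pred - u_obs) ** 2)

# A network reproducing the true solution exactly (u_θ = u) still pays
# the floor mean(ε²) — the consistency barrier on the data term.
floor = data_loss(u_exact, u_data)
print(floor, np.mean(eps ** 2))  # the two values coincide
```

Lowering the noise amplitude (the 1e-2 factor) lowers the floor, mirroring the paper's observation that higher-fidelity data effectively remove the barrier.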