Differentiable Modeling for Low-Inertia Grids: Benchmarking PINNs, NODEs, and DP for Identification and Control of SMIB System
The transition toward low-inertia power systems demands modeling frameworks that provide not only accurate state predictions but also physically consistent sensitivities for control. While scientific machine learning offers powerful nonlinear modeling tools, the control-oriented implications of different differentiable paradigms remain insufficiently understood. This paper presents a comparative study of Physics-Informed Neural Networks (PINNs), Neural Ordinary Differential Equations (NODEs), and Differentiable Programming (DP) for modeling, identification, and control of power system dynamics. Using the Single Machine Infinite Bus (SMIB) system as a benchmark, we evaluate their performance in trajectory extrapolation, parameter estimation, and Linear Quadratic Regulator (LQR) synthesis. Our results highlight a fundamental trade-off between data-driven flexibility and physical structure. NODE exhibits superior extrapolation by capturing the underlying vector field, whereas PINN shows limited generalization due to its reliance on a time-dependent solution map. In the inverse problem of parameter identification, while both DP and PINN successfully recover the unknown parameters, DP achieves significantly faster convergence by enforcing governing equations as hard constraints. Most importantly, for control synthesis, the DP framework yields closed-loop stability comparable to the theoretical optimum. Furthermore, we demonstrate that NODE serves as a viable data-driven surrogate when governing equations are unavailable.
💡 Research Summary
The paper addresses the pressing need for modeling frameworks that can simultaneously deliver accurate state predictions and physically consistent sensitivities in low‑inertia power grids. Using the classical Single‑Machine‑Infinite‑Bus (SMIB) system as a benchmark, the authors compare three differentiable‑learning paradigms: Physics‑Informed Neural Networks (PINNs), Neural Ordinary Differential Equations (NODEs), and Differentiable Programming (DP).
Methodology
- PINNs approximate the time‑dependent solution map x(t) with a neural network and embed the swing‑equation residuals as soft constraints in the loss function. The data‑fit and physics‑residual terms are combined in a weighted loss, and the unknown parameters (inertia M and damping D) are learned jointly with the network weights.
- NODEs treat the vector field f(x,u) as an unknown function ˆf(x,u;θ). The network outputs the time derivative, which is integrated numerically to generate trajectories. Training minimizes the mismatch between simulated and measured trajectories. This approach is fully data‑driven and does not require explicit knowledge of the governing equations.
- DP wraps the entire ODE solver (e.g., Runge‑Kutta) as a differentiable layer. The physical equations are enforced as hard constraints; only the unknown parameters are optimized via back‑propagation through the solver. This yields exact Jacobians with respect to states and parameters, eliminating approximation errors inherent in PINNs and NODEs.
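As a minimal illustration of the PINN formulation above, the sketch below substitutes a small Fourier basis for the deep network (an assumption made for brevity) so that the time derivatives needed for the swing‑equation residuals are analytic. M and D are appended to the trainable parameters and the residuals enter as weighted soft penalties, mirroring the composite loss; all numerical values (Pm, Pe, basis size) are illustrative, not taken from the paper.

```python
import numpy as np

def basis(t, n_sin=3):
    # features phi_k(t) and their analytic time derivatives
    phi = np.stack([np.ones_like(t), t]
                   + [np.sin(k * t) for k in range(1, n_sin + 1)], axis=-1)
    dphi = np.stack([np.zeros_like(t), np.ones_like(t)]
                    + [k * np.cos(k * t) for k in range(1, n_sin + 1)], axis=-1)
    return phi, dphi

def pinn_loss(params, t, delta_meas, omega_meas, Pm=0.8, Pe=1.0, w_phys=1.0):
    phi, dphi = basis(t)
    m = phi.shape[1]
    a, b = params[:m], params[m:2 * m]   # coefficients for delta(t) and omega(t)
    M, D = params[-2], params[-1]        # unknown physical parameters, trained jointly
    delta, ddelta = phi @ a, dphi @ a
    omega, domega = phi @ b, dphi @ b
    data_fit = np.mean((delta - delta_meas) ** 2 + (omega - omega_meas) ** 2)
    # swing-equation residuals, enforced only softly through the loss:
    #   d(delta)/dt = omega,   M d(omega)/dt = Pm - Pe sin(delta) - D omega
    r1 = ddelta - omega
    r2 = M * domega - (Pm - Pe * np.sin(delta) - D * omega)
    return data_fit + w_phys * np.mean(r1 ** 2 + r2 ** 2)
```

Because the physics only appears as a penalty, nothing forces the residuals to vanish exactly; this is the soft‑constraint property the paper contrasts with DP.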
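The NODE idea can be sketched in the same spirit: a tiny MLP stands in for the learned vector field f̂(x; θ), and a fixed‑step RK4 loop replaces torchdiffeq's solvers (both simplifications for illustration; the gradient update itself is omitted). Training would minimize the trajectory‑mismatch loss below.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # stand-in for the learned vector field f_hat(x): R^2 -> R^2
    return np.tanh(x @ W1 + b1) @ W2 + b2

def rk4_rollout(x0, f, dt, steps):
    # integrate the (learned) vector field to generate a trajectory
    xs = [np.asarray(x0, float)]
    for _ in range(steps):
        x = xs[-1]
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        xs.append(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(xs)

def node_loss(theta, x0, x_meas, dt):
    W1, b1, W2, b2 = theta
    traj = rk4_rollout(x0, lambda x: mlp(x, W1, b1, W2, b2), dt, len(x_meas) - 1)
    return np.mean((traj - x_meas) ** 2)   # simulated-vs-measured mismatch
```

Because the loss depends only on measured trajectories, no governing equations are needed, which is exactly why NODE works as a surrogate when the physics are unknown.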
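For the DP paradigm, the swing equation is hard‑coded inside the rollout and only (M, D) are free. The paper obtains exact gradients by back‑propagating through the solver; the dependency‑free sketch below substitutes finite‑difference gradients, which keeps the structure visible even though it forgoes DP's exactness. Noise level, learning rate, and parameter values are illustrative assumptions.

```python
import numpy as np

def swing(x, M, D, Pm=0.8, Pe=1.0):
    # swing equation as a hard constraint: the model class IS the physics
    delta, omega = x
    return np.array([omega, (Pm - Pe * np.sin(delta) - D * omega) / M])

def rollout(M, D, x0, dt, steps):
    xs = [np.asarray(x0, float)]
    for _ in range(steps):
        x = xs[-1]
        k1 = swing(x, M, D); k2 = swing(x + 0.5 * dt * k1, M, D)
        k3 = swing(x + 0.5 * dt * k2, M, D); k4 = swing(x + dt * k3, M, D)
        xs.append(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(xs)

def fit_params(x_meas, x0, dt, M0=2.0, D0=0.5, lr=3.0, iters=300, eps=1e-5):
    # gradient descent on the trajectory loss over (M, D) only;
    # an autodiff framework would supply these gradients exactly
    M, D = M0, D0
    loss = lambda M, D: np.mean((rollout(M, D, x0, dt, len(x_meas) - 1) - x_meas) ** 2)
    for _ in range(iters):
        gM = (loss(M + eps, D) - loss(M - eps, D)) / (2 * eps)
        gD = (loss(M, D + eps) - loss(M, D - eps)) / (2 * eps)
        M, D = M - lr * gM, D - lr * gD
    return M, D
```

The two‑dimensional search space over (M, D), versus thousands of network weights, is the "reduced search space" behind DP's fast parameter convergence.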
Experiments
All models are implemented in PyTorch with the torchdiffeq library, trained on noisy SMIB data generated with reduced inertia to emulate low‑inertia conditions. The evaluation consists of three parts:
- Trajectory Extrapolation – NODE achieves the best long‑term extrapolation because it learns the underlying vector field. DP performs comparably but suffers minor numerical drift; PINN’s extrapolation degrades sharply as it only learns a time‑dependent map.
- Parameter Identification – DP converges fastest and recovers M and D with high accuracy, thanks to the reduced search space imposed by hard physics constraints. PINN also recovers the parameters but requires more epochs and careful weighting of the physics loss. NODE cannot directly identify parameters without an auxiliary identification step.
- Control Synthesis (LQR) – Linearizing the identified models yields A and B matrices for continuous‑time LQR design. The DP‑based controller produces feedback gains K that match the theoretical optimum, delivering closed‑loop damping and stability margins indistinguishable from the true model. NODE, used as a surrogate, yields acceptable but sub‑optimal gains, while PINN’s approximate Jacobians lead to reduced stability margins.
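A minimal sketch of the control‑synthesis step: linearizing the swing equation at the equilibrium delta* = arcsin(Pm/Pe) gives A and B, and the continuous‑time Riccati equation is solved via the Hamiltonian eigenvector method so that only NumPy is needed. The weights Q, R and all physical values are illustrative assumptions, not the paper's numbers.

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    # continuous-time LQR via the Hamiltonian matrix: its stable invariant
    # subspace [X1; X2] gives the Riccati solution P = X2 X1^{-1}
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    n = A.shape[0]
    stable = V[:, w.real < 0]            # eigenvectors of stable eigenvalues
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return Rinv @ B.T @ P                # K such that u = -K x is optimal

M, D, Pm, Pe = 1.5, 0.3, 0.8, 1.0
delta_eq = np.arcsin(Pm / Pe)
A = np.array([[0.0, 1.0],
              [-Pe * np.cos(delta_eq) / M, -D / M]])   # Jacobian of the swing eq.
B = np.array([[0.0], [1.0 / M]])                       # control enters as a torque
K = lqr_gain(A, B, np.eye(2), np.array([[1.0]]))
```

This is where the quality of the learned Jacobians matters: the gain K is only as good as the A and B matrices extracted from each model, which is why DP's exact sensitivities translate into near‑optimal closed‑loop behavior.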
Key Insights
- Trade‑off: NODE offers flexibility and superior extrapolation when the physics are unknown, but lacks interpretability and direct parameter access.
- Physical Consistency: DP’s hard‑constraint formulation guarantees that learned sensitivities are physically meaningful, which is crucial for control‑oriented tasks such as LQR.
- Training Efficiency: Enforcing physics as hard constraints dramatically speeds up parameter convergence compared with soft‑constraint PINNs.
- Robustness to Noise: DP remains robust under realistic measurement noise and partial observability, whereas PINN’s performance is more sensitive to loss weighting.
Conclusions and Future Work
The study concludes that for low‑inertia grid applications where accurate Jacobians and reliable control are paramount, Differentiable Programming is the most suitable framework. NODE serves as a powerful data‑driven surrogate when governing equations are unavailable, while PINNs occupy a middle ground but suffer from limited generalization. Future directions include hybrid schemes that combine DP’s physical fidelity with NODE’s flexibility, scaling the approach to multi‑machine networks, and testing robustness in streaming IoT environments.