Addressing the ground state of the deuteron by physics-informed neural networks
Machine learning techniques have proven effective in addressing the structure of atomic nuclei. Physics-Informed Neural Networks (PINNs) are a promising machine learning technique suitable for solving integro-differential problems such as the many-body Schrödinger problem. So far, there has been no demonstration of extracting nuclear eigenstates with such a method. Here, we tackle realistic nucleon-nucleon interactions in momentum space, including models with strong high-momentum correlations, and demonstrate highly accurate results for the deuteron. We further provide additional benchmarks in coordinate space. We introduce an expression for the variational energy entering the loss function that can be evaluated efficiently within the PINN framework. Results are in excellent agreement with established numerical methods, with a relative error between the binding energy predicted by the PINN and the numerical benchmark of the order of $10^{-6}$. Our approach paves the way for the application of PINNs to more complex atomic nuclei.
💡 Research Summary
The paper presents a novel application of Physics‑Informed Neural Networks (PINNs) to solve the ground‑state problem of the deuteron, the simplest bound nuclear system consisting of a proton and a neutron. While previous machine‑learning approaches such as Neural‑Network Quantum States (NQS) rely solely on the variational principle, PINNs embed the full differential equation, boundary conditions, and normalization constraints directly into the loss function, allowing the network to learn the solution without any labeled data.
The authors first outline the general PINN framework, defining a composite loss consisting of: (i) a PDE residual term enforcing the Schrödinger equation, (ii) boundary‑condition penalties, (iii) a normalization term that uses a logarithmic penalty to avoid the trivial zero‑wavefunction minimum, (iv) an auxiliary integral output ν(x) that enables mesh‑free evaluation of the normalization integral, and (v) a variational term that drives the network toward the lowest possible energy during early training epochs. Hyper‑parameters for the variational term (E₀, a, b, c) and a decaying weight d(t) are introduced to balance early‑stage guidance with later‑stage convergence.
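As a toy illustration of how such a composite loss can be assembled, the sketch below evaluates a PDE-residual term, boundary-condition penalties, and a logarithmic normalization penalty for a one-dimensional radial Schrödinger problem. The trial ansatz, the potential, the loss weights, and the decaying schedule are placeholders chosen for this example, not the paper's settings; a real PINN would obtain derivatives by automatic differentiation rather than finite differences.

```python
import numpy as np

# Toy sketch of the composite PINN loss described above, for a radial
# equation -u''(r) + V(r) u(r) = E u(r) (units with hbar^2/2mu = 1).
# The "network" is replaced by a one-parameter ansatz u(r; p) = r*exp(-p*r)
# so every loss term can be evaluated without an ML framework.

def loss_terms(p, E, r, w_pde=1.0, w_bc=1.0, w_norm=1.0):
    u = r * np.exp(-p * r)                 # trial wavefunction (placeholder)
    du = np.gradient(u, r)                 # finite-difference derivatives;
    d2u = np.gradient(du, r)               # a PINN would use autodiff here
    V = -2.0 * np.exp(-r)                  # illustrative attractive potential
    residual = -d2u + V * u - E * u        # Schrödinger residual
    L_pde = np.mean(residual**2)
    L_bc = u[0]**2 + u[-1]**2              # u(0) = 0 and decay at large r
    norm = np.sum(u**2) * (r[1] - r[0])    # normalization on a uniform grid
    L_norm = np.log(norm)**2               # log penalty: diverges for u -> 0,
                                           # so the trivial minimum is excluded
    return w_pde * L_pde + w_bc * L_bc + w_norm * L_norm

def decaying_weight(t, a=1.0, c=0.01):
    # hypothetical schedule for the variational-term weight d(t)
    return a * np.exp(-c * t)

r = np.linspace(1e-3, 20.0, 512)
total = loss_terms(p=0.8, E=-0.5, r=r) + decaying_weight(t=100) * (-0.5)
```

The key design point carried over from the paper is the logarithmic normalization penalty: a quadratic penalty on the norm would still admit the trivial zero wavefunction, while the log penalty blows up as the norm vanishes.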
Two computational settings are explored. In coordinate space the authors employ the simplified Minnesota potential, which couples only the S‑wave. A feed‑forward network with six hidden layers of 256 neurons each is trained on 4097 collocation points. The loss weights are tuned (see Table 1) so that the PDE term becomes dominant only after the network has already satisfied the physical constraints. The binding energy converges from a positive initial value to within 1.1% of the exact diagonalization result after roughly 5000 epochs, and the final relative error is on the order of 10⁻⁶.
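The quoted architecture amounts to a plain multilayer perceptron; the sketch below builds its forward pass in NumPy on a 4097-point collocation grid. The tanh activation, weight initialization, and grid range are illustrative assumptions, and a real PINN would use an autodiff framework so the output can be differentiated with respect to r.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feed-forward net with six hidden layers of 256 neurons each,
# mapping a radial coordinate r to a scalar wavefunction value.
sizes = [1] + [256] * 6 + [1]
params = [(rng.normal(0.0, np.sqrt(1.0 / m), (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)             # hidden layers with tanh (assumed)
    W, b = params[-1]
    return h @ W + b                       # linear output layer

r = np.linspace(0.0, 20.0, 4097).reshape(-1, 1)   # collocation grid
u = forward(r, params)                             # network wavefunction
```
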
In momentum space the study tackles two realistic nucleon‑nucleon interactions: a chiral effective‑field‑theory N⁴LO potential (χEFT) and the high‑precision CD‑Bonn model, both of which generate a small D‑wave admixture in the deuteron. The Schrödinger equation becomes an integral equation coupling S‑ and D‑wave components, so the network outputs two wave functions ψ_S(k) and ψ_D(k). Automatic differentiation is used to compute the Hamiltonian action on these outputs.
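A minimal sketch of how the coupled-channel Hamiltonian action H[ψ]_l(k) = k²/(2μ) ψ_l(k) + Σ_l′ ∫ dk′ k′² V_ll′(k,k′) ψ_l′(k′) can be evaluated on a quadrature mesh. The separable Gaussian potential, channel strengths, mesh, and mass below are placeholders, not the χEFT or CD‑Bonn interactions of the paper.

```python
import numpy as np

n = 64
x, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights
k = 0.5 * (x + 1.0) * 6.0                   # map nodes to [0, 6] (arb. units)
wk = 0.5 * 6.0 * w                          # mapped quadrature weights

mu = 0.5                                    # reduced mass (illustrative units)
g = {('S', 'S'): -1.0, ('S', 'D'): -0.1,    # channel coupling strengths
     ('D', 'S'): -0.1, ('D', 'D'): -0.05}   # (placeholder values)

def Vmat(l, lp):
    # separable Gaussian placeholder: V_ll'(k, k') = g_ll' exp(-k^2) exp(-k'^2)
    return g[(l, lp)] * np.outer(np.exp(-k**2), np.exp(-k**2))

def H_action(psi):
    # psi = {'S': array, 'D': array}; returns H[psi] channel by channel
    out = {}
    for l in ('S', 'D'):
        kinetic = k**2 / (2.0 * mu) * psi[l]
        potential = sum(Vmat(l, lp) @ (wk * k**2 * psi[lp])
                        for lp in ('S', 'D'))
        out[l] = kinetic + potential
    return out

psi = {'S': np.exp(-k**2), 'D': 0.02 * k**2 * np.exp(-k**2)}
Hpsi = H_action(psi)
```

The integral over k′ becomes a weighted matrix-vector product on the mesh; in the paper's mesh-free PINN setting the same action is instead evaluated through the auxiliary integral output and automatic differentiation.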