Fixed-energy inverse scattering with radial basis function neural networks and its application to neutron-alpha interactions


This paper proposes a data-driven method to solve the fixed-energy inverse scattering problem for radially symmetric potentials using radial basis function (RBF) neural networks in an open-loop control system. The method estimates the scattering potentials in the Fourier domain by training an appropriate number of RBF networks, while the control step is carried out in coordinate space by using the measured phase shifts as control parameters. The system is trained on both finite and singular input potentials and is capable of modeling a great variety of scattering events. The method is applied to neutron-alpha scattering at 10 MeV incident neutron energy, where the underlying central part of the potential is estimated by using the measured l = 0, 1, 2 phase shifts as inputs. The obtained potential is physically sensible, and the recalculated phase shifts are within a few percent relative error.


💡 Research Summary

The paper introduces a novel data‑driven framework for solving the fixed‑energy inverse scattering problem for spherically symmetric potentials, employing radial basis function (RBF) neural networks within an open‑loop control architecture. Traditional inverse‑scattering techniques such as the Gelfand‑Levitan‑Marchenko (fixed‑angular‑momentum) or Newton‑Sabatier (fixed‑energy) methods rely on extensive data or suffer from instability when only a few observables are available. In low‑energy nuclear scattering, the experimentally accessible quantities are typically limited to phase shifts, total or differential cross sections, and occasionally polarization observables. Consequently, a robust method that can work with a minimal set of inputs is highly desirable.

The authors propose a two‑stage procedure. In the first stage, a collection of RBF neural networks is trained to map measured phase shifts (the only inputs) to the scattering potential expressed in momentum (k‑) space, V₀(k). Training data are generated from analytically solvable model potentials, both regular (Gaussian, Woods‑Saxon) and singular (δ‑function, step‑like) forms, ensuring that the networks learn a wide variety of spectral features. The RBF architecture consists of a single hidden layer of Gaussian kernels whose centers, widths, and linear output weights are optimized. By exploiting the universal approximation property of RBF networks, the authors demonstrate that even with a modest number of hidden units the mapping from a few phase‑shift values to a continuous function V₀(k) can be learned with high fidelity.
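The single-hidden-layer architecture described above can be sketched as follows. This is a minimal illustration of a Gaussian-kernel RBF network, not the paper's implementation: the class name, the placeholder parameters, and the idea of one network output per k-grid point are assumptions for the example; the paper's centers, widths, and weights are obtained by training.

```python
import numpy as np

# Minimal sketch of a single-hidden-layer Gaussian RBF network.
# Input: a vector of measured phase shifts; output: one sample of V0(k).
# Centers, widths, and weights here are hypothetical placeholders that
# would normally be fixed by supervised training.
class RBFNetwork:
    def __init__(self, centers, widths, weights):
        self.centers = np.asarray(centers, float)  # kernel centers in input space
        self.widths = np.asarray(widths, float)    # one Gaussian width per kernel
        self.weights = np.asarray(weights, float)  # linear output weights

    def __call__(self, x):
        # Squared distance from the input to each kernel center
        d2 = np.sum((self.centers - np.asarray(x, float)) ** 2, axis=1)
        # Gaussian activations of the hidden layer
        phi = np.exp(-d2 / (2.0 * self.widths ** 2))
        # Linear read-out: one estimated value of V0 at a fixed k
        return float(self.weights @ phi)
```

In this picture, reconstructing the full momentum-space potential amounts to evaluating one such network per point of a k-grid, each fed the same few phase-shift values.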

In the second stage, the estimated V₀(k) is inverse‑Fourier transformed to obtain a coordinate‑space potential V₀(r). Because the Fourier transform smooths high‑frequency noise, the resulting V₀(r) is already physically reasonable. Nevertheless, to enforce agreement with the measured phase shifts, the authors employ a Simulated Annealing (SA) optimizer that perturbs the parameters of V₀(r) (depth, range, surface diffuseness) while minimizing a cost function defined as the summed squared differences between calculated and experimental phase shifts. The SA algorithm provides a global search capability, avoiding local minima that could trap gradient‑based methods.
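The SA correction step described above can be sketched as follows, under stated assumptions: the `phase_shifts` function below is a stand-in linear model, not the paper's Schrödinger-equation solver, and the perturbation scale, cooling schedule, and parameter vector (depth, range, diffuseness) are illustrative choices rather than the authors' settings.

```python
import numpy as np

# Stand-in forward model: maps potential parameters p = (depth, range,
# diffuseness) to three phase shifts.  NOT the paper's Schroedinger solver.
def phase_shifts(p):
    p = np.asarray(p, float)
    return np.array([p[0] * 0.1, p[1] * 0.05, p[2] * 0.02])

def anneal(p0, target, steps=2000, t0=1.0, seed=0):
    """Minimize the summed squared phase-shift residuals by simulated annealing."""
    rng = np.random.default_rng(seed)
    cost = lambda q: np.sum((phase_shifts(q) - target) ** 2)
    p = np.asarray(p0, float)
    c = cost(p)
    best, cbest = p.copy(), c
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9          # linear cooling schedule
        q = p + rng.normal(scale=0.1, size=p.size)  # random perturbation of parameters
        cq = cost(q)
        # Metropolis rule: always accept improvements, occasionally accept worse moves
        if cq < c or rng.random() < np.exp(-(cq - c) / t):
            p, c = q, cq
            if c < cbest:
                best, cbest = p.copy(), c
    return best
```

The Metropolis acceptance of occasionally worse moves is what gives SA its global-search character: early on (high temperature) the walker can escape local minima of the residual surface, while the cooling schedule gradually freezes it into a good basin.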

Mathematically, the paper revisits the radial Schrödinger equation, the definition of phase shifts via the asymptotic form of the wave function, and the variable‑phase approach (VPA) which yields a first‑order nonlinear differential equation linking the potential to the phase function. Instead of solving the VPA equation directly—an operation that becomes numerically stiff for higher angular momenta—the authors let the neural network learn the implicit nonlinear relationship, thereby sidestepping the most computationally demanding step.
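For concreteness, the s-wave (l = 0) form of Calogero's variable-phase equation, dδ/dr = −(1/k) U(r) sin²(kr + δ(r)) with U = 2mV/ℏ², can be integrated numerically as sketched below. The square-well potential and the RK4 step size are toy choices for illustration, not the neutron-alpha interaction of the paper; the phase shift is read off as δ(r) once r exceeds the potential's range.

```python
import numpy as np

# Fixed-step RK4 integration of the l = 0 variable-phase equation
#   d(delta)/dr = -(1/k) * U(r) * sin^2(k r + delta(r)),  U = 2 m V / hbar^2,
# with delta(0) = 0; delta(r_max) approximates the phase shift.
def vpa_phase_shift(U, k, r_max=5.0, n=1000):
    h = r_max / n
    delta = 0.0
    f = lambda r, d: -(U(r) / k) * np.sin(k * r + d) ** 2
    for i in range(n):
        r = i * h
        k1 = f(r, delta)
        k2 = f(r + h / 2, delta + h * k1 / 2)
        k3 = f(r + h / 2, delta + h * k2 / 2)
        k4 = f(r + h, delta + h * k3)
        delta += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return delta

# Toy attractive square well: U0 = -1 fm^-2 inside r < 1 fm, zero outside
well = lambda r: -1.0 if r < 1.0 else 0.0
```

For this well at k = 1 fm⁻¹ the integration reproduces the analytic square-well result δ = arctan[(k/κ) tan(κa)] − ka ≈ 0.35 rad (κ² = k² − U₀); an attractive potential drives δ positive, since the integrand is then nonnegative everywhere. It is precisely the stiffness of this equation at higher l that the neural-network surrogate is meant to avoid.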

The methodology is validated in two ways. First, synthetic tests using known potentials confirm that the RBF network can reconstruct V₀(k) with relative errors below 1 % for regular potentials and below 3 % for singular ones. Second, the approach is applied to real experimental data: neutron–α elastic scattering at an incident neutron energy of 10 MeV, for which phase shifts for l = 0, 1, 2 are available. After the SA‑driven correction, the resulting central potential exhibits a depth of roughly –30 MeV and a range around 2 fm, consistent with established phenomenological models. Re‑computed phase shifts from this potential differ from the measured values by an average of 2.4 % (maximum deviation ≈ 3 %), which is comparable to or slightly better than previous inverse‑scattering attempts using multilayer perceptrons.

Key strengths of the work include: (i) the ability to work with a minimal set of observables, (ii) the use of Fourier‑space representation to improve numerical stability, (iii) the combination of supervised RBF training with a global optimization step that respects physical constraints (positivity, smoothness, boundedness), and (iv) a systematic exploration of hyper‑parameters (number of hidden units, kernel widths, learning rates, SA temperature schedule). Limitations are also acknowledged: the current implementation is restricted to spherically symmetric, elastic channels; extension to coupled‑channel, spin‑orbit, or non‑elastic processes would require additional input data and more complex network architectures. Moreover, the computational cost grows with the number of hidden units, which may become prohibitive for very large training sets or real‑time applications.

In conclusion, the paper demonstrates that RBF neural networks, when embedded in a carefully designed open‑loop control scheme, can serve as an effective surrogate for the highly nonlinear mapping between phase shifts and interaction potentials at fixed energy. The successful application to neutron–α scattering suggests that the method could become a valuable tool for rapid potential reconstruction in low‑energy nuclear physics, and possibly in other fields where inverse scattering plays a role (e.g., acoustic or electromagnetic imaging). Future work is proposed to address non‑spherical potentials and multi‑channel scattering, and to integrate more advanced machine‑learning techniques such as physics‑informed neural networks or Bayesian uncertainty quantification.

