ELM-FBPINNs: An Efficient Multilevel Random Feature Method


Domain-decomposed variants of physics-informed neural networks (PINNs), such as finite basis PINNs (FBPINNs), mitigate issues of standard PINNs, including slow convergence and spectral bias, through localisation, but still rely on iterative nonlinear optimisation within each subdomain. In this work, we propose a hybrid approach that combines multilevel domain decomposition and partition-of-unity constructions with random feature models, yielding a method referred to as multilevel ELM-FBPINN. By replacing trainable subdomain networks with extreme learning machines, the resulting formulation eliminates backpropagation entirely and reduces training to a structured linear least-squares problem. We provide a systematic numerical study comparing ELM-FBPINNs and multilevel ELM-FBPINNs with standard PINNs and FBPINNs on representative benchmark problems, demonstrating that ELM-FBPINNs and multilevel ELM-FBPINNs achieve competitive accuracy while significantly accelerating convergence and improving robustness with respect to architectural and optimisation parameters. Through ablation studies, we further clarify the distinct roles of domain decomposition and random feature enrichment in controlling expressivity, conditioning, and scalability.


💡 Research Summary

The paper introduces a novel physics‑informed neural network framework called ELM‑FBPINN, which merges the domain‑decomposition and partition‑of‑unity (POU) ideas of Finite‑Basis PINNs (FBPINNs) with the random‑feature, linear‑training paradigm of Extreme Learning Machines (ELMs). In a standard PINN, the solution of a PDE is represented by a neural network whose parameters are optimized through gradient‑based methods, a process that is often slow, hyper‑parameter‑sensitive, and prone to spectral bias toward low‑frequency components. FBPINNs alleviate spectral bias by splitting the computational domain into overlapping subdomains, each equipped with a small neural network multiplied by a compactly supported window function; however, they still require back‑propagation for each subnetwork, retaining the same optimisation challenges.
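The overlapping-window construction behind FBPINNs can be illustrated directly. The sketch below builds compactly supported cosine-bump windows over three overlapping 1D subdomains and normalises them so they sum to one, forming a partition of unity; the cosine shape, subdomain layout, and overlap widths are illustrative choices, not the paper's specific configuration.

```python
import numpy as np

def cosine_window(x, lo, hi):
    """Compactly supported smooth bump on (lo, hi), zero outside."""
    t = (x - lo) / (hi - lo)
    return np.where((t > 0) & (t < 1), np.cos(np.pi * (t - 0.5)) ** 2, 0.0)

# Three overlapping subdomains covering [0, 1] (illustrative layout)
subdomains = [(-0.2, 0.5), (0.2, 0.8), (0.5, 1.2)]
x = np.linspace(0.0, 1.0, 101)

raw = np.stack([cosine_window(x, lo, hi) for lo, hi in subdomains])
pou = raw / raw.sum(axis=0)  # normalise: windows now sum to 1 everywhere
```

Each subdomain's local model is multiplied by its window, so it only contributes inside its own region, and the normalisation guarantees the local solutions blend into a globally smooth approximation.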

ELM‑FBPINN replaces each trainable subnetwork with an ELM: the hidden‑layer weights are drawn randomly once and frozen, while only the output weights are learned. Because the governing PDE operators and boundary operators are assumed linear, the physics‑informed loss becomes a quadratic form in the output‑weight vector, reducing the training problem to a linear least‑squares system. The global solution is expressed as a sum over levels and subdomains:

\[
u(x) \;=\; \sum_{l=1}^{L} \sum_{j=1}^{J_l} \omega_j^{(l)}(x)\, u_j^{(l)}(x),
\qquad
u_j^{(l)}(x) \;=\; \sum_{k=1}^{K} c_{jk}^{(l)}\, \sigma\!\big(w_{jk}^{(l)} \cdot x + b_{jk}^{(l)}\big),
\]

where \(L\) is the number of levels, \(J_l\) the number of subdomains on level \(l\), \(\omega_j^{(l)}\) the compactly supported POU window functions, and \(\sigma\) a fixed activation whose parameters \(w_{jk}^{(l)}, b_{jk}^{(l)}\) are drawn randomly and frozen. Only the output weights \(c_{jk}^{(l)}\) are determined, via the linear least-squares solve.
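The least-squares reduction can be made concrete on a single-level, single-subdomain ELM for the 1D Poisson problem \(-u'' = f\) on \([0,1]\) with homogeneous Dirichlet boundary conditions. In the sketch below, tanh hidden weights are sampled once and frozen, the PDE and (penalised) boundary residuals are stacked into one linear system, and the output weights come from a single `lstsq` call. This is a minimal illustration, not the paper's implementation; the feature count, weight ranges, and boundary penalty are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 100                       # number of random features
w = rng.uniform(-10, 10, K)   # frozen hidden weights
b = rng.uniform(-10, 10, K)   # frozen hidden biases

def features(x):
    """Random tanh features and their second derivatives in x."""
    s = np.tanh(np.outer(x, w) + b)
    phi = s
    d2phi = -2.0 * s * (1.0 - s**2) * w**2   # (tanh(wx+b))'' = -2 s (1-s^2) w^2
    return phi, d2phi

# Collocation points for -u'' = f with u(0) = u(1) = 0;
# f chosen so the exact solution is u(x) = sin(pi x)
x = np.linspace(0.0, 1.0, 200)
f = np.pi**2 * np.sin(np.pi * x)

phi, d2phi = features(x)
phi_bdry, _ = features(np.array([0.0, 1.0]))

beta = 100.0  # boundary penalty weight (illustrative)
A = np.vstack([-d2phi, beta * phi_bdry])       # PDE rows + boundary rows
rhs = np.concatenate([f, [0.0, 0.0]])

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)    # "training" = one linear solve
u = phi @ c
max_err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Because no hidden parameter is updated, there is no backpropagation and no optimiser schedule; in the multilevel ELM-FBPINN, one such block of columns appears per subdomain and level, windowed by the POU functions, and all output weights are solved for jointly in the same structured least-squares system.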

