Progressive multi-fidelity learning with neural networks for physical system predictions

Notice: This research summary and analysis were generated automatically using AI technology. For full accuracy, please refer to the original arXiv source.

Highly accurate datasets from numerical or physical experiments are often expensive and time-consuming to acquire, posing a significant challenge for applications that require precise evaluations, potentially across multiple scenarios and in real time. Even building sufficiently accurate surrogate models can be extremely challenging with limited high-fidelity data. Conversely, less expensive, low-fidelity data can be computed more easily and encompass a broader range of scenarios. By leveraging multi-fidelity information, the prediction capabilities of surrogates can be improved. However, in practical situations, data may differ in type, come from sources of different modalities, and not be concurrently available, further complicating the modeling process. To address these challenges, we introduce a progressive multi-fidelity surrogate model. This model can sequentially incorporate diverse data types using tailored encoders. Multi-fidelity regression from the encoded inputs to the target quantities of interest is then performed using neural networks. Input information progressively flows from lower to higher fidelity levels through two sets of connections: concatenations among all the encoded inputs, and additive connections among the final outputs. This dual connection system enables the model to exploit correlations among different datasets while ensuring that each level makes an additive correction to the previous level without altering it. This approach prevents performance degradation as new input data are integrated into the model and automatically adapts predictions based on the available inputs. We demonstrate the effectiveness of the approach on numerical benchmarks and a real-world case study, showing that it reliably integrates multi-modal data and provides accurate predictions, maintaining performance when generalizing across time and parameter variations.


💡 Research Summary

The paper addresses the common problem in scientific computing where high‑fidelity (HF) data are scarce and expensive, while low‑fidelity (LF) data are cheap, abundant, and often come from heterogeneous sources (e.g., coarse simulations, simplified models, sensor measurements). Traditional multi‑fidelity methods usually assume homogeneous data (same physical quantity at different resolutions) and struggle with multi‑modal, asynchronous datasets. To overcome these limitations, the authors propose a Progressive Multi‑Fidelity Neural Network (PMF‑NN) framework that can ingest diverse data types sequentially and improve predictions without degrading previously learned knowledge.

Core Architecture

  • Level‑wise Encoders: Each fidelity level l has a dedicated encoder Φ⁽ˡ⁾ tailored to the modality of its input (feed‑forward for vectors, LSTM for time series, CNN/ViT for images, etc.). The encoder maps raw input x⁽ˡ⁾ ∈ ℝ^{d_in⁽ˡ⁾} to a latent vector h⁽ˡ⁾ ∈ ℝ^{d_h⁽ˡ⁾}, typically with d_h⁽ˡ⁾ ≪ d_in⁽ˡ⁾.
  • Latent Concatenation: All latent vectors up to the current level are concatenated, h_tot⁽ˡ⁾ = [h⁽¹⁾, h⁽²⁾, …, h⁽ˡ⁾], so the level‑l regressor can exploit correlations with all lower‑fidelity representations.
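
The mechanics above can be sketched in a few lines. The snippet below is a minimal, non-trainable illustration (random linear encoders and heads stand in for the trained networks, and the encoder/head names are hypothetical, not from the paper): each fidelity level encodes its own input, the level‑2 head sees the concatenation of all latents, and its output is added as a correction to the frozen level‑1 prediction, so predictions adapt automatically to which inputs are available.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(d_in, d_h):
    """Random linear encoder standing in for a trained Phi^(l): R^d_in -> R^d_h."""
    W = rng.standard_normal((d_h, d_in)) / np.sqrt(d_in)
    return lambda x: np.tanh(W @ x)

def make_head(d_tot, d_out):
    """Regression head mapping (concatenated) latents to an output term."""
    W = rng.standard_normal((d_out, d_tot)) / np.sqrt(d_tot)
    return lambda h: W @ h

# Two fidelity levels: an 8-dim low-fidelity input and a 4-dim high-fidelity input.
enc1 = make_encoder(8, 3)   # level-1 (LF) encoder, latent dim 3
enc2 = make_encoder(4, 3)   # level-2 (HF) encoder, latent dim 3
head1 = make_head(3, 1)     # level-1 head sees h^(1)
head2 = make_head(6, 1)     # level-2 head sees h_tot^(2) = [h^(1); h^(2)]

def predict(x1, x2=None):
    """Progressive prediction: level 2 adds a correction without altering level 1."""
    h1 = enc1(x1)
    y = head1(h1)                                # LF prediction y^(1)
    if x2 is not None:
        h_tot = np.concatenate([h1, enc2(x2)])   # latent concatenation
        y = y + head2(h_tot)                     # additive correction: y^(2) = y^(1) + delta
    return y

x1, x2 = rng.standard_normal(8), rng.standard_normal(4)
y_lf = predict(x1)        # prediction when only the LF input is available
y_hf = predict(x1, x2)    # refined prediction once the HF input arrives
```

Because the correction is purely additive, dropping the high-fidelity input recovers the original low-fidelity prediction exactly, which is the mechanism the paper credits for avoiding performance degradation as new data sources are integrated.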
