HybridOM: Hybrid Physics-Based and Data-Driven Global Ocean Modeling with Efficient Spatial Downscaling
Global ocean modeling is vital for climate science but struggles to balance computational efficiency with accuracy. Traditional numerical solvers are accurate but computationally expensive, while pure deep learning approaches, though fast, often lack physical consistency and long-term stability. To address this, we introduce HybridOM, a framework integrating a lightweight, differentiable numerical solver as a skeleton to enforce physical laws, with a neural network as the flesh to correct subgrid-scale dynamics. To enable efficient high-resolution modeling, we further introduce a physics-informed regional downscaling mechanism based on flux gating. This design achieves the inference efficiency of AI-based methods while preserving the accuracy and robustness of physical models. Extensive experiments on the GLORYS12V1 and OceanBench datasets validate HybridOM’s performance in two distinct regimes: long-term subseasonal-to-seasonal simulation and short-term operational forecasting coupled with the FuXi-2.0 weather model. Results demonstrate that HybridOM achieves state-of-the-art accuracy while strictly maintaining physical consistency, offering a robust solution for next-generation ocean digital twins. Our source code is available at https://github.com/ChiyodaMomo01/HybridOM.
💡 Research Summary
HybridOM presents a novel hybrid architecture that unites a lightweight, differentiable numerical solver (“Physical Skeleton”) with a deep neural network (“Neural Flesh”) to model global ocean dynamics. The Physical Skeleton enforces conservation laws by formulating temperature and salinity transport as a flux‑divergence equation and by projecting momentum dynamics onto a quasi‑geostrophic potential vorticity (PV) space, thereby filtering fast gravity waves and reducing computational stiffness. This core adds only 20‑30 % overhead compared with a pure neural model while guaranteeing mass and energy balance.
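The flux‑divergence formulation is what makes tracer conservation automatic: the tendency of a tracer is the divergence of its fluxes, so on a closed (here, periodic) grid the total tracer content cannot change. The following is a minimal NumPy sketch of one such update step; the function name, the centered‑difference scheme, and the periodic boundaries are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def advect_tracer(T, u, v, dx, dy, dt):
    """Hypothetical sketch of one conservative tracer update,
    dT/dt = -div(F) with F = (u*T, v*T), on a periodic grid.
    The paper's solver is differentiable; plain NumPy is used here
    only to illustrate the flux-divergence form."""
    Fx = u * T                     # zonal flux at each cell
    Fy = v * T                     # meridional flux at each cell
    # Centered differences with periodic wrap-around (np.roll).
    dFx = (np.roll(Fx, -1, axis=1) - np.roll(Fx, 1, axis=1)) / (2 * dx)
    dFy = (np.roll(Fy, -1, axis=0) - np.roll(Fy, 1, axis=0)) / (2 * dy)
    return T - dt * (dFx + dFy)    # explicit Euler step
```

Because the divergence of any periodic flux field sums to zero over the grid, the domain‑integrated tracer (total "mass") is conserved to machine precision, which is the property the Physical Skeleton is built to guarantee.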
The Neural Flesh is built as a U‑shaped encoder‑decoder equipped with Dual‑Scale Ocean Attention (DSOA). DSOA splits features into a local windowed branch that captures fine‑scale turbulence and a global grid branch that models basin‑wide teleconnections, achieving linear computational complexity with respect to domain size. The network learns residual tendencies that correct the coarse physics, effectively compensating for sub‑grid processes omitted in the simplified solver.
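The two DSOA branches can be sketched with plain dot‑product attention: the local branch attends only within fixed‑size windows (cost linear in domain size), while the global branch lets every position query a small strided set of coarse tokens. This is a toy NumPy illustration of that split; the window/stride sizes, the additive fusion, and the function names are assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention.
    d = q.shape[-1]
    return softmax(q @ k.swapaxes(-1, -2) / np.sqrt(d)) @ v

def dual_scale_attention(x, window=4, stride=4):
    """Toy sketch of the dual-scale idea behind DSOA (illustrative only).

    x: (H, W, C) feature map, H and W divisible by `window`.
    Local branch: self-attention inside non-overlapping windows.
    Global branch: every position queries a strided coarse token grid.
    """
    H, W, C = x.shape
    local = np.empty_like(x)
    for i in range(0, H, window):
        for j in range(0, W, window):
            patch = x[i:i + window, j:j + window].reshape(-1, C)
            out = attention(patch, patch, patch)
            local[i:i + window, j:j + window] = out.reshape(window, window, C)
    # Coarse tokens for basin-wide context; their count is fixed per stride,
    # so the global branch also stays linear in the number of grid points.
    tokens = x[::stride, ::stride].reshape(-1, C)
    glob = attention(x.reshape(-1, C), tokens, tokens).reshape(H, W, C)
    return local + glob  # simple additive fusion for illustration
```

Window attention alone would confine information to each tile; the coarse global branch is what carries teleconnection‑scale signal across the basin at negligible extra cost.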
Hybrid integration is performed at each time step: the solver first advances thermodynamic and PV fields, then a learnable inverse operator reconstructs velocity, and finally the Neural Flesh injects a non‑linear correction. Training follows an a‑posteriori strategy: the differentiable solver is unrolled for five days, and the cumulative forecast error is minimized. This long‑range loss forces the neural component to counteract numerical drift, yielding superior long‑term stability compared with one‑step training.
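The a‑posteriori objective can be written as a short rollout loop: physics step, neural residual, accumulate error against the next target, repeat. The sketch below assumes placeholder callables `solver_step` and `neural_correction` standing in for the differentiable solver and the Neural Flesh; only the structure of the unrolled loss is taken from the text.

```python
import numpy as np

def rollout_loss(state, targets, solver_step, neural_correction, dt=1.0):
    """Sketch of the a-posteriori (unrolled) training objective.

    Rather than penalizing a single step, the loss accumulates forecast
    error over the whole unrolled trajectory (e.g. a 5-day unroll with
    daily targets), which pushes the neural residual to cancel the
    numerical drift that builds up over many steps.
    """
    loss, x = 0.0, state
    for target in targets:
        x = solver_step(x, dt)            # Physical Skeleton advances the state
        x = x + neural_correction(x)      # Neural Flesh injects a residual
        loss += np.mean((x - target) ** 2)
    return loss / len(targets)
```

In the paper's setting the solver itself is differentiable, so gradients of this cumulative loss flow back through every solver step to the network parameters; that end‑to‑end path is exactly what one‑step training lacks.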
For regional high‑resolution downscaling, HybridOM introduces Differentiable Flux Gating (DFG). Coarse‑grid fluxes are interpolated onto the fine grid and fused with fine‑grid fluxes through an adaptive soft‑gating mechanism followed by a residual refinement network. The resulting gated flux directly updates the high‑resolution state, ensuring that large‑scale transport trends are honored while allowing local dynamics to evolve freely. By operating in physical flux space, DFG avoids the physical inconsistencies that plague pure AI super‑resolution methods.
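The gating step itself is a per‑point convex blend of the two flux estimates plus an optional learned residual. A minimal NumPy sketch, assuming the coarse flux has already been interpolated onto the fine grid (nearest‑neighbour upsampling shown as one simple choice); names and the sigmoid gate are illustrative, not the paper's exact design.

```python
import numpy as np

def upsample_nearest(flux_coarse, factor):
    """Nearest-neighbour interpolation of a coarse flux onto the fine grid."""
    return np.repeat(np.repeat(flux_coarse, factor, axis=0), factor, axis=1)

def flux_gating(flux_fine, flux_coarse_up, gate_logits, refine=None):
    """Toy sketch of Differentiable Flux Gating (hypothetical shapes/names).

    A sigmoid gate blends the interpolated coarse flux with the fine-grid
    flux at every point, so large-scale transport is honored where the gate
    opens toward the coarse estimate, while local dynamics dominate elsewhere.
    """
    g = 1.0 / (1.0 + np.exp(-gate_logits))          # soft gate in (0, 1)
    fused = g * flux_coarse_up + (1.0 - g) * flux_fine
    if refine is not None:
        fused = fused + refine(fused)               # residual refinement net
    return fused
```

Because the blend happens in flux space and the fused flux then drives the high‑resolution state update, the downscaled field inherits the conservation structure of the solver rather than being painted in pixel space as in pure super‑resolution.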
Extensive experiments on the GLORYS12V1 reanalysis and the OceanBench benchmark evaluate two regimes: (1) long‑term subseasonal‑to‑seasonal (S2S) simulations over 30 days, where HybridOM reduces root‑mean‑square error by ~15 % and keeps mass‑conservation error below 0.4 % relative to traditional GCM baselines; (2) short‑term operational forecasting coupled with the FuXi‑2.0 weather model, achieving state‑of‑the‑art (SOTA) performance for 10‑day global ocean forecasts, with sea‑surface‑height RMSE of 0.12 m and strict physical consistency. Inference speed matches that of pure deep‑learning models, while the embedded physics provides robustness across diverse climate forcing scenarios without retraining.
The paper’s contributions are threefold: (i) a global differentiable hybrid architecture that blends rigorous fluid dynamics with expressive machine learning, delivering high‑precision, stable simulations; (ii) a SOTA operational forecasting system when coupled with a leading atmospheric model; and (iii) a physics‑informed downscaling technique that bridges coarse global outputs and fine regional dynamics through flux gating. HybridOM thus offers a compelling pathway toward next‑generation ocean digital twins, combining computational efficiency, accuracy, and physical fidelity.