A Solver for Massively Parallel Direct Numerical Simulation of Three-Dimensional Multiphase Flows
We present a new solver for massively parallel simulations of fully three-dimensional multiphase flows. The solver runs on a variety of computer architectures, from laptops to supercomputers, and on 65536 threads or more (limited only by the number of threads available to us). The code is wholly written by the authors in Fortran 2003 and uses a domain decomposition strategy for parallelization with MPI. The fluid interface solver is based on a parallel implementation of the LCRM hybrid Front Tracking/Level Set method, designed to handle highly deforming interfaces with complex topology changes. We discuss the implementation of this interface method and its particular suitability to distributed processing, where all operations are carried out locally on distributed subdomains. We have developed parallel GMRES and Multigrid iterative solvers suited to the linear systems arising from the implicit solution of the fluid velocities and pressure in the presence of strong density and viscosity discontinuities across fluid phases. Particular attention is drawn to the details and performance of the parallel Multigrid solver. The code includes modules for flow interaction with immersed solid objects, contact line dynamics, and species and thermal transport with phase change. Here, however, we focus on the simulation of the canonical problem of drop splash onto a liquid film and report on the parallel performance of the code on varying numbers of threads. The 3D simulations were run on mesh resolutions up to $1024^3$, with results at the higher resolutions showing the fine details and features of droplet ejection, crown formation, and rim instability observed under similar experimental conditions.
Research Summary
The paper presents a highly scalable parallel solver for three‑dimensional incompressible multiphase flows, written entirely in Fortran 2003 and built on an MPI domain‑decomposition framework. The core of the method is the Level Contour Reconstruction Method (LCRM), a hybrid Front‑Tracking/Level‑Set technique that retains the sharp interface representation of Front‑Tracking while using a distance‑function field to periodically reconstruct the interface mesh. By storing interface elements as independent triangular patches and avoiding any logical connectivity, LCRM makes all interface operations local to each element, which is crucial for parallelization.
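The element-local character of connectivity-free storage can be illustrated with a minimal sketch (in Python, not the authors' Fortran; the function names are ours). Because each triangle carries its own three vertices and no shared vertex indices, its normal and area follow from that one element alone, so elements can be distributed across ranks freely:

```python
# Illustrative sketch only: interface elements stored as independent
# triangles with no logical connectivity, so geometric operations are
# local to a single element (as in the LCRM description above).

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def element_normal_and_area(a, b, c):
    """Unit normal and area from the element's own vertices alone
    (assumes a non-degenerate triangle)."""
    n = cross(sub(b, a), sub(c, a))
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(x / norm for x in n), 0.5 * norm

# An "interface" is just a flat list of vertex triples -- no shared
# connectivity table that would require global communication to update.
interface = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
normal, area = element_normal_and_area(*interface[0])
```

Each element's contribution (e.g. to the surface-tension force spread onto the Eulerian grid) can therefore be computed without consulting any neighboring element.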
To accommodate the moving Lagrangian interface within a distributed memory environment, the authors introduce two buffer zones for each subdomain: a traditional boundary buffer for exchanging Eulerian field data (velocity, pressure) and an “extended interface buffer” that holds interface triangles and distance‑function values crossing subdomain boundaries. The extended buffer allows each MPI rank to perform interface advection, surface‑tension force calculation, and periodic reconstruction independently, requiring only limited communication of distance‑function data to maintain global continuity.
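The selection rule behind such an extended buffer can be sketched as follows (a simplified Python illustration under our own assumptions, not the paper's Fortran/MPI code): a rank owning the box `[lo, hi)` also keeps copies of any triangle with a vertex inside the box enlarged by a buffer width, so advection and force spreading near subdomain faces need no further communication within a step:

```python
# Hypothetical sketch of the "extended interface buffer" selection.
# lo, hi: corners of this rank's subdomain box; buffer: extension width.
# All names and the vertex-based overlap test are our simplification.

def in_extended_buffer(tri, lo, hi, buffer):
    """True if any vertex of the triangle lies in the enlarged box."""
    ext_lo = [l - buffer for l in lo]
    ext_hi = [h + buffer for h in hi]
    return any(all(ext_lo[d] <= v[d] < ext_hi[d] for d in range(3))
               for v in tri)

def local_triangles(all_tris, lo, hi, buffer):
    """Triangles this rank keeps: owned ones plus buffer copies."""
    return [t for t in all_tris if in_extended_buffer(t, lo, hi, buffer)]
```

In the distributed setting each rank would of course receive candidate triangles only from its face neighbors rather than filter a global list, but the membership test is the same.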
The Navier‑Stokes equations are solved in a single‑field formulation that incorporates strong density and viscosity jumps via an indicator function derived from the distance field. The resulting linear systems are tackled with a parallel GMRES solver preconditioned by a custom multigrid algorithm. The multigrid hierarchy respects the same domain decomposition at every level and includes physics‑aware smoothing to ensure robust convergence even with large property contrasts.
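A common way to realize such a single-field formulation is to blend material properties through a smoothed Heaviside of the signed distance $\phi$, e.g. $\rho = \rho_g + (\rho_l - \rho_g)\,H_\varepsilon(\phi)$; the sketch below uses the standard sine-smoothed Heaviside from the level-set literature, which may differ in detail from the paper's indicator function:

```python
import math

def smoothed_heaviside(phi, eps):
    """Standard sine-smoothed Heaviside of the signed distance phi;
    eps is the half-width of the transition band (a sketch, not
    necessarily the paper's exact indicator)."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps
                  + math.sin(math.pi * phi / eps) / math.pi)

def blended_property(phi, val_gas, val_liq, eps):
    """Single-field property, e.g. rho = rho_g + (rho_l - rho_g)*H(phi)."""
    return val_gas + (val_liq - val_gas) * smoothed_heaviside(phi, eps)
```

With this construction the density and viscosity jumps enter the one set of Navier-Stokes equations directly, which is exactly what stresses the linear solvers when the contrasts are large.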
Performance is demonstrated on the canonical problem of a droplet impacting a thin liquid film. Simulations are carried out on uniform Cartesian grids of up to 1024³ cells, capturing droplet ejection, crown formation, and rim instability with fidelity comparable to experimental observations. Strong-scaling tests show near-linear speed-up from 256 to 65536 MPI threads; the multigrid preconditioner reduces the total solution time to less than 30% of the time spent on explicit operations.
The authors conclude that the combination of a locally‑operating hybrid interface method, the extended interface buffer, and a physics‑aware multigrid solver enables truly massive parallelism for DNS of complex multiphase flows. The framework is extensible to immersed solid objects, contact‑line dynamics, and phase‑change transport, positioning it as a versatile platform for future high‑resolution multiphysics simulations.