Deep Learning of Mean First Passage Time Scape: Chemical Short-Range Order and Kinetics of Diffusive Relaxation

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Processes slow compared to atomic vibrations pose significant challenges in atomistic simulations, particularly for phenomena such as diffusive relaxations and phase transitions, where repeated crossings and the sheer number of thermally activated transitions make direct numerical simulations impossible. We present a computational framework that captures atomic-scale diffusive relaxation over extended timescales by learning the mean first passage time (MFPT) with a deep neural network. The model is trained via a self-consistent recursive formulation based on the Markovian assumption, relying solely on local residence times and transition probabilities between neighboring states. Furthermore, we leverage deep reinforcement learning (DRL)-accelerated atomistic simulations to expedite the identification of thermodynamic equilibrium and the generation of accurate physical transition probabilities. Applied to vacancy-mediated chemical short-range order (SRO) evolution in equiatomic CrCoNi, our method uncovers disorder-to-order transition timescales in quantitative agreement with experimental measurements. By bridging the gap between simulation and experiment, our approach extends atomistic modeling to previously inaccessible timescales and offers a predictive tool for navigating process-structure-property relationships.


💡 Research Summary

The paper introduces a novel computational framework that enables atomistic simulations of diffusion‑driven relaxation processes over experimentally relevant timescales by learning the Mean First Passage Time (MFPT) with a deep neural network. Traditional molecular dynamics (MD) is limited to nanoseconds–microseconds because of the need for femtosecond time steps, and existing accelerated dynamics methods (hyper‑dynamics, parallel replica, diffusive MD, etc.) still struggle when many thermally activated events occur repeatedly.

The authors formulate MFPT as the average time for a given atomic configuration s₀ to reach a thermodynamic equilibrium set G (e.g., the chemically short‑range ordered state in a high‑entropy alloy). Assuming Markovian dynamics, MFPT satisfies a recursive Bellman‑type equation:

 MFPT(s → G) = t_residence(s) + Σ_{a ∈ A_s} P(a) · MFPT(s′ → G)

where t_residence(s) is the mean residence time in state s, P(a) is the transition probability of action a (s→s′), and the sum runs over all possible neighboring actions. This equation is analogous to the Bellman equation in reinforcement learning (RL) and can be solved iteratively.
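When the transition network is small enough to enumerate, this recursion can be solved directly by fixed-point iteration rather than by a neural network. The following minimal sketch (a toy symmetric random walk with unit residence times, not the paper's system or implementation) illustrates the self-consistent structure of the equation, with the target set G pinned to zero passage time:

```python
import numpy as np

def mfpt_fixed_point(P, t_res, target, n_iter=10_000, tol=1e-12):
    """Iterate T(s) = t_res(s) + sum_s' P[s, s'] * T(s') with T(target) = 0."""
    T = np.zeros(len(t_res))
    for _ in range(n_iter):
        T_new = t_res + P @ T
        T_new[target] = 0.0        # states in G have zero passage time by definition
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

# Toy chain: 4 states, reflecting at state 0, target (equilibrium) at state 3,
# symmetric hops and unit residence times -- placeholder numbers for illustration.
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
t_res = np.ones(4)
T = mfpt_fixed_point(P, t_res, target=3)
print(T)   # converges to the analytic result [9, 8, 5, 0]
```

The iteration is a contraction whenever the target set is reachable from every state, which is exactly the condition under which the MFPT is finite.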

To train a neural network θ that approximates MFPT(s), the authors need a dataset of state‑transition triples (s, s′, t_residence(s)). They generate this data efficiently using deep reinforcement learning (DRL)–accelerated atomistic simulations. Two DRL agents are introduced:

  1. DRL‑LSS (Lower‑Energy State Sampler) – an RL policy that performs physically admissible vacancy hops to rapidly descend the free‑energy landscape and locate the equilibrium region G.
  2. DRL‑TKS (Transition‑Kinetics Simulator) – a second RL policy that, starting from intermediate states sampled by DRL‑LSS and by conventional Metropolis Monte Carlo (MMC), generates kinetically consistent trajectories of vacancy hops.

The transition barriers E_A and attempt frequencies ν_A required for computing P(a) are predicted by a graph neural network (GNN) reaction model. This GNN is trained on a large dataset generated with the universal machine‑learning interatomic potential MACE‑MP‑0, avoiding expensive nudged‑elastic‑band calculations. The GNN achieves high accuracy on both barrier heights and logarithms of attempt frequencies for in‑distribution test data.
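Given GNN-predicted barriers and attempt frequencies, converting them into the transition probabilities P(a) and residence time used in the MFPT recursion follows harmonic transition-state theory: each candidate hop a gets a rate k_a = ν_a · exp(−E_a / k_B T), the residence time is 1/Σ k_a, and P(a) = k_a / Σ k_a. A minimal sketch (the barrier and frequency values below are placeholders, not GNN output):

```python
import numpy as np

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def tst_kinetics(E_A, nu_A, temperature):
    """Harmonic TST: per-hop rates k_a = nu_a * exp(-E_a / (k_B T)).
    Returns the transition probabilities P(a) and the mean residence time."""
    rates = nu_A * np.exp(-E_A / (K_B * temperature))
    k_tot = rates.sum()
    return rates / k_tot, 1.0 / k_tot

# Illustrative numbers for the 12 nearest-neighbour vacancy hops in an fcc
# lattice: barriers in eV, attempt frequencies in 1/s (placeholder values).
E_A  = np.array([0.80, 0.95, 1.05, 0.90] * 3)
nu_A = np.full(12, 1.0e13)
probs, t_res = tst_kinetics(E_A, nu_A, temperature=500.0)
```

Note the strong temperature sensitivity: because the barrier enters exponentially, the residence time grows by orders of magnitude on cooling, which is why the disorder-to-order timescale spans seconds to hours over the 300–800 K range studied.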

With the transition network in hand, the authors construct a loss function based on the recursive MFPT equation and apply temporal‑difference (TD) learning to update θ via back‑propagation. The resulting MFPT function, termed a “timescape,” directly maps any atomic configuration to its expected relaxation time toward equilibrium.

The methodology is demonstrated on equiatomic CrCoNi, a medium‑entropy alloy where vacancy‑mediated diffusion drives the formation of chemical short‑range order (SRO). DRL‑LSS combined with MMC identifies the centroid and fluctuation width of the SRO equilibrium region G. DRL‑TKS then supplies a rich set of kinetic trajectories across a range of temperatures (300–800 K). The trained MFPT model predicts SRO ordering times that quantitatively match experimental annealing measurements (from seconds to hours), whereas conventional kinetic Monte Carlo would require simulation times equivalent to centuries.

Key advantages of the approach include:

  • Data efficiency – DRL accelerates the discovery of equilibrium states and provides diverse transition samples without exhaustive enumeration.
  • Physical fidelity – Transition probabilities are grounded in transition‑state theory (TST) via GNN‑predicted barriers, preserving the underlying physics while avoiding costly first‑principles calculations.
  • Direct time prediction – By learning MFPT, the framework yields an explicit physical time scale, enabling direct comparison with experiments and facilitating process‑structure‑property mapping.

Limitations are acknowledged: the Markov assumption may break down for systems with long‑range correlations or memory effects, and the state space can become prohibitively large for multi‑defect or multi‑component systems, requiring additional dimensionality‑reduction or hierarchical sampling strategies.

In summary, the paper presents a powerful, scalable strategy that couples deep reinforcement learning, graph‑based reaction modeling, and recursive MFPT learning to bridge the gap between atomistic simulations and real‑world timescales. This opens new avenues for predictive materials design where processing conditions, microstructural evolution, and functional properties can be quantitatively linked.

