Convergence of Noise-Free Sampling Algorithms with Regularized Wasserstein Proximals

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

In this work, we investigate the convergence properties of the backward regularized Wasserstein proximal (BRWP) method for sampling from a target distribution. The BRWP approach can be viewed as a semi-implicit time discretization of a probability flow ODE whose score function comes from a density satisfying the Fokker-Planck equation of overdamped Langevin dynamics. Specifically, the evolution of the density, and hence of the score function, is approximated via a kernel representation derived from the regularized Wasserstein proximal operator. By applying the dual formulation and a localized Taylor series to obtain the asymptotic expansion of this kernel formula, we establish guaranteed convergence, in terms of the Kullback-Leibler divergence, of the BRWP method toward a strongly log-concave target distribution. Our analysis also identifies the optimal and maximal step sizes for convergence. Furthermore, we demonstrate that the deterministic, semi-implicit BRWP scheme outperforms many classical Langevin Monte Carlo methods, such as the unadjusted Langevin algorithm (ULA), by offering faster convergence and reduced bias. Numerical experiments further validate the convergence analysis of the BRWP method.
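The step-size bias of ULA versus a deterministic, score-driven update can already be seen on a 1-D Gaussian target. The sketch below is illustrative only, not the paper's BRWP scheme: it uses the exact Gaussian score in place of the kernel approximation, and the step size, horizon, and particle count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
h, n_steps, n = 0.5, 200, 20000

# --- ULA for V(x) = x^2/2 (target N(0, 1), beta = 1) ---
x = rng.standard_normal(n) * 2.0          # broad initialization
for _ in range(n_steps):
    x = x - h * x + np.sqrt(2 * h) * rng.standard_normal(n)
ula_var = x.var()                         # biased: stationary variance is 1/(1 - h/2) = 4/3

# --- Deterministic probability-flow step with the exact Gaussian score ---
# For rho_k = N(0, sigma_k^2) the score is -x / sigma_k^2, so the Euler step
# x <- x + h * (-x + x / sigma_k^2) rescales sigma as sigma <- |1 - h + h/sigma^2| * sigma.
sigma = 2.0
for _ in range(n_steps):
    sigma = abs(1 - h + h / sigma**2) * sigma
det_var = sigma**2                        # converges to the exact target variance 1

print(f"ULA variance       ≈ {ula_var:.3f} (predicted bias: {1 / (1 - h / 2):.3f})")
print(f"deterministic var. ≈ {det_var:.6f}")
```

Even at this large step size, the noise-free update reaches the target variance exactly, while ULA equilibrates around an O(h)-biased variance; this is the qualitative gap the paper quantifies in KL divergence.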


💡 Research Summary

The paper investigates the convergence properties of the Backward Regularized Wasserstein Proximal (BRWP) method, a deterministic, noise‑free algorithm for sampling from a target distribution. The authors first reinterpret the probability‑flow ordinary differential equation (ODE) that governs the evolution of the density associated with overdamped Langevin dynamics. This ODE, ∂ₜρ = ∇·(∇V ρ) + β⁻¹Δρ, is known to be the Wasserstein‑2 gradient flow of the Kullback‑Leibler (KL) divergence D_KL(ρ‖ρ*). By regularizing the Wasserstein proximal operator (RWPO) with an entropy term, they obtain a closed‑form kernel representation:

(K_h^V ρ)(x) = ∫ [ exp(−(β/2)(V(x) + ‖x−y‖²/(2h))) / ∫ exp(−(β/2)(V(z) + ‖z−y‖²/(2h))) dz ] ρ(y) dy,

where h is the proximal step size and the inner integral normalizes the kernel so that it preserves total mass.
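A kernel of this form can be evaluated numerically in one dimension by simple grid quadrature. The sketch below is a hedged illustration, not the paper's algorithm: it assumes the normalized kernel G(x, y) ∝ exp(−(β/2)(V(x) + |x−y|²/(2h))), and the potential, grid, initial density, and parameters are all illustrative choices.

```python
import numpy as np

beta, h = 1.0, 0.2


def V(x):
    return 0.5 * x**2                     # strongly log-concave potential (illustrative)


x = np.linspace(-8.0, 8.0, 801)           # shared grid for the x, y, and z variables
dx = x[1] - x[0]

rho = np.exp(-0.5 * (x - 1.5) ** 2 / 0.5**2)   # initial density, off-center Gaussian
rho /= rho.sum() * dx                     # normalize on the grid

# G(x, y) = exp(-beta/2 (V(x) + |x-y|^2/(2h))) / int_z exp(-beta/2 (V(z) + |z-y|^2/(2h))) dz
log_num = -0.5 * beta * (V(x)[:, None] + (x[:, None] - x[None, :]) ** 2 / (2 * h))
log_den = np.log(np.exp(log_num).sum(axis=0) * dx)   # normalizing integral over z, per y
G = np.exp(log_num - log_den[None, :])

rho_h = G @ rho * dx                      # (K_h^V rho)(x) by quadrature over y
mass_h = rho_h.sum() * dx                 # kernel is normalized, so mass is preserved
mean_h = (x * rho_h).sum() * dx           # mean is pulled toward the minimizer of V

print(f"mass after one proximal step: {mass_h:.6f}")
print(f"mean after one proximal step: {mean_h:.3f} (started at 1.5)")
```

The column normalization makes G a Markov kernel on the grid, so one application conserves mass while the exp(−βV(x)/2) tilt drags the density toward the mode of the target, which is the mechanism the asymptotic expansion in the paper makes precise.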

