The geometry of the adapted Bures–Wasserstein space
The adapted Bures–Wasserstein space consists of Gaussian processes endowed with the adapted Wasserstein distance. It can be viewed as the analogue of the classical Bures–Wasserstein space in optimal transport for the setting of stochastic processes, where the standard Wasserstein distance is inadequate and has to be replaced by its adapted counterpart. We develop a comprehensive geometric theory for the adapted Bures–Wasserstein space, thereby also providing the first results on the fine geometric structure of adapted optimal transport. In particular, we show that the adapted Bures–Wasserstein space is an Alexandrov space with non-negative curvature and provide explicit descriptions of tangent cones and exponential maps. Moreover, we show that Gaussian processes satisfying a natural non-degeneracy condition form a geodesically convex subspace. This subspace is characterized precisely by the property that its tangent cones are linear and hence coincide with the tangent space.
💡 Research Summary
The paper introduces the “adapted Bures–Wasserstein space,” a geometric framework for Gaussian processes equipped with the adapted Wasserstein distance (AW₂). Classical optimal transport, based on the 2‑Wasserstein distance (W₂), fails to capture the progressive revelation of information in stochastic processes because it ignores the underlying filtration. To remedy this, the authors define AW₂, which respects the natural filtration generated by independent standard Gaussian increments and measures the cost of transporting one adapted process into another while preserving the adapted distribution.
Focusing on Gaussian processes, the authors parametrize the covariance structure via block lower‑triangular matrices L ∈ ℒ, where each block L_{t,s} (for s ≤ t) maps the independent Gaussian increments G_s to the process value X_t = Σ_{s≤t} L_{t,s} G_s. Two processes X = L G and Y = M G have an adapted Bures–Wasserstein distance d_ABW(L,M) that coincides with the AW₂ distance between them. Crucially, d_ABW admits a closed‑form matrix optimisation.
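The parametrization X = L G can be illustrated numerically. The following sketch (a minimal example, assuming one‑dimensional blocks and an arbitrary choice of L for illustration; the dimensions and the Monte Carlo check are not taken from the paper) builds a lower‑triangular L, samples the independent standard Gaussian increments G, and verifies empirically that the covariance of X = L G is L Lᵀ:

```python
# Sketch: a Gaussian process X_t = sum_{s<=t} L_{t,s} G_s built from a
# lower-triangular matrix L acting on independent standard Gaussian
# increments G. One-dimensional blocks are an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
T = 4  # number of time steps (hypothetical choice)

# An arbitrary lower-triangular L in the class \mathcal{L}
L = np.tril(rng.standard_normal((T, T)))

# Sample many paths: each column of G is one draw of the increments
n = 200_000
G = rng.standard_normal((T, n))
X = L @ G  # X[t] = sum over s <= t of L[t, s] * G[s]

# The covariance of X is L L^T; check this up to Monte Carlo error
emp_cov = (X @ X.T) / n
print(np.max(np.abs(emp_cov - L @ L.T)))
```

Because L is lower triangular, X_t depends only on the increments G_s with s ≤ t, which is exactly the adaptedness of the process to the filtration generated by G.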