Estimation of log-Gaussian gamma processes with iterated posterior linearization and Hamiltonian Monte Carlo

Stochastic processes are a flexible and widely used family of models in statistics. While stochastic processes offer attractive properties such as built-in quantification of uncertainty, their inference is typically intractable, with the notable exception of Gaussian processes. Inference for models with non-Gaussian errors typically involves estimating a high-dimensional latent variable. We propose two methods that use iterated posterior linearization followed by Hamiltonian Monte Carlo to sample the posterior distributions of such latent models, with a particular focus on log-Gaussian gamma processes. The proposed methods are validated on two synthetic datasets generated from the log-Gaussian gamma process and on a multiscale biocomposite stiffness model. In addition, we apply the methodology to an experimental Raman spectrum of argentopyrite.


💡 Research Summary

This paper addresses the challenging problem of Bayesian inference for log‑Gaussian‑gamma (LG‑Gamma) processes, a class of hierarchical stochastic models where positive measurements are modeled as gamma‑distributed observations and the logarithms of the gamma shape and rate parameters are each assigned independent Gaussian process (GP) priors. The resulting latent field consists of two high‑dimensional GP realizations (α for the log‑shape and β for the log‑rate) together with their GP hyper‑parameters (means, signal variances, length‑scales, and nugget variances). Consequently, the posterior lives in a space of dimension 2K + 2D + 6, where K is the number of observation locations and D the input dimensionality. Traditional Markov chain Monte Carlo (MCMC) methods, including Hamiltonian Monte Carlo (HMC), become computationally prohibitive in such settings because each iteration requires evaluating the full non‑linear likelihood and its gradient with respect to all latent variables.
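As a concrete illustration, the sketch below simulates draws from such an LG-Gamma process on a one-dimensional grid (D = 1). The squared-exponential kernel and all hyper-parameter values are assumptions chosen only for illustration, not values taken from the paper.

```python
import numpy as np

def sq_exp_kernel(x, signal_var, length_scale, nugget):
    """Squared-exponential covariance matrix with a nugget term."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2) + nugget * np.eye(len(x))

rng = np.random.default_rng(0)
K = 200                                  # number of observation locations
x = np.linspace(0.0, 1.0, K)             # 1-D inputs (D = 1)

# Hypothetical hyper-parameter values, chosen only for illustration.
mu_a, mu_b = 1.0, 0.0                    # GP means for log-shape and log-rate
C_a = sq_exp_kernel(x, signal_var=0.5, length_scale=0.2, nugget=1e-4)
C_b = sq_exp_kernel(x, signal_var=0.5, length_scale=0.2, nugget=1e-4)

alpha = rng.multivariate_normal(mu_a * np.ones(K), C_a)   # log-shape field
beta  = rng.multivariate_normal(mu_b * np.ones(K), C_b)   # log-rate field

# Gamma observations with shape exp(alpha_k) and rate exp(beta_k);
# numpy parameterizes Gamma by (shape, scale), so scale = exp(-beta).
y = rng.gamma(shape=np.exp(alpha), scale=np.exp(-beta))
```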

The authors propose two complementary strategies that combine iterated posterior linearization (IPL) with HMC to obtain accurate posterior samples at a fraction of the cost of a naïve HMC implementation. IPL is an iterative scheme that approximates the non‑linear observation model π(y | α,β) by a first‑order Taylor expansion around the current posterior estimate of (α,β). This yields a conditionally Gaussian likelihood, allowing an exact Gaussian update for the latent fields. By repeatedly updating the linearization point and recomputing the Gaussian posterior moments, IPL progressively corrects for the non‑linearity while keeping the computational burden low (the dominant cost is solving a linear system involving the GP covariance matrices). After convergence, IPL provides an approximate Gaussian posterior for (α,β) characterized by a mean vector and covariance matrix. These moments are then used as an informed proposal distribution for HMC, which samples the remaining GP hyper‑parameters (μ_α,θ_α,μ_β,θ_β). Because the high‑dimensional latent fields are effectively integrated out by IPL, HMC operates in a much lower‑dimensional space, leading to faster mixing and reduced wall‑clock time.
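To make the Gaussian update concrete, here is a minimal sketch of the IPL iteration for the LG-Gamma model, assuming a first-order linearization of the conditional mean E[y_k | α_k, β_k] = exp(α_k − β_k) with the conditional variance exp(α_k − 2β_k) acting as the effective observation noise. The function name, stacking convention, and simple convergence check are ours; the paper's scheme may differ in detail.

```python
import numpy as np

def ipl_gamma(y, m0, K0, n_iter=20, tol=1e-8):
    """Iterated posterior linearization for the stacked field f = [alpha; beta].

    Observation model: y_k ~ Gamma(shape=exp(alpha_k), rate=exp(beta_k)).
    The likelihood is replaced by a Gaussian whose mean is linearized around
    the current estimate f_hat and whose variance is the conditional variance
    of y evaluated there.
    """
    K = len(y)
    f_hat = m0.copy()
    K0_inv = np.linalg.inv(K0)          # fine for a sketch; prefer solves at scale
    for _ in range(n_iter):
        a, b = f_hat[:K], f_hat[K:]
        mu = np.exp(a - b)              # E[y | alpha, beta]
        v = np.exp(a - 2.0 * b)         # Var[y | alpha, beta]
        # Jacobian of mu w.r.t. (alpha, beta): d mu/d alpha = mu, d mu/d beta = -mu
        J = np.hstack([np.diag(mu), np.diag(-mu)])          # shape (K, 2K)
        # Exact Gaussian update for the linearized model
        # y ≈ mu + J (f - f_hat) + eps,  eps ~ N(0, diag(v))
        P_inv = K0_inv + J.T @ (J / v[:, None])
        rhs = K0_inv @ m0 + J.T @ ((y - mu + J @ f_hat) / v)
        m_new = np.linalg.solve(P_inv, rhs)
        if np.max(np.abs(m_new - f_hat)) < tol:
            f_hat = m_new
            break
        f_hat = m_new
    return f_hat, np.linalg.inv(P_inv)  # approximate posterior mean and covariance
```

The returned moments play the role described above: they summarize the high-dimensional latent fields so that HMC only has to explore the remaining 2D + 6 hyper-parameters.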

The second strategy builds on the IPL‑derived Gaussian approximation as the initial distribution of a tempering (or annealing) sequence. A series of intermediate target distributions is defined by raising the likelihood to a temperature β_t ∈ (0, 1], so that the sequence of targets moves gradually from the tractable Gaussian approximation toward the full non‑Gaussian posterior, with HMC used to sample each intermediate target.
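Below is a minimal sketch of how such a tempered target can be written down, assuming a geometric path between the IPL Gaussian q₀ (recovered as β_t → 0) and the unnormalized posterior (β_t = 1); the paper's exact annealing path and temperature schedule may differ, and the callables `log_prior` and `log_lik` are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def make_tempered_logpdf(log_prior, log_lik, q0_mean, q0_cov):
    """Geometric tempering path between the IPL Gaussian q0 (beta_t -> 0)
    and the full posterior (beta_t = 1); an assumed form, for illustration."""
    q0 = multivariate_normal(mean=q0_mean, cov=q0_cov)

    def log_pi_t(f, beta_t):
        log_target = log_prior(f) + log_lik(f)          # unnormalized posterior
        return (1.0 - beta_t) * q0.logpdf(f) + beta_t * log_target

    return log_pi_t

# Hypothetical usage: run HMC on log_pi_t at each rung of a ladder
# beta_t in (0, 1], warm-starting each rung from the previous one, e.g.
# betas = np.linspace(0.1, 1.0, 10)
```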

