Convergence of adaptive mixtures of importance sampling schemes


In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao–Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.


💡 Research Summary

The paper addresses a fundamental difficulty in Monte Carlo simulation: the choice of proposal distributions for importance sampling (IS). When the proposal is poorly matched to the target distribution, the variance of IS estimators can become infinite, rendering the simulation ineffective. To mitigate this problem, the authors study adaptive mixtures of a finite set of fixed proposal kernels within the framework of Population Monte Carlo (PMC), an iterated version of sampling‑importance‑resampling (SIR).
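The variance problem described above can be seen in a small numerical sketch (not from the paper; the target, proposals, and the `snis_second_moment` helper are illustrative assumptions): a self-normalized IS estimate of \(E[X^2]=1\) for a standard normal target is stable when the Gaussian proposal is wider than the target, but degenerates when the proposal's tails are lighter than the target's, since the importance weights then have infinite variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: estimate E[X^2] = 1 for a standard normal
# target by self-normalized importance sampling with a N(0, sigma^2)
# proposal, for a wide and a narrow choice of sigma.
def snis_second_moment(sigma, n=200_000):
    x = sigma * rng.standard_normal(n)  # draws from the proposal
    # log importance weight: log target(x) - log proposal(x)
    # (normalizing constants that cancel are dropped)
    log_w = -0.5 * x**2 + 0.5 * (x / sigma) ** 2 + np.log(sigma)
    w = np.exp(log_w - log_w.max())     # stabilize before exponentiating
    w /= w.sum()                        # self-normalize
    return float((w * x**2).sum())

wide = snis_second_moment(3.0)    # heavier-tailed proposal: stable estimate
narrow = snis_second_moment(0.3)  # lighter-tailed proposal: weights degenerate
```

With the wide proposal the estimate lands close to 1; with the narrow one, a handful of extreme draws dominate the weights and the estimate fluctuates wildly across runs, which is exactly the failure mode that motivates adapting the proposal.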

At each iteration \(t\) the algorithm draws \(N\) particles from a mixture proposal

\[
q_t(x) \;=\; \sum_{d=1}^{D} \alpha_{t,d}\, q_d(x),
\qquad \alpha_{t,d} \ge 0, \quad \sum_{d=1}^{D} \alpha_{t,d} = 1,
\]

where \(q_1,\dots,q_D\) are the fixed proposal kernels and the mixture weights \(\alpha_{t,d}\) are updated between iterations from the importance weights of the previous sample.
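One PMC iteration of this kind can be sketched in code. This is an illustrative toy version, not the paper's exact algorithm: the standard-normal target, the Gaussian random-walk kernels, and all parameter values are assumptions chosen for concreteness. The weight update shown is of the Rao-Blackwellized type, in that each particle contributes its posterior kernel probabilities to the new mixture weights rather than only its sampled kernel index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: a standard normal density known up to a constant.
def target(x):
    return np.exp(-0.5 * x**2)

scales = np.array([0.1, 1.0, 5.0])     # D = 3 fixed random-walk kernel scales
D, N, T = len(scales), 5000, 10
alpha = np.full(D, 1.0 / D)            # uniform initial mixture weights
x = rng.standard_normal(N)             # initial particle cloud

for t in range(T):
    # sample each particle's kernel index from the current mixture weights
    d = rng.choice(D, size=N, p=alpha)
    prop = x + scales[d] * rng.standard_normal(N)
    # weighted density of each kernel at each proposed point, shape (N, D)
    comp = alpha * np.exp(-0.5 * ((prop[:, None] - x[:, None]) / scales) ** 2) \
           / (scales * np.sqrt(2.0 * np.pi))
    mix = comp.sum(axis=1)             # full mixture density: the IS denominator
    w = target(prop) / mix
    w /= w.sum()                       # self-normalized importance weights
    # Rao-Blackwellized update: new weights are the importance-weighted
    # posterior probabilities of each kernel, comp / mix
    alpha = (w[:, None] * comp / mix[:, None]).sum(axis=0)
    alpha /= alpha.sum()
    # multinomial resampling: the SIR step of the PMC iteration
    x = prop[rng.choice(N, size=N, p=w)]
```

Using the full mixture density `mix` in the denominator, rather than only the density of the kernel each particle was actually drawn from, is what makes the scheme a valid importance sampler at every iteration regardless of how the weights \(\alpha_{t,d}\) were adapted.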

