Stochastic adaptation of importance sampler
Improving the efficiency of importance samplers is a central problem in Monte Carlo research. While adaptive approaches are usually difficult to justify within the Markov chain Monte Carlo framework, their counterparts in importance sampling can be justified and validated easily. We propose an iterative adaptation method for learning the proposal distribution of an importance sampler based on stochastic approximation. The stochastic approximation method can recruit general iterative optimization techniques such as the minorization–maximization algorithm. The effectiveness of the approach in minimizing the Kullback–Leibler divergence between the proposal distribution and the target is demonstrated on several simple examples.
💡 Research Summary
The paper addresses the long‑standing problem of improving the efficiency of importance sampling (IS) by adaptively tuning the proposal distribution. While adaptive schemes are notoriously difficult to justify within the Markov chain Monte Carlo (MCMC) framework because they must preserve detailed balance, the authors argue that IS permits far more flexibility: the importance weights automatically correct for any change in the proposal. Their central idea is to view the adaptation of the proposal parameters as a stochastic approximation (SA) problem aimed at minimizing the Kullback‑Leibler (KL) divergence between the target density π and the proposal density f(·|θ). Minimizing KL(π‖fθ) is equivalent to maximizing the π‑expected log‑likelihood Eπ[log f(X|θ)], an expectation that can itself be estimated with importance weights from the sampler's own draws, so the proposal parameters can be updated iteratively by a Robbins–Monro‑type recursion.
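The recursion described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the target (a unit-variance Gaussian centered at 3), the fixed proposal scale, the step size γ_t = 1/t, and all variable names are assumptions chosen for the example. Each iteration draws from the current proposal f(·|θ), forms self-normalized importance weights π/fθ, estimates the gradient of Eπ[log f(X|θ)] with those weights, and takes an SA step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: unnormalized density of N(3, 1).
def target_unnorm(x):
    return np.exp(-0.5 * (x - 3.0) ** 2)

def proposal_logpdf(x, theta, sigma):
    return -0.5 * ((x - theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

theta = 0.0   # proposal mean, the parameter being adapted
sigma = 2.0   # fixed proposal scale (assumed for the sketch)
n = 500       # draws per iteration

for t in range(1, 201):
    x = rng.normal(theta, sigma, size=n)
    # Self-normalized importance weights w_i ∝ pi(x_i) / f(x_i | theta).
    log_w = np.log(target_unnorm(x)) - proposal_logpdf(x, theta, sigma)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Weighted estimate of grad_theta E_pi[log f(X|theta)] = E_pi[(X - theta)] / sigma^2.
    grad = np.sum(w * (x - theta)) / sigma**2
    # Robbins–Monro update with decreasing step size gamma_t = 1/t.
    theta += (1.0 / t) * sigma**2 * grad

print(theta)  # should approach the target mean, 3.0
```

For a Gaussian proposal family the KL-optimal mean is simply the target mean, so θ drifting toward 3 confirms the fixed point of the recursion; richer families (mixtures, or MM-style updates as in the abstract) would replace the gradient step while keeping the same weighted-expectation structure.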