Posterior Uncertainty for Targeted Parameters in Bayesian Bootstrap Procedures
We propose a general method to carry out a valid Bayesian analysis of a finite-dimensional ‘targeted’ parameter in the presence of a finite-dimensional nuisance parameter, and we apply it to causal inference based on estimating equations. While much of the literature on Bayesian causal inference relies on the conventional ‘likelihood times prior’ framework, a recently proposed method, the ‘Linked Bayesian Bootstrap’, departs from this classical setting and obtains valid Bayesian inference using the Dirichlet process and the Bayesian bootstrap. Such methods rely on an adjustment based on the propensity score and must account for the uncertainty in its estimation when studying the posterior distribution of a treatment effect. We study theoretically the asymptotic properties of the resulting posterior distribution and show that our proposed method, a generalized version of the ‘Linked Bayesian Bootstrap’, enjoys desirable frequentist properties; in particular, its credible intervals attain asymptotically correct coverage. We discuss applications of our method to misspecified and singly robust models in causal inference.
💡 Research Summary
This paper introduces a general Bayesian framework for inference on a finite‑dimensional “targeted parameter” when a finite‑dimensional nuisance parameter is also present. Unlike the traditional “likelihood × prior” approach, the authors exploit the relationship between the Dirichlet process (DP) and the Bayesian bootstrap (BB). By placing a DP(α) prior on the unknown data‑generating distribution and letting α→0, the posterior collapses to a DP based on the empirical distribution with Dirichlet(1,…,1) weights. Consequently, a posterior sample for any functional of the distribution can be obtained by repeatedly drawing Dirichlet weights and solving either a weighted loss‑minimization problem or a weighted estimating‑equation system.
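To make the mechanism concrete, here is a minimal sketch of Bayesian bootstrap posterior sampling for a simple functional (the population mean). The data-generating setup and the choice of functional are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_bootstrap(data, functional, n_draws=2000, rng=rng):
    """Posterior draws of a functional under the Bayesian bootstrap.

    Each draw re-weights the observed points with Dirichlet(1, ..., 1)
    weights and evaluates the functional at the re-weighted empirical
    distribution.
    """
    n = len(data)
    draws = np.empty(n_draws)
    for b in range(n_draws):
        w = rng.dirichlet(np.ones(n))        # flat Dirichlet weights
        draws[b] = functional(data, w)
    return draws

# Illustrative example: posterior for the mean E_F[O].
data = rng.normal(loc=1.0, scale=2.0, size=200)
post = bayesian_bootstrap(data, lambda x, w: np.sum(w * x))
print(post.mean(), np.quantile(post, [0.025, 0.975]))  # 95% credible interval
```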
The first methodological component, the Loss‑Likelihood Bootstrap, defines a loss ℓ(O;θ) whose minimizer θ₀ is the target. For each bootstrap replicate, random Dirichlet weights w_i are drawn and the weighted loss Σ_i w_i ℓ(O_i;θ) is minimized, yielding a draw θ^{(b)}. Theorem 2.1 shows that, under standard regularity conditions, √n(θ̂_n − θ₀) and √n(θ^{(b)} − θ̂_n) share the same asymptotic normal distribution N(0, J⁻¹ I J⁻¹). This establishes a Bayesian‑frequentist duality: posterior credible intervals centered at the bootstrap mean have the same asymptotic coverage as classical confidence intervals.
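A hedged sketch of this loss-minimization step, using a weighted logistic log-loss as an assumed example of ℓ(O;θ) (the paper's ℓ is generic; the simulated design below is purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Illustrative data: logistic regression with true coefficients (0.5, -1.0).
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([0.5, -1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ theta_true)))

def weighted_loss(theta, w):
    """Dirichlet-weighted logistic log-loss: sum_i w_i * l(O_i; theta)."""
    eta = X @ theta
    # log(1 + e^eta) computed stably via logaddexp(0, eta)
    return np.sum(w * (np.logaddexp(0.0, eta) - y * eta))

def loss_likelihood_draw():
    """One posterior draw: minimize the Dirichlet-weighted loss."""
    w = rng.dirichlet(np.ones(n))
    res = minimize(weighted_loss, x0=np.zeros(2), args=(w,), method="BFGS")
    return res.x

draws = np.array([loss_likelihood_draw() for _ in range(500)])
print(draws.mean(axis=0), draws.std(axis=0))  # posterior mean and spread
```

Note that the weights sum to one rather than n; this leaves the minimizer unchanged, since rescaling the loss does not move its argmin.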
The second component replaces loss minimization with estimating equations. Let m(O;θ) be a p‑dimensional vector such that E_{F₀}[m(O;θ₀)] = 0 identifies the target; each posterior draw θ^{(b)} then solves the Dirichlet-weighted system Σ_i w_i m(O_i;θ) = 0.
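Below is an analogous sketch for the estimating-equation version, with an assumed two-dimensional m(O;θ) targeting a mean and a variance; the specific moment functions and data are illustrative only:

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(2)
obs = rng.exponential(scale=2.0, size=250)   # observations O_1, ..., O_n

def m(o, theta):
    """Stacked estimating functions for the mean and variance:
    E_F[m(O; theta)] = 0 at theta = (mean, variance)."""
    return np.stack([o - theta[0], (o - theta[0]) ** 2 - theta[1]])

def weighted_ee_draw():
    """One posterior draw: solve sum_i w_i m(O_i; theta) = 0."""
    w = rng.dirichlet(np.ones(len(obs)))
    sol = root(lambda th: m(obs, th) @ w, x0=np.array([1.0, 1.0]))
    return sol.x

draws = np.array([weighted_ee_draw() for _ in range(500)])
print(draws.mean(axis=0))  # approximately (2, 4) for Exp(scale=2)
```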