An efficient method of posterior sampling for Poisson INGARCH models

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

We develop an efficient posterior sampling scheme for Poisson INGARCH models. The proposed method is based on an approximation of the posterior density that exploits the Poisson limit of the negative binomial distribution. This allows us to rewrite the model in a form amenable to the Pólya-Gamma data augmentation scheme, which yields simple conditionally Gaussian updates for the autoregressive coefficients. Sampling from the approximate posterior is straightforward via Gibbs-type iterations and remains numerically stable even under strong temporal dependence. Using this sampler as a proposal distribution enhances the efficiency of the Metropolis-Hastings algorithm and of adaptive importance sampling. Numerical simulations indicate accurate posterior estimates, high effective sample sizes, and rapidly mixing chains.


💡 Research Summary

The paper addresses the computational bottleneck that has long limited Bayesian inference for Poisson integer‑valued generalized autoregressive conditional heteroskedastic (INGARCH) models. While maximum‑likelihood estimation for these count time‑series models is straightforward, existing Bayesian methods either rely on fixed proposal distributions that perform poorly under strong persistence or require expensive latent‑variable schemes that scale badly with the length of the series. The authors propose a unified, highly efficient posterior‑sampling framework that combines three key ideas: (1) an approximation of the Poisson likelihood by a negative‑binomial distribution whose dispersion parameter (r_t) is chosen so that the negative‑binomial CDF stays within a pre‑specified tolerance of the true Poisson CDF; (2) a Pólya‑Gamma data‑augmentation that turns the (approximate) negative‑binomial likelihood into a Gaussian scale mixture, thereby rendering the conditional posterior of the autoregressive coefficients (\theta) Gaussian; and (3) a state‑dependent Gaussian proposal constructed from the expected value of the Pólya‑Gamma latent variables, which is then corrected by a Metropolis–Hastings (MH) step using the exact Poisson likelihood.
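The first ingredient can be illustrated concretely. The sketch below, which is our own illustration rather than the paper's implementation, picks the smallest dispersion (on a doubling grid) such that the CDF of a negative binomial with mean (\lambda) stays within a tolerance of the Poisson CDF; the support bound and the doubling search are assumptions made for simplicity.

```python
import math

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam), summed directly over the support
    return sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k + 1))

def nb_cdf(k, r, lam):
    # Negative binomial with dispersion r and mean lam:
    # success probability p = r / (r + lam); pmf via log-gamma for stability
    p = r / (r + lam)
    total = 0.0
    for j in range(k + 1):
        logpmf = (math.lgamma(j + r) - math.lgamma(r) - math.lgamma(j + 1)
                  + r * math.log(p) + j * math.log(1 - p))
        total += math.exp(logpmf)
    return total

def smallest_dispersion(lam, tol=1e-3, support=None):
    """Double r until the NB(r, mean=lam) CDF is within tol of the Poisson CDF.

    The doubling grid and the truncated support are simplifying assumptions;
    the paper imposes a uniform bound on the CDF error across all time points.
    """
    if support is None:
        support = int(lam + 10.0 * math.sqrt(lam) + 10.0)
    r = 1.0
    while max(abs(nb_cdf(k, r, lam) - poisson_cdf(k, lam))
              for k in range(support)) > tol:
        r *= 2.0
    return r
```

As the dispersion grows, the negative binomial converges to the Poisson, so the loop terminates for any positive tolerance.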

When the link function is the canonical log, the log‑intensity is linear in the design matrix (D) and the conditional posterior of (\theta) is exactly Normal with mean (\mu) and covariance (V) that depend on the current latent variables (\omega_t). Sampling (\omega_t) directly from a Pólya‑Gamma distribution with shape (r_t+x_t) can be costly because the computational effort grows linearly with (r_t). To avoid this, the authors replace the random (\omega_t) by its conditional expectation (\bar\omega_t) (a first‑order delta method). This yields a deterministic, state‑dependent Gaussian proposal (g(\theta^\ast\mid\theta^{(k-1)})) that adapts automatically to the local curvature of the posterior. The proposal is then accepted or rejected according to the exact Poisson likelihood, guaranteeing that the Markov chain targets the true posterior regardless of the quality of the approximation.
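A minimal sketch of this step, for a scalar coefficient and a generic linear predictor (\psi_t = d_t\theta), is given below. The closed-form Pólya-Gamma mean (E[\omega]=\tfrac{b}{2c}\tanh(c/2)) for (\omega\sim PG(b,c)) is standard; the particular form of (\psi_t), (\kappa_t=(x_t-r_t)/2), and the prior are simplifying assumptions for illustration, not the paper's exact parametrization.

```python
import math

def pg_mean(b, c):
    # E[omega] for omega ~ PolyaGamma(b, c); the limit at c = 0 is b / 4
    if abs(c) < 1e-12:
        return b / 4.0
    return b / (2.0 * c) * math.tanh(c / 2.0)

def gaussian_proposal(theta, x, d, r, prior_var=10.0):
    """State-dependent Gaussian proposal for a scalar coefficient theta.

    x[t]: observed counts, d[t]: scalar design entry, r[t]: NB dispersion.
    psi_t = d[t] * theta plays the role of the tilted log-intensity in the
    PG-augmented NB likelihood (a schematic choice). Replacing the random
    omega_t by its expectation pg_mean(x_t + r_t, psi_t) gives a
    deterministic Gaussian proposal, to be corrected by an MH step.
    """
    precision = 1.0 / prior_var          # Gaussian prior contribution
    mean_num = 0.0
    for xt, dt, rt in zip(x, d, r):
        psi = dt * theta
        omega_bar = pg_mean(xt + rt, psi)  # expected PG latent variable
        kappa = (xt - rt) / 2.0
        precision += omega_bar * dt * dt
        mean_num += kappa * dt
    var = 1.0 / precision
    return mean_num * var, var           # proposal mean and variance
```

Because the proposal adapts to the current state through (\bar\omega_t), its curvature tracks the local curvature of the posterior, which is what drives the high acceptance rates reported later.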

For models that employ a non‑linear softplus link (s_c(x)=c\log(1+e^{x/c})), the log‑intensity is no longer linear in (\theta). The authors linearize the log‑intensity around the current iterate using a first‑order Taylor expansion, (\log\lambda_t(\theta)\approx o_t+J_t^\top\theta), where the gradient (J_t) is computed recursively. This linearization is exact for the log link and provides a highly accurate local approximation for the softplus case. Consequently, the same Pólya‑Gamma augmentation and state‑dependent Gaussian proposal can be employed without modification, extending the method to a broad class of Poisson INGARCH specifications.

Beyond MCMC, the paper develops an adaptive importance sampling (AIS) scheme that reuses the Gaussian proposal but eliminates the accept/reject step. Importance weights are smoothed using Pareto‑smoothed importance sampling (PSIS) to control weight variability and avoid degeneracy. The dispersion parameters (r_t) are not tuned individually; instead, a uniform bound on the relative error between the Poisson and negative‑binomial CDFs is imposed, which automatically determines a common set of (r_t) values that keep the proposal within a prescribed distance from the target posterior across all time points.
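The weight-stabilization idea can be sketched as follows. Note that full PSIS fits a generalized Pareto distribution to the largest weights and replaces them by its quantiles; the sketch below instead uses the simpler truncation rule of truncated importance sampling (capping weights at (\sqrt{S}) times the mean weight) as an illustrative stand-in, together with a standard self-normalized estimator.

```python
import math

def self_normalized_is(log_weights, draws):
    # Self-normalized importance-sampling estimate of E[h(theta)], h = identity
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]   # stabilized in log space
    total = sum(w)
    return sum(wi * xi for wi, xi in zip(w, draws)) / total

def truncate_weights(log_weights):
    """Cap each weight at sqrt(S) times the mean weight.

    A simpler stand-in for Pareto-smoothed importance sampling (PSIS), which
    fits a generalized Pareto distribution to the upper tail of the weights
    and replaces the largest weights by expected order statistics.
    """
    S = len(log_weights)
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    cap = math.sqrt(S) * (sum(w) / S)
    return [min(wi, cap) for wi in w]
```

Either smoothing rule trades a small bias for a large variance reduction when a handful of draws would otherwise dominate the estimate.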

Extensive simulation studies compare the proposed method against conventional fixed‑proposal Metropolis–Hastings and other latent‑variable MCMC algorithms for both log‑link and softplus INGARCH models. Results show a 3–5‑fold increase in effective sample size (ESS), dramatically reduced autocorrelation times, and robust performance even when the autoregressive coefficients imply strong persistence (e.g., (\alpha\approx0.9)). In a real‑world application to crime‑count data—an archetypal over‑dispersed count series—the method yields tighter predictive intervals and more stable parameter estimates than existing Bayesian approaches.

Computationally, each iteration requires a single forward recursion to compute the design matrix and, for the softplus case, the Jacobian terms, both costing (O(np)) operations, and a Gaussian update costing (O(p^2)). The Pólya‑Gamma expectations are obtained analytically, avoiding costly sampling of many Pólya‑Gamma draws. Consequently, the algorithm scales well to series of length (n\approx10^4) or larger, making it suitable for modern high‑frequency count data.
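The forward recursion referred to above can be illustrated for a log-linear INGARCH(1,1) of the form (\log\lambda_t=\omega+\alpha\log\lambda_{t-1}+\beta\log(x_{t-1}+1)); this particular model form and the initial conditions are assumptions made for the sketch, which shows why one pass over the series costs (O(np)).

```python
import math

def forward_design(x, theta, x0=1.0, lam0=1.0):
    """Single O(n p) forward pass for a log-linear INGARCH(1,1) sketch:
    log lambda_t = omega + alpha * log lambda_{t-1} + beta * log(x_{t-1} + 1).

    Returns the intensities lambda_t and the design rows
    d_t = (1, log lambda_{t-1}, log(x_{t-1} + 1)); the specific recursion
    and initial values x0, lam0 are illustrative assumptions.
    """
    omega, alpha, beta = theta
    rows, lams = [], []
    prev_lam, prev_x = lam0, x0
    for xt in x:
        d = (1.0, math.log(prev_lam), math.log(prev_x + 1.0))
        log_lam = omega + alpha * d[1] + beta * d[2]
        lam = math.exp(log_lam)
        rows.append(d)
        lams.append(lam)
        prev_lam, prev_x = lam, xt    # recursion: today feeds tomorrow
    return rows, lams
```

Each time point touches a fixed number of coefficients, so the pass is linear in (n) for fixed (p), and the subsequent Gaussian update only manipulates a (p\times p) matrix.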

In summary, the authors present a novel, extensible framework for Bayesian inference in Poisson INGARCH models that (i) leverages a negative‑binomial approximation to enable Pólya‑Gamma augmentation, (ii) constructs a locally adaptive Gaussian proposal via expected latent variables, (iii) retains exactness through MH correction, (iv) accommodates non‑linear link functions through gradient‑based linearization, and (v) enhances efficiency further with adaptive importance sampling and Pareto smoothing. The combination of these techniques delivers fast‑mixing chains, high ESS, and reliable uncertainty quantification, filling a long‑standing gap in the Bayesian analysis of count time‑series models.

