Bayesian Linear Models: A compact general set of results

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

I present all the details of calculating the posterior distribution under the conjugate Normal-Gamma prior in Bayesian Linear Models (BLM), including correlated observations, prediction, model selection, and comments on efficient numerical implementation. A Python implementation is also presented. These results are available in many books and texts, but, I believe, a compact, general, and simple presentation is always welcome and not always easy to find. Since correlated observations are included, the results may also be useful for time series analysis and spatial statistics. Other particular cases presented include regression, Gaussian processes, and Bayesian Dynamic Models.


💡 Research Summary

The paper presents a unified and compact treatment of Bayesian linear models (BLM) based on the Normal‑Gamma conjugate prior. Starting from the general linear observation model Y = Xθ + ε with ε ∼ N(0, λ⁻¹Σ), where Σ is a known n × n covariance matrix that may encode arbitrary correlations among observations, the author derives the full joint posterior distribution of the regression coefficients θ and the precision λ. The prior is specified as θ | λ ∼ N(θ₀, (λA₀)⁻¹) and λ ∼ Ga(α₀, β₀). By completing the square in the exponent, the posterior is shown to retain the Normal‑Gamma form with updated parameters: Aₙ = A₀ + XᵀΣ⁻¹X, θₙ = Aₙ⁻¹(A₀θ₀ + XᵀΣ⁻¹y), αₙ = α₀ + n/2, and βₙ = β₀ + ½(yᵀΣ⁻¹y + θ₀ᵀA₀θ₀ − θₙᵀAₙθₙ).
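The update rules above can be sketched directly in NumPy. This is a minimal illustration of the standard conjugate Normal-Gamma update, not the paper's own implementation; the function name and argument layout are assumptions for this example.

```python
import numpy as np

def normal_gamma_posterior(X, y, Sigma, theta0, A0, alpha0, beta0):
    """Conjugate Normal-Gamma update for Y = X @ theta + eps,
    eps ~ N(0, Sigma / lam), with prior theta | lam ~ N(theta0, (lam * A0)^-1)
    and lam ~ Ga(alpha0, beta0). (Illustrative sketch, not the paper's code.)"""
    n = len(y)
    Sigma_inv = np.linalg.inv(Sigma)          # for large n, prefer a Cholesky solve
    An = A0 + X.T @ Sigma_inv @ X             # posterior precision scale of theta
    thetan = np.linalg.solve(An, A0 @ theta0 + X.T @ Sigma_inv @ y)
    alphan = alpha0 + n / 2                   # Gamma shape update
    # Gamma rate update from completing the square in the exponent
    betan = beta0 + 0.5 * (y @ Sigma_inv @ y + theta0 @ A0 @ theta0
                           - thetan @ An @ thetan)
    return thetan, An, alphan, betan
```

With a nearly flat prior (small A0) and Sigma = I, thetan approaches the ordinary least-squares estimate, which is a quick sanity check for the formulas.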

