Bayesian Signal Component Decomposition via Diffusion-within-Gibbs Sampling
In signal processing, the data collected from sensing devices are often a noisy linear superposition of multiple components, and estimating the components of interest constitutes a crucial pre-processing step. In this work, we develop a Bayesian framework for signal component decomposition, which combines Gibbs sampling with plug-and-play (PnP) diffusion priors to draw component samples from the posterior distribution. Unlike many existing methods, our framework supports incorporating model-driven and data-driven prior knowledge into the diffusion prior in a unified manner. Moreover, the proposed posterior sampler allows component priors to be learned separately and flexibly combined without retraining. Under suitable assumptions, the proposed Diffusion-within-Gibbs (DiG) sampler provably produces samples from the posterior distribution. We also show that DiG can be interpreted as an extension of a class of recently proposed diffusion-based samplers, and that, for suitable classes of sensing operators, DiG better exploits the structure of the measurement model. Numerical experiments demonstrate the superior performance of our method over existing approaches.
💡 Research Summary
The paper tackles the classic problem of separating multiple signal components that are linearly mixed and corrupted by noise, a situation that appears in speech separation, communications, biomedical imaging, and many other domains. While traditional model‑driven approaches rely on analytically specified priors (e.g., sparsity, smoothness, low‑rank) and data‑driven deep learning methods learn priors from examples, each of these families has limitations: analytic priors cannot capture the rich statistics of real signals, and learned priors usually require retraining when the number of components or the sensing operator changes. Moreover, recent diffusion‑based posterior samplers have been proposed for inverse problems, but they treat the whole unknown as a single variable and therefore ignore the inherent multi‑component structure of the decomposition problem.
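The measurement model underlying this setting can be sketched as follows: the observation is a noisy linear superposition of several components, each mapped through its own sensing operator. All dimensions and operator choices below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: K = 3 components of length d, observed through
# random linear operators A_k mapping R^d -> R^m (illustrative choice only).
K, d, m = 3, 64, 48
A = [rng.standard_normal((m, d)) / np.sqrt(d) for _ in range(K)]
s = [rng.standard_normal(d) for _ in range(K)]  # unknown components
sigma = 0.1                                     # noise std, assumed known

# Observation: noisy linear superposition y = sum_k A_k s_k + n
y = sum(Ak @ sk for Ak, sk in zip(A, s)) + sigma * rng.standard_normal(m)
print(y.shape)
```

The decomposition task is to recover the individual `s_k` from `y`; when `K * d > m`, as here, the problem is under-determined and the priors carry much of the inferential weight.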
The authors propose a unified Bayesian framework that combines Gibbs sampling with plug‑and‑play (PnP) diffusion priors, called Diffusion‑within‑Gibbs (DiG). The key idea is to assign an independent diffusion model to each component. Each diffusion model encodes a prior that can be purely analytic, purely data‑driven, or a hybrid of both (model‑driven constraints are incorporated into the training of the score‑based network). During inference, DiG iteratively updates each component by sampling from its conditional posterior given the current estimates of the other components and the observation. The conditional posterior is approximated by running the corresponding diffusion model in reverse time, starting from a Gaussian initialization and using the learned score function (implemented as a denoising neural network). Because each component’s prior is learned separately, the method is modular: new components can be added or swapped without retraining the whole system.
The paper provides a rigorous theoretical analysis. Assuming perfectly trained diffusion models (i.e., the learned score equals the true score), the authors prove that the Markov chain generated by DiG is asymptotically consistent: its stationary distribution coincides with the true joint posterior p(s₁,…,s_K | y). The proof builds on the equivalence between the reverse‑time stochastic differential equation of a diffusion model and the conditional Langevin dynamics induced by the Gibbs update. The authors also show that DiG can be viewed as an extension of existing diffusion‑based posterior samplers: those methods correspond to the special case K = 1, whereas DiG explicitly exploits the linear mixing structure, leading to more efficient exploration in high‑dimensional, under‑determined settings.
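The conditional score driving each Gibbs update can be written out explicitly. In the notation below (my reconstruction from the stated measurement model, not a formula quoted from the paper), with $y = \sum_{k} A_k s_k + n$ and $n \sim \mathcal{N}(0, \sigma^2 I)$, Bayes' rule factors the conditional posterior into the component prior and a Gaussian likelihood:

```latex
p(s_k \mid s_{-k}, y) \;\propto\; p(s_k)\,
\mathcal{N}\!\Big(y - \textstyle\sum_{j \neq k} A_j s_j \;;\; A_k s_k,\; \sigma^2 I\Big),
```

so the score used by the conditional Langevin (equivalently, guided reverse-diffusion) update splits into a prior term, supplied by the learned diffusion model, and a data-fidelity term:

```latex
\nabla_{s_k} \log p(s_k \mid s_{-k}, y)
= \nabla_{s_k} \log p(s_k)
+ \frac{1}{\sigma^2} A_k^{\top}\Big(y - \textstyle\sum_{j} A_j s_j\Big).
```

This decomposition makes visible why DiG can exploit the linear mixing structure: only the residual with respect to the other components enters the data term of each update.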
Experimental validation is carried out on both synthetic data and a real‑world cardiac signal extraction task. In synthetic experiments with three components mixed through random linear operators, DiG achieves 3–5 dB higher PSNR than competing diffusion samplers that treat the mixture as a single variable, while using the same amount of training data. In the cardiac example, the authors separate the true ECG waveform from motion‑induced interference. They train a diffusion prior for the ECG using a small set of clean beats (model‑driven periodicity is also injected) and another diffusion prior for the interference using noisy recordings. DiG successfully recovers the ECG with accurate morphology and provides credible intervals, demonstrating uncertainty quantification that is unavailable in deterministic methods. Notably, the method retains high performance even when the training set is reduced by an order of magnitude, highlighting the benefit of the hybrid prior design.
The paper concludes with several promising directions: extending DiG to nonlinear forward models, designing joint diffusion networks that capture dependencies between components, and accelerating the sampler for real‑time applications. Overall, DiG offers a principled, modular, and empirically powerful solution for Bayesian signal component decomposition, bridging the gap between model‑driven theory and data‑driven deep generative models.