Total Variation Sparse Bayesian Learning for Block Sparsity via Majorization-Minimization

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Block sparsity is a widely exploited structure in sparse recovery, offering significant gains when the signal blocks are known. Yet practical signals often exhibit unknown block boundaries and isolated non-zero entries, which challenge traditional approaches. A promising method for handling such complex sparsity patterns is the difference-of-logs total variation (DoL-TV) regularized sparse Bayesian learning (SBL). However, due to the complex form of the DoL-TV term, the resulting optimization problem is hard to solve. This paper develops a new optimization framework for the DoL-TV SBL cost function. By introducing an exponential reparameterization of the SBL hyperparameters, we reveal a novel structure that admits a majorization-minimization formulation and naturally extends to unknown noise variance estimation. Sparse recovery results on both synthetic data and extended-source direction-of-arrival estimation demonstrate improved accuracy and runtime performance compared to benchmark methods.


💡 Research Summary

The paper addresses the challenging problem of recovering block‑sparse signals when block boundaries are unknown and isolated non‑zero entries may be present. Traditional block‑sparse recovery methods (e.g., block‑SBL, PCSBL) assume known block partitions and perform poorly when these assumptions are violated, such as in direction‑of‑arrival (DOA) estimation with both point and extended sources or in automotive occupancy‑grid mapping where clusters vary in size and isolated obstacles appear.

A promising but computationally difficult approach is to combine Sparse Bayesian Learning (SBL) with a difference‑of‑logs total variation (DoL‑TV) regularizer. The DoL‑TV term promotes piecewise‑constant hyperparameters, thus encouraging block structures, while still allowing isolated entries. However, the DoL‑TV term is non‑convex in the original hyperparameters γ and noise variance λ, and existing optimization schemes (e.g., EM) either require a known noise variance or rely on alternating updates that are inefficient and lack convergence guarantees.
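To make the regularizer concrete, here is a minimal sketch of a DoL-TV penalty as described above: the sum of absolute differences of the log-hyperparameters, which equals τ‖D z‖₁ under the substitution z = log γ with D a first-difference matrix. The function name and the example hyperparameter values are illustrative, not taken from the paper.

```python
import numpy as np

def dol_tv(gamma, tau=1.0):
    """Difference-of-logs TV penalty on SBL hyperparameters (sketch).

    Computes tau * sum_i |log(gamma[i+1]) - log(gamma[i])|, i.e.
    tau * ||D z||_1 with z = log(gamma) and D the first-difference matrix.
    """
    z = np.log(gamma)
    return tau * np.sum(np.abs(np.diff(z)))

# Piecewise-constant hyperparameters incur zero penalty inside a block,
# so the regularizer encourages block structure while still permitting
# isolated spikes (each spike costs only two jump terms).
gamma_block = np.array([1e-6, 1e-6, 5.0, 5.0, 5.0, 1e-6])
penalty = dol_tv(gamma_block)  # two jumps: up into and down out of the block
```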

The authors propose a novel optimization framework that overcomes these limitations. First, they re‑parameterize the hyperparameters and noise variance exponentially: γ = exp(z) and λ = exp(β). This transformation reveals that the log‑determinant term log|e^β I + H diag(e^z) Hᵀ| is convex in (z,β) (proved in Proposition 1), and that the regularizer τ‖D z‖₁ and the prior term R(z,β) are also convex. The only remaining non‑convex component is the trace term tr(Y H Σ⁻¹Yᵀ). By introducing an auxiliary matrix Θ, the authors derive an upper bound for this term (inequality (4)) that becomes tight when Θ is set to the closed‑form expression (5). Consequently, each majorization‑minimization (MM) iteration constructs a convex surrogate function g(z,β|Θ) that majorizes the original cost.
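The convexity claim at the heart of the reparameterization can be illustrated numerically. The sketch below evaluates the log-determinant term log|e^β I + H diag(e^z) Hᵀ| and checks the midpoint convexity inequality on random points; this is an illustration of Proposition 1's statement, not a proof, and the dimensions and random data are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 6, 12
H = rng.standard_normal((M, N))

def logdet_term(z, beta):
    """log|e^beta I + H diag(e^z) H^T|, the term stated to be convex in (z, beta)."""
    S = np.exp(beta) * np.eye(M) + H @ np.diag(np.exp(z)) @ H.T
    return np.linalg.slogdet(S)[1]

# Numerical midpoint check of joint convexity at two random points:
# f((x1 + x2)/2) <= (f(x1) + f(x2))/2 must hold if f is convex.
z1, z2 = rng.standard_normal(N), rng.standard_normal(N)
b1, b2 = -1.0, 0.5
mid = logdet_term(0.5 * (z1 + z2), 0.5 * (b1 + b2))
avg = 0.5 * logdet_term(z1, b1) + 0.5 * logdet_term(z2, b2)
```

This works because e^β I + H diag(e^z) Hᵀ is a sum of exponentially weighted PSD rank-one terms, and the log-determinant of such a sum is known to be convex in the exponents.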

To solve the convex subproblem at each MM step, the authors employ the Alternating Direction Method of Multipliers (ADMM). ADMM splits the problem by introducing an auxiliary variable u = D z to isolate the non‑differentiable ℓ₁ TV term, and a dual variable d to enforce the constraint u = D z. The ADMM updates consist of: (1) jointly updating (z,β) via a standard convex solver (the subproblem is smooth after fixing u and d); (2) updating u by soft‑thresholding (closed‑form); (3) updating the dual variable d. This inner ADMM loop is executed for a modest number of iterations (often just one) to keep computational cost low.
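The three ADMM updates above can be sketched as follows. The soft-thresholding u-update is the standard closed-form prox of the ℓ₁ norm; the smooth (z, β) subproblem is abstracted behind a `solve_zb` callable, which is a hypothetical placeholder (the paper hands this step to a convex solver), so only the loop structure is shown.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Prox of kappa * ||.||_1 -- the closed-form u-update (step 2)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_tv_step(z, u, d, D, tau, rho, solve_zb):
    """One inner ADMM sweep for min f(z) + tau*||u||_1  s.t.  u = D z.

    `solve_zb` is a stand-in for the smooth (z, beta) subproblem solver;
    its interface here is assumed for illustration, not from the paper.
    """
    z = solve_zb(u - d)                        # (1) smooth convex subproblem
    u = soft_threshold(D @ z + d, tau / rho)   # (2) closed-form shrinkage
    d = d + D @ z - u                          # (3) dual-variable update
    return z, u, d
```

Running only one inner sweep per MM iteration, as the paper suggests, keeps the per-iteration cost close to that of the surrogate construction itself.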

A key advantage of the exponential re‑parameterization is that β remains a free variable, allowing simultaneous estimation of the noise variance λ = e^β. Thus the method works under realistic conditions where the noise level is unknown, unlike previous EM‑DoL‑TV SBL which requires λ as an input.

The authors evaluate the proposed Exp‑DoL‑SBL (with and without known noise variance) against several baselines: standard SBL, PCSBL, EM‑DoL‑TV SBL, and Adaptive‑TV SBL. Two experimental settings are considered:

  1. Synthetic block‑sparse signals – M=40 measurements, N=300 unknowns, L=5 snapshots, three contiguous blocks of length 5 plus five isolated non‑zeros. SNR varies from 5 dB to 30 dB. Metrics include normalized squared error (NSE), F1‑score for support recovery, and runtime. Exp‑DoL‑SBL consistently achieves lower NSE and higher F1‑score than all baselines, even when the noise variance is estimated jointly. Runtime is comparable or better across most SNRs, with only a slight increase at the lowest SNR.
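The synthetic setting in item 1 can be reproduced schematically. The sketch below draws a signal with the stated dimensions (M=40, N=300, L=5), three contiguous length-5 blocks, and five isolated non-zeros; the specific block positions and the Gaussian dictionary are illustrative assumptions, since the paper's exact draw is not given, and noise would be added according to the target SNR.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L = 40, 300, 5  # measurements, unknowns, snapshots (from the paper)

# Support: three contiguous length-5 blocks plus five isolated non-zeros.
# Block start positions are illustrative, not the paper's exact draw.
support = np.zeros(N, dtype=bool)
for start in (30, 120, 210):
    support[start:start + 5] = True
support[rng.choice(np.flatnonzero(~support), size=5, replace=False)] = True

# Gaussian source amplitudes on the support, shared across L snapshots.
X = np.zeros((N, L))
X[support] = rng.standard_normal((int(support.sum()), L))

# Random Gaussian dictionary; Y holds the noiseless measurements
# (additive noise would then be scaled to the target SNR, 5-30 dB).
H = rng.standard_normal((M, N)) / np.sqrt(M)
Y = H @ X
```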

  2. Extended‑source DOA estimation – A uniform linear array with 20 sensors, a grid of 200 angles, 40 snapshots, and two spatially extended sources covering

