Shannon Theoretic Limits on Noisy Compressive Sampling
In this paper, we study the number of measurements required to recover a sparse signal in ${\mathbb C}^M$ with $L$ non-zero coefficients from compressed samples in the presence of noise. For a number of different recovery criteria, we prove that $O(L)$ (an asymptotically linear multiple of $L$) measurements are necessary and sufficient if $L$ grows linearly as a function of $M$. This improves on the existing literature that is mostly focused on variants of a specific recovery algorithm based on convex programming, for which $O(L\log(M-L))$ measurements are required. We also show that $O(L\log(M-L))$ measurements are required in the sublinear regime ($L = o(M)$).
💡 Research Summary
This paper investigates the fundamental limits on the number of linear measurements required to recover a sparse complex‑valued signal in the presence of additive Gaussian noise. The authors consider a signal x ∈ ℂ^M with exactly L non‑zero entries (‖x‖₀ = L) and focus on the “linear sparsity” regime where L ≈ M/β for a constant β > 2. Three recovery criteria are examined: (1) exact support recovery (0‑1 loss), (2) partial support recovery (at least a fraction α of the true support is identified), and (3) energy recovery (the recovered coefficients capture at least a (1 − γ) fraction of the total signal energy).
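As a concrete (and entirely hypothetical) illustration, the three recovery criteria can be phrased as checks on an estimated sparse vector x̂; the function names and the way the checks are parameterized below are ours, not the paper's:

```python
import numpy as np

def exact_support_error(x_true, x_hat):
    """Metric 1 (0-1 loss): error is 1 unless the support is recovered exactly."""
    return float(set(np.flatnonzero(x_true)) != set(np.flatnonzero(x_hat)))

def partial_support_ok(x_true, x_hat, alpha):
    """Metric 2: at least a fraction alpha of the true support is identified."""
    I = set(np.flatnonzero(x_true))
    J = set(np.flatnonzero(x_hat))
    return len(I & J) >= alpha * len(I)

def energy_ok(x_true, x_hat, gamma):
    """Metric 3: the recovered support captures >= (1 - gamma) of the energy."""
    J = np.flatnonzero(x_hat)
    return np.sum(np.abs(x_true[J]) ** 2) >= (1 - gamma) * np.sum(np.abs(x_true) ** 2)
```

Note that exact support recovery implies the other two (with any α ≤ 1 and γ ≥ 0), so Metrics 2 and 3 are strictly weaker targets.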
The measurement model is y = Ax + n, where A is an N × M matrix with i.i.d. complex Gaussian entries (mean 0, variance 1) and n ∼ 𝒩𝒞(0, ν²I_N) is circularly‑symmetric Gaussian noise. The authors adopt an information‑theoretic decoder based on joint typicality: for each candidate support set J of size L, the decoder projects y onto the orthogonal complement of the column span of A_J and checks whether the residual energy ‖Π⊥_{A_J} y‖₂² is within a small deviation δ of its noise‑only expected value (N − L)ν², which is ≈ Nν² when N ≫ L. If exactly one J satisfies this condition, it is declared the support estimate. Although this decoder is computationally infeasible, it enables sharp achievability and converse results.
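A minimal sketch of such a decoder follows, with toy dimensions of our own choosing. The exhaustive loop over all (M choose L) candidate supports is precisely what makes the decoder computationally infeasible at any realistic scale:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
M, L, N = 8, 2, 6
nu2 = 0.01          # noise variance nu^2
delta = 0.05        # typicality slack delta

# Measurement matrix A with i.i.d. complex Gaussian CN(0, 1) entries.
A = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

# Sparse signal supported on I with unit-magnitude non-zero coefficients.
I = (1, 5)
x = np.zeros(M, dtype=complex)
x[list(I)] = 1.0

# Noisy compressed samples y = A x + n with n ~ CN(0, nu^2 I_N).
n = np.sqrt(nu2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = A @ x + n

def residual_energy(J):
    """Energy of y outside the column span of A_J, i.e. the projection residual."""
    AJ = A[:, sorted(J)]
    coef, *_ = np.linalg.lstsq(AJ, y, rcond=None)
    r = y - AJ @ coef
    return np.vdot(r, r).real

# Joint-typicality check: keep supports whose per-sample residual energy is
# within delta of the noise-only value (N - L) * nu^2 / N (~ nu^2 when N >> L).
typical = [J for J in itertools.combinations(range(M), L)
           if abs(residual_energy(J) / N - (N - L) * nu2 / N) < delta]
print(typical)
```

With overwhelming probability only the true support passes the check: for any wrong J, the residual carries an extra energy term of order (N − |J|) per missed unit-energy coefficient, which dwarfs the δ window.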
Key technical tools include chi‑square tail bounds for the residual energy and careful handling of the minimum non‑zero amplitude μ = min_{i∈supp(x)}|x_i|. Lemma 3.3 shows that for the true support I, the residual behaves like pure noise, yielding a joint‑typicality probability that approaches 1 exponentially fast in N. For any incorrect support J with |I∩J| = K < L, the residual contains a deterministic component proportional to the energy of the missed coefficients, causing the joint‑typicality probability to decay exponentially in N·Δ, where Δ reflects the gap between signal energy and noise variance.
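The dichotomy behind these estimates can be made concrete with a quick Monte Carlo (toy parameters of our own choosing): under the true support I the residual energy concentrates around the noise-only value (N − L)ν², whereas a wrong support J that misses one index additionally picks up the missed coefficient's energy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a small simulation.
M, L, N, nu2, trials = 16, 4, 32, 0.1, 200
I = list(range(L))          # true support
J = list(range(1, L + 1))   # wrong support sharing K = L - 1 indices with I

def residual(A, y, S):
    """Residual energy of y after projecting onto the column span of A[:, S]."""
    AS = A[:, S]
    coef, *_ = np.linalg.lstsq(AS, y, rcond=None)
    r = y - AS @ coef
    return np.vdot(r, r).real

res_I, res_J = [], []
for _ in range(trials):
    A = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    x = np.zeros(M, dtype=complex)
    x[I] = 1.0
    n = np.sqrt(nu2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    y = A @ x + n
    res_I.append(residual(A, y, I))
    res_J.append(residual(A, y, J))

# True support: pure-noise residual, mean ~ (N - L) * nu2 = 2.8 here.
# Wrong support: inflated by the missed unit-energy coefficient, mean ~ 30.8.
print(np.mean(res_I), np.mean(res_J))
```

The gap between the two averages is exactly the Δ that drives the exponential decay of the typicality probability for incorrect supports.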
From these probabilistic estimates the authors derive the following main theorems (constants C_i depend only on β, the noise variance ν, and the performance parameters α, γ):
- Theorem 2.1 (Achievability, Metric 1): If Lμ⁴ log L → ∞ and N > C₁L, exact support recovery is asymptotically reliable. This condition forces the total signal power P = ‖x‖₂² to grow with N.
- Theorem 2.3 (Converse, Metric 1): If N < C₂L log P, no decoder can achieve vanishing error probability for exact support recovery.
- Theorem 2.5 (Achievability, Metric 2): With bounded μ and constant total power P, partial support recovery is reliable whenever N > C₃L.
- Theorem 2.7 (Converse, Metric 2): If N < C₄L, partial support recovery is impossible.
- Theorem 2.9 (Achievability, Metric 3): Assuming the non‑zero coefficients decay at a common rate and P is constant, energy recovery succeeds for N > C₅L.
- Theorem 2.11 (Converse, Metric 3): If N < C₆L, energy recovery cannot be guaranteed.
Corollaries accompanying each theorem show that for a fixed Gaussian measurement matrix, the error probability decays exponentially in M whenever the measurement count exceeds the respective constant multiple of L, and conversely it approaches 1 exponentially fast when N falls below the threshold.
In the sub‑linear sparsity regime (L = o(M)), the authors prove that O(L log(M − L)) measurements are both necessary and sufficient, aligning with the best known results for practical L₁‑based algorithms (e.g., LASSO). Thus, the paper establishes a clear dichotomy: in the linear sparsity regime, the information‑theoretic limit is Θ(L), whereas in the sub‑linear regime the logarithmic factor reappears.
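This dichotomy can be sanity-checked with a back-of-the-envelope calculation; the constant C and the sparsity choices below are arbitrary placeholders for the paper's regime-dependent constants C_i:

```python
import math

C = 2  # placeholder constant; the paper's C_i depend on beta, nu, alpha, gamma

for M in (10**3, 10**5):
    L_lin = M // 4                           # linear regime: L = M / beta, beta = 4
    L_sub = math.isqrt(M)                    # sublinear regime: L = sqrt(M) = o(M)
    n_lin = C * L_lin                        # Theta(L) measurements suffice
    n_sub = C * L_sub * math.log(M - L_sub)  # Theta(L log(M - L)) measurements
    print(M, n_lin, round(n_sub))
```

The point of the comparison is only that the logarithmic factor is genuinely absent in the linear regime, not that these particular constants carry any meaning.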
The significance of these results lies in separating algorithm‑independent limits from the performance of existing convex‑programming methods, which typically require O(L log(M − L)) measurements. By demonstrating that O(L) measurements suffice in principle, the work highlights a substantial gap between what is theoretically possible and what is currently achievable with tractable algorithms. Moreover, the analysis reveals that the power requirement differs across metrics: exact support recovery demands that the signal power grow with N, while partial support and energy recovery can be achieved with bounded power.
Overall, the paper provides a rigorous Shannon‑theoretic foundation for noisy compressive sampling, clarifies the measurement complexity in different sparsity regimes, and sets a benchmark for future algorithmic developments aiming to close the gap between practical methods and the fundamental limits.