Performance Analysis of Signal Detection using Quantized Received Signals of Linear Vector Channel
This paper presents a performance analysis, in the large system limit, of optimal signal detection using quantized received signals of a linear vector channel, which generalizes code-division multiple-access (CDMA) and multiple-input multiple-output (MIMO) channels. In this limit, the dimensions of the channel input and output are both taken to infinity while their ratio remains fixed. The optimal detector is one that uses the true channel model, the true distribution of the input signals, and perfect knowledge of the quantization. Applying the replica method developed in statistical mechanics, we show that, in the case of a noiseless channel, the optimal detector achieves perfect detection under certain conditions, and that for a noisy channel its detection ability decreases monotonically as the quantization step size increases.
💡 Research Summary
The paper investigates the theoretical performance of an optimal detector that operates on quantized received signals in a linear vector channel, a model that encompasses both CDMA and MIMO systems. The authors consider a K‑input, N‑output channel described by y = Hx₀ + σ₀ν, where x₀ ∈ {±1}ᴷ is the transmitted binary vector, H is an N×K random matrix with i.i.d. Gaussian entries of zero mean and variance 1/N, ν is standard Gaussian noise, and σ₀² is the noise variance. At the receiver, each analog component yₙ is quantized by an A/D converter with step size d, producing integer outputs nₙ that satisfy (nₙ−½)d < yₙ < (nₙ+½)d. The conditional probability of a quantized output given the transmitted vector and the channel matrix is expressed through Q‑functions (Eq. 3), which fully characterize the “quantization channel.”
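The channel and quantizer described above are simple to simulate. The sketch below, a minimal illustration with function names of my own choosing, draws one realization of y = Hx₀ + σ₀ν, maps each component to its bin index nₙ satisfying (nₙ−½)d < yₙ < (nₙ+½)d, and evaluates the per-component bin probability via differences of Q‑functions as in Eq. 3 (assuming σ₀ > 0):

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * erfc(x / sqrt(2.0))

def simulate_quantized_channel(K, N, d, sigma0, rng):
    """Draw one realization of the quantized linear vector channel."""
    x0 = rng.choice([-1.0, 1.0], size=K)             # binary input x0 in {±1}^K
    H = rng.normal(0.0, 1.0 / np.sqrt(N), (N, K))    # i.i.d. entries, variance 1/N
    y = H @ x0 + sigma0 * rng.normal(size=N)         # analog received signal
    n = np.round(y / d).astype(int)                  # bin index: (n-1/2)d < y < (n+1/2)d
    return x0, H, y, n

def quantized_likelihood(n, s, d, sigma0):
    """P(bin index n | noiseless value s): chance that s + sigma0*nu lands in bin n.

    Difference of two Q-functions, as in the paper's Eq. 3; requires sigma0 > 0.
    """
    lo = (n - 0.5) * d
    hi = (n + 0.5) * d
    return Q((lo - s) / sigma0) - Q((hi - s) / sigma0)
```

Summing `quantized_likelihood` over all bin indices n returns 1, since the bins partition the real line.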
The optimal detector is defined as one that knows the true channel matrix, the true prior distribution P₀(x)=2⁻ᴷ, and the exact quantization model. It computes the posterior P(x|n,H) ∝ P(n|Hx)P(x) and makes a Maximizer of Posterior Marginals (MPM) decision for each component. Because exact marginalization is computationally prohibitive, the authors resort to the replica method from statistical physics to evaluate the average bit error probability in the large‑system limit (K,N → ∞ with β = K/N fixed).
Using replica symmetry, the error probability is shown to take the form P_b = Q(√E), where the effective signal‑to‑noise parameter E and an auxiliary order parameter m satisfy coupled fixed‑point equations (7) and (8). Eq. (7) is a tanh‑type self‑consistency condition for m, while Eq. (8) involves the first derivative of the averaged quantized likelihood ρ̄₀ and integrates over the quantization index n. Depending on the values of σ₀, d, and β, these equations may admit up to three solutions, classified as “good,” “intermediate,” and “bad” based on the resulting error probability. The physically relevant solution is identified by minimizing a free‑energy functional F (Eq. 10).
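Since the paper's Eq. (8) for the quantized channel is not reproduced here, the sketch below instead iterates the structurally analogous classical fixed point for the unquantized CDMA channel (Tanaka's replica-symmetric equations, m = ∫Dz tanh(E + √E z) with E = 1/(σ² + β(1−m))), purely to illustrate how such tanh-type self-consistency conditions are solved numerically; in the quantized setting E would instead come from the likelihood-derivative expression in Eq. (8):

```python
import numpy as np
from math import erfc, sqrt

def solve_fixed_point(beta, sigma2, iters=200):
    """Iterate m = ∫Dz tanh(E + √E z),  E = 1/(σ² + β(1−m)).

    Classical unquantized-CDMA replica fixed point, used here only to
    illustrate the structure of the paper's Eqs. (7)-(8).  The Gaussian
    measure Dz is handled by Gauss-Hermite quadrature.
    """
    z, w = np.polynomial.hermite_e.hermegauss(61)   # probabilists' Hermite nodes
    w = w / np.sqrt(2.0 * np.pi)                    # weights for ∫ Dz f(z)
    m = 0.0
    for _ in range(iters):
        E = 1.0 / (sigma2 + beta * (1.0 - m))
        m = float(np.sum(w * np.tanh(E + np.sqrt(E) * z)))
    return m, E

def bit_error_probability(E):
    """P_b = Q(√E), with Q the Gaussian tail function."""
    return 0.5 * erfc(sqrt(E / 2.0))
```

Note that plain iteration from m = 0 converges to one solution of the fixed point; in regimes with three coexisting solutions (“good,” “intermediate,” “bad”), selecting the physical one requires comparing the free energy F, as the paper does via Eq. (10).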
In the noiseless case (σ₀ = 0), the equations admit the solution m = 1, E → ∞, which yields P_b = 0 for any β provided the quantization step is sufficiently small relative to √β. This indicates that, under appropriate quantization resolution, the detector can achieve perfect recovery despite the presence of quantization. The bifurcation diagram (Fig. 1) shows a critical value of d/√β below which only the perfect‑detection solution exists; above this threshold multiple solutions coexist.
For noisy channels (σ₀ > 0), perfect detection is impossible because quantization adds an extra source of distortion. The analysis shows that the effective parameter E decreases monotonically as the quantization step d increases, leading to a monotonic increase in the error probability. Figures 3(a) and 3(b) illustrate this behavior for system loads β = 1.0 and β = 1.8, respectively, across a range of Eb/N₀ values. The curves also reveal a region where the “bad” solution becomes the free‑energy minimum, implying that low‑complexity algorithms such as belief propagation would suffer a performance collapse in that regime.
The main conclusions are twofold: (1) In the absence of channel noise, quantization does not degrade detection performance as long as the step size satisfies d/√β < (critical value), allowing the optimal detector to recover the transmitted bits perfectly. (2) When channel noise is present, detection performance degrades smoothly with larger quantization steps; thus, finer quantization (smaller d) is beneficial, especially for low‑SNR scenarios.
These results provide concrete guidelines for the design of A/D converters and digital front‑ends in large‑scale wireless systems. By linking the quantization resolution to the system load β and the operating SNR, engineers can predict the trade‑off between hardware complexity (number of quantization bits) and achievable detection performance. Moreover, the successful application of the replica method demonstrates its utility for analyzing non‑linear operations such as quantization in high‑dimensional communication models.