Delay Asymptotics with Retransmissions and Incremental Redundancy Codes over Erasure Channels
Recent studies have shown that retransmissions can cause heavy-tailed transmission delays even when packet sizes are light-tailed. Moreover, the impact of heavy-tailed delays persists even when packet sizes are upper bounded. The key question we study in this paper is how the use of coding techniques to transmit information, together with different system configurations, affects the distribution of delay. To investigate this problem, we model the underlying channel as a Markov-modulated binary erasure channel, where transmitted bits are either received successfully or erased. Erasure codes are used to encode information prior to transmission, which ensures that receiving a fixed fraction of the bits in a codeword suffices for successful decoding. We use incremental redundancy codes, where the codeword is divided into trunks that are transmitted one at a time, providing incremental redundancy to the receiver until the information is recovered. We characterize the distribution of delay under two scenarios: (I) the decoder uses memory to cache all previously received bits, and (II) the decoder does not use memory, so received bits are discarded whenever the corresponding information cannot be decoded. In both cases, we consider codeword lengths with infinite and finite support. From a theoretical perspective, our results provide a benchmark to quantify the tradeoff between system complexity and the distribution of delay.
💡 Research Summary
This paper investigates how coding and receiver memory affect the distribution of transmission delays over a Markov‑modulated binary erasure channel. The authors model the channel as a slotted system where each transmitted bit is either received correctly (state 1) or erased (state 0), with the current state depending on the previous k states, thus capturing temporal correlation. The long‑term success probability γ (the channel capacity) is derived from the stationary distribution of the underlying Markov chain.
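The channel-state construction above can be sketched numerically. A minimal example, assuming the simplest case k = 1 (a two-state Gilbert-style chain) with an illustrative transition matrix not taken from the paper: the long-run success probability γ is read off the stationary distribution of the chain.

```python
import numpy as np

# Minimal sketch of the channel model for k = 1: the current bit's fate
# depends only on the previous bit. State 1 = bit received, state 0 = erased.
# The transition probabilities below are illustrative, not from the paper.
Pi = np.array([[0.6, 0.4],   # P(next=0 | cur=0), P(next=1 | cur=0)
               [0.2, 0.8]])  # P(next=0 | cur=1), P(next=1 | cur=1)

# Stationary distribution: left eigenvector of Pi for eigenvalue 1,
# normalized to sum to one.
evals, evecs = np.linalg.eig(Pi.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

gamma = pi[1]  # long-run fraction of successfully received bits
print(f"stationary distribution = {pi}, gamma = {gamma:.4f}")
```

For this matrix, balance gives π = (1/3, 2/3), so γ = 2/3: in the long run two thirds of transmitted bits get through.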
Information packets of length l symbols are encoded with an erasure code of rate β (0 < β < 1), producing a codeword of length L_c = l/β bits. The codeword can be split into r equal‑size trunks; each trunk is transmitted sequentially (incremental redundancy). Two receiver models are considered: (I) a memory‑enabled decoder that stores all successfully received bits across transmissions, and (II) a memory‑less decoder that discards bits if decoding fails, forcing a full retransmission of the entire codeword.
The paper derives delay expressions for both models. For the memory‑enabled case, the total delay T_m(r) equals the number of trunk transmissions N_m(r) times the trunk length L_c/r. Using large‑deviation theory, a rate function Λ_n(β, Π) is defined, where Π is the Markov transition matrix. The authors prove that when the codeword length has an exponential tail (infinite support with decay rate λ), the delay is always light‑tailed. The exponential decay rate of the tail is min{Λ_o1, λ} for a fixed‑rate code (r = 1) and min{Λ_o2, Λ_o3} for r > 1, where Λ_o1, Λ_o2 depend on β and Π, and Λ_o3 depends on the relation between β and γ. Increasing the number of trunks r improves the decay rate, showing the advantage of incremental redundancy over plain repetition.
When the codeword length is bounded (finite support b), the delay still exhibits a light-tailed "main body" whose decay rate mirrors the infinite-support case, and the extent of this main body scales linearly with b (up to ≈ n_o · b). Thus, even with MTU limits, the benefits of memory and incremental redundancy persist.
For the memory‑less decoder, the optimal strategy is to send the whole codeword at once. The analysis reveals a threshold phenomenon: if the code rate β exceeds the channel capacity γ, the delay tail becomes heavy (power‑law) with exponent λ·Λ_1(β, Π); if β < γ, the tail remains exponential with decay rate min{λ, Λ_1(β, Π)}. Hence, selecting a code rate below the channel capacity eliminates heavy‑tailed delays even without receiver memory.
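The threshold phenomenon can be made concrete with a small numerical check. In the memoryless model, each transmission of an L-bit codeword succeeds only if at least βL bits survive, so the expected number of attempts is the reciprocal of that per-attempt success probability. The sketch below uses an i.i.d. erasure channel with per-bit success probability γ as a simplified stand-in for the Markov channel, with γ = 2/3 as an illustrative value:

```python
from math import comb

def per_attempt_success(L, beta, gamma):
    """P(at least beta*L of L bits survive) on an i.i.d. erasure channel
    with per-bit success probability gamma (illustrative simplification
    of the Markov-modulated channel)."""
    need = int(beta * L)
    return sum(comb(L, k) * gamma**k * (1 - gamma)**(L - k)
               for k in range(need, L + 1))

gamma = 2 / 3
for L in (50, 100, 200):
    p_below = per_attempt_success(L, 0.5, gamma)  # beta < gamma
    p_above = per_attempt_success(L, 0.8, gamma)  # beta > gamma
    print(f"L={L:4d}  beta=0.5: E[attempts] ~ {1/p_below:.2f}  "
          f"beta=0.8: E[attempts] ~ {1/p_above:.3g}")
```

For β < γ the expected number of retransmissions stays near 1 as L grows, while for β > γ it blows up exponentially in L. When L itself has an exponentially decaying tail, this exponential-versus-exponential race is exactly what produces a power-law delay tail in the β > γ regime.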
Numerical simulations confirm the theoretical results, illustrating how (i) memory usage always yields light‑tailed delays, (ii) incremental redundancy (larger r) further reduces delay, and (iii) the β vs γ threshold governs the presence of heavy tails in the memory‑less case.
Overall, the work provides a rigorous benchmark for the trade‑off between system complexity (memory, number of redundancy increments) and delay performance. It shows that modest redundancy (β < γ) and/or receiver memory can dramatically mitigate the heavy‑tailed delays that plague traditional retransmission protocols, offering valuable design guidance for low‑latency wireless networks, IoT devices, and any application where tail latency is critical.