ARQ for Network Coding


Authors: Jay Kumar Sundararajan, Devavrat Shah, Muriel Médard

Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Email: {jaykumar,devavrat,medard}@mit.edu

Abstract—A new coding and queue management algorithm is proposed for communication networks that employ linear network coding. The algorithm has the feature that the encoding process is truly online, as opposed to a block-by-block approach. The setup assumes a packet erasure broadcast channel with stochastic arrivals and full feedback, but the proposed scheme is potentially applicable to more general lossy networks with link-by-link feedback. The algorithm guarantees that the physical queue size at the sender tracks the backlog in degrees of freedom (also called the virtual queue size). The new notion of a node "seeing" a packet is introduced. In terms of this idea, our algorithm may be viewed as a natural extension of ARQ schemes to coded networks. Our approach, known as the drop-when-seen algorithm, is compared with a baseline queuing approach called drop-when-decoded. It is shown that the expected queue size for our approach is O(1/(1 − ρ)), as opposed to Ω(1/(1 − ρ)²) for the baseline approach, where ρ is the load factor.

I. INTRODUCTION

Digital fountain codes ([1], [2]) are a well-known solution to the problem of reliable communication over a packet erasure channel. They have low complexity and require no feedback, except to signal successful decoding of a block. However, fountain codes do not extend readily to a network setting. Consider a two-link tandem network. If the middle node applies a fountain code to the coded packets it has received so far, the overall code from the sender to the receiver will not, in general, have the properties of a fountain code. In this sense, the fountain code approach is not composable across links.
A decode-and-re-encode scheme will be sub-optimal in terms of delay, as pointed out by [3]. In comparison, the random linear network coding solution of [4] is composable because it removes the need for decoding at intermediate nodes. This solution ensures that, with high probability, the transmitted packet will have the innovation guarantee property, i.e., it will bring new information to every receiver, except when the receiver already knows as much as the sender. Thus, every successful reception brings a unit of new information. This scheme is shown to achieve capacity for the case of a multicast connection.

(This work is supported by the DARPA ITMANET grant and by the NSF grant CNS-0627021 (NeTS:NBD: XORs in the Air: Practical Wireless Network Coding).)

However, both fountain codes and random linear network coding have the problem of decoding delay. Both schemes are block-based. While this works for a file download setting, many applications today need to broadcast a continuous stream of packets in real time. The above schemes would segment the stream into blocks (also called generations) and process one block at a time. If playback can begin only after receiving a full block, then high throughput would require a large delay. This raises an interesting question: can playback begin even before the full block is received? In general, playback is possible up to the point at which all packets have been recovered, which we call the front of contiguous knowledge. Traditionally, no incentive is placed on decoding a part of the message using a part of the codeword. In a streaming application, however, decoding older packets earlier reduces delay. Performance depends not only on how much data is transferred in a given time, but also on which part of the data. In other words, we are more interested in packet delay than block delay.
These issues have been studied by [5] and [6] in a point-to-point setting. However, in a network setting, the problem is not well understood. In related work, [7], [8] address the question of how many original packets are revealed before the whole block is decoded. However, playback delay depends not just on the number of recovered packets, but also on the order in which they are recovered.

If we have full feedback, then reliable communication over a packet erasure channel can be achieved using Automatic Repeat reQuest (ARQ). This simple scheme achieves 100% throughput, in-order delivery and the lowest possible packet delay, and it is composable across links. For tandem networks, this is the optimal solution. However, link-by-link ARQ cannot achieve the multicast capacity of a general network; the well-known butterfly network is an example. Besides, ARQ does not work well with broadcast-mode links, because retransmitting a packet that some receivers did not get is wasteful for the others that already have it. In contrast, network coding readily extends to broadcast-mode links and also achieves the multicast capacity of any network.

Our new scheme combines the benefits of network coding and ARQ by acknowledging degrees of freedom instead of original packets. (Here, a degree of freedom refers to a new dimension in the appropriate vector space.) This new framework allows the code to incorporate receivers' states of knowledge and thereby enables the sender to control the evolution of the front of contiguous knowledge. The scheme may thus be viewed as a first step in feedback-based control of the tradeoff between throughput and decoding delay, along the lines suggested in [9]. This new kind of feedback is also useful in queue management. Consider a packet erasure broadcast channel.
The network coding solution of [4] requires the sender to generate a linear combination using potentially all packets of a generation that have arrived so far. If feedback is used only to signal completion of a generation, then the sender will have to store the entire generation until it is decoded. If instead receivers ACK each packet upon decoding, then the sender can drop packets that all receivers have decoded. However, even storing all undecoded packets may be suboptimal. Consider a situation where the sender has n packets and all receivers have received (n − 1) linear combinations: (p_1 + p_2), (p_2 + p_3), ..., (p_{n−1} + p_n). No packet can be decoded by any receiver, so the sender cannot drop any packet. However, the backlog in degrees of freedom is just 1; it would be enough if the sender stores any one of the p_i's. The degrees-of-freedom backlog is also called the "virtual queue" ([10], [11]). We ideally want the physical queue to track the virtual queue. Now, even if the receivers get degrees of freedom at the specified rate, it is not clear when they would decode the original packets. Hence, the drop-when-decoded scheme will not achieve this goal. In this work, we show that we can achieve this goal if we allow ACKs on degrees of freedom.

TABLE I
AN EXAMPLE OF THE DROP-WHEN-SEEN ALGORITHM (→ = received, ↛ = erased)

Time | Sender's queue | Transmitted packet | Channel state | A: Decoded     | A: Seen, not decoded | B: Decoded     | B: Seen, not decoded
  1  | p1             | p1                 | → A, ↛ B      | p1             | -                    | -              | -
  2  | p1, p2         | p1 ⊕ p2            | → A, → B      | p1, p2         | -                    | -              | p1
  3  | p2, p3         | p2 ⊕ p3            | ↛ A, → B      | p1, p2         | -                    | -              | p1, p2
  4  | p3             | p3                 | ↛ A, → B      | p1, p2         | -                    | p1, p2, p3     | -
  5  | p3, p4         | p3 ⊕ p4            | → A, ↛ B      | p1, p2         | p3                   | p1, p2, p3     | -
  6  | p4             | p4                 | → A, → B      | p1, p2, p3, p4 | -                    | p1, p2, p3, p4 | -
A. Our contribution and its implications

We propose a new online coding and queue update algorithm that uses ACKs on degrees of freedom to guarantee that the physical queue size at the sender will track the backlog in degrees of freedom. We introduce the notion of a node "seeing" a message packet, which is defined as follows. (Note: in this work, we treat packets as vectors over a finite field.)

Definition 1 (Seeing a packet): A node is said to have seen a packet p if it has enough information to compute a linear combination of the form (p + q), where q is itself a linear combination involving only packets that arrived after p at the sender. (Decoding implies seeing, as we can pick q = 0.)

Our new scheme is called the drop-when-seen algorithm because a packet is dropped if all receivers have seen it. The intuition is that if all receivers have seen p, it is enough for the sender's transmissions to involve only packets beyond p. After decoding these packets, the receivers can compute q and hence obtain p as well. Therefore, even if the receivers have not decoded p, no information is lost by dropping it. Whereas ARQ ACKs a packet upon decoding, our scheme ACKs a packet when it is seen. This proves useful when there is a broadcast constraint. We present a deterministic coding scheme that guarantees that the coded packet, if received successfully, will simultaneously cause each receiver to see its next unseen packet. We will prove later that seeing a new packet translates to receiving a new degree of freedom. This means the innovation guarantee property is satisfied, and 100% throughput can be achieved. The example below explains this algorithm for a simple two-receiver case. Section IV-A extends this scheme to more receivers.

1) Example: Table I shows a sample of how the proposed idea works in a packet erasure broadcast channel with two receivers A and B.
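A minimal sketch of this two-receiver rule in code (our own construction, not from the paper): because a successful reception always causes a receiver to see its oldest unseen packet, it suffices to track a single next-unseen index per receiver and drop every packet below the smaller of the two fronts.

```python
# Sketch (assumed structure): replay the drop-when-seen dynamics of Table I.
# A successful reception makes a receiver see its oldest unseen packet, so one
# next-unseen index per receiver captures the whole receiver state.

def simulate(arrivals, channel):
    """arrivals[t]: packets arriving in slot t+1; channel[t]: (A ok?, B ok?)."""
    next_unseen = {"A": 1, "B": 1}   # index of each receiver's oldest unseen packet
    total_arrived = 0
    drops = {}                       # packet index -> slot in which it is dropped
    for t, (n_arr, oks) in enumerate(zip(arrivals, channel), start=1):
        total_arrived += n_arr
        # The sender transmits the oldest unseen packet(s), XORed if they differ;
        # a receiver that hears the slot's transmission advances its front by one.
        for rcv, ok in zip(("A", "B"), oks):
            if ok and next_unseen[rcv] <= total_arrived:
                next_unseen[rcv] += 1
        # Drop every packet that both receivers have now seen.
        for k in range(1, min(next_unseen.values())):
            drops.setdefault(k, t)
    return drops

# Slots 1-6 of Table I: p1, p2, p3 arrive in slots 1-3 and p4 in slot 5.
arrivals = [1, 1, 1, 0, 1, 0]
channel = [(True, False), (True, True), (False, True),
           (False, True), (True, False), (True, True)]
print(simulate(arrivals, channel))   # {1: 2, 2: 3, 3: 5, 4: 6}
```

The drop times match the queue evolution in Table I: in particular, p1 is dropped in slot 2 even though B has not yet decoded it.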
The sender's queue is shown after the arrival point and before the transmission point of a slot. In each slot, the sender picks the oldest unseen packet for A and B. If they are the same packet, then that packet is sent. If not, their XOR is sent. This rule will cause both receivers to see their oldest unseen packet. In slot 1, p_1 reaches A but not B. In slot 2, (p_1 ⊕ p_2) reaches A and B. Since A knows p_1, it can also decode p_2. As for B, it has now seen (but not decoded) p_1. At this point, since A and B have both seen p_1, the sender drops it. This is fine because B will eventually decode p_2 (this happens in slot 4), at which time it can obtain p_1. Similarly, as shown in the table, p_2, p_3 and p_4 will be dropped in slots 3, 5 and 6 respectively. However, the drop-when-decoded policy will drop p_1 and p_2 in slot 4, and p_3 and p_4 in slot 6. Thus, our new strategy clearly keeps the queue shorter. This is formally proved in Theorem 1 and Corollary 3.

2) Implications of our new scheme:
• The information deficit at a receiver is restricted to a window of packets that advances in a streaming manner and has a stable size (namely, the set of unseen packets). In this sense, the proposed encoding scheme is truly online. All receivers see packets in order.
• The physical queue size is upper-bounded by the sum of the degrees-of-freedom backlogs between the sender and the receivers. Our scheme thus forms a natural bridge between the virtual and physical queue sizes. It can be used to extend results on the stability of virtual queues, such as [10], [11], [12], to physical queues.
• At most n packets are involved in the coded packet. This reduces the decoding complexity and the overhead for storing the coding coefficients.
• We assume a single packet erasure broadcast channel.
But we believe our algorithm is composable and can be extended to a tandem network of broadcast links; with suitable modifications, it can be applied to a more general setup like the one in [13].
• As for decoding delay, if a receiver receives a packet while it is a leader in terms of the number of received degrees of freedom, then it will be able to decode all packets up to that point. Some seen packets might be decoded even before a receiver becomes a leader. The evolution of decoded packets needs further study.

The scheme we proposed in [14] also showed that the physical queue tracks the virtual queues. However, unlike [14], our current work provides an explicit coding scheme that enables us to prove new results. In other work, [15] also combines feedback and coding to address decoding delay. However, their notion of delay ignores the order in which packets are decoded. Moreover, they do not consider a stream of stochastic arrivals.

II. THE SETUP

A sender wants to broadcast a stream of packets to n receivers over a packet erasure broadcast channel. Time is slotted. We focus on linear codes – every transmission is a linear combination of packets from the incoming stream. A node can compute any linear combination whose coefficient vector is in the span of the coefficient vectors of previously received coded packets. This leads to the following definition.

Definition 2 (Knowledge of a node): The knowledge of a node is the set of all linear combinations of original packets that it can compute, based on the information it has received so far. The coefficient vectors of these linear combinations form a vector space called the knowledge space of the node.

[Figure 1: timing diagram of a slot t, marking the point of arrival, the point of transmission, the point of feedback, the point where state variables are measured, and the point of departure for the physical queue.]
Fig. 1.
Relative timing of arrival, service and departure points within a slot.

The sender has one physical queue with no preset size constraints. We use the notion of a virtual queue to represent the backlog in degrees of freedom between the sender and a receiver. There is one virtual queue for each receiver.

Definition 3 (Virtual queue): The size of the j-th virtual queue is defined to be the difference between the dimension of the knowledge space of the sender and that of the j-th receiver.

Arrivals: Packets arrive into the sender's physical queue according to a Bernoulli process of rate λ. An arrival at the physical queue translates to an arrival at each virtual queue.

Service: The channel accepts one packet per slot. Each receiver receives the packet with no errors with probability µ, or an erasure occurs with probability (1 − µ). Erasures occur independently across receivers and across slots. Receivers can detect erasures. We assume the innovation guarantee property holds. Then, we can map successful reception to service of the virtual queue. Thus, in each slot, every virtual queue is served independently with probability µ. Service of the physical queue will depend on the queue update scheme used.

Feedback: In Algorithm 1, feedback is sent when a window of packets is decoded, in order to indicate successful decoding. For Algorithm 2, feedback is needed in every slot to indicate an erasure. We assume perfect delay-free feedback.

Timing: Figure 1 shows the relative timing of various events within a slot. For simplicity, we assume that the transmission, unless erased by the channel, reaches the receivers before they send feedback for that slot, and feedback from all receivers reaches the sender before the end of the same slot. Thus, the feedback incorporates the current slot's reception as well.

Let ρ := λ/µ.
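To make the model concrete, here is a minimal simulation of a single virtual queue under this setup (our own sketch; λ = 0.4 and µ = 0.5 are assumed values, and arrivals are counted before service within a slot, consistent with Figure 1):

```python
import random

random.seed(7)
lam, mu, T = 0.4, 0.5, 200_000   # assumed arrival/service probabilities, slots
Q, area = 0, 0                   # current backlog in degrees of freedom, running sum
for _ in range(T):
    if random.random() < lam:            # Bernoulli(lam) packet arrival
        Q += 1
    if Q > 0 and random.random() < mu:   # successful reception serves the queue
        Q -= 1
    area += Q                            # state measured at the end of the slot
print(area / T)                  # long-run average backlog; here rho = 0.8
```

Section III computes this average in closed form; for these parameter values the steady-state mean is 2, and the simulated average settles near that value.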
In what follows, we compare the expected queue size for the baseline drop-when-decoded scheme and the new drop-when-seen scheme, asymptotically as ρ → 1−.

III. ALGORITHM 1 – DROP WHEN DECODED

The coding scheme assumed is an online version of [4], with no preset generation size. The sender transmits a random linear combination of all packets currently in the queue. For any receiver, packets at the sender are unknowns, and each received packet is an equation in these unknowns. Decoding can happen whenever the difference between the number of equations and unknowns involved becomes zero. This difference is essentially the backlog in degrees of freedom, i.e., the virtual queue size. Thus, successful decoding at a receiver happens when the corresponding virtual queue becomes empty (see footnote 1). Whenever a receiver is able to decode in this manner, it informs the sender. Based on this, the sender tracks which receivers have decoded each packet, and drops a packet if it has been decoded by all receivers. We assume the field size is large enough to ignore the probability that the coded packet is not innovative.

[Figure 2: Markov chain on the states 0, 1, 2, ... for the size of a virtual queue, with λ̄ := 1 − λ and µ̄ := 1 − µ.]
Fig. 2. Markov chain for the size of a virtual queue.

We will now study the behavior of the virtual queues in steady state. First, we introduce some notation:
Q(t) := size of the physical queue at the end of slot t
Q_j(t) := size of the j-th virtual queue at the end of slot t
Figure 2 shows the Markov chain for Q_j(t). If ρ < 1, the chain is positive recurrent, and its steady state distribution is given by [16]:

π_k = (1 − α)α^k, k ≥ 0, where α = λ(1 − µ) / (µ(1 − λ)).
Thus, the steady state expected size of any virtual queue is:

lim_{t→∞} E[Q_j(t)] = Σ_{k=0}^{∞} k π_k = (1 − µ)ρ / (1 − ρ).   (1)

Next, we analyze the physical queue size under this scheme. Let T be the time an arbitrary arrival in steady state spends in the physical queue before departure, excluding the slot in which the arrival occurs. The packet will not depart until each virtual queue has become empty at least once since its arrival. Let D_j be the time until the next emptying of the j-th virtual queue after the new arrival. Then T = max_j D_j, and so E[T] ≥ E[D_j]. Hence, we focus on E[D_j].

We condition on the new arrival seeing state Q_j = k before joining the queue. Then the state at the end of the slot in which the packet arrives is k if the channel is ON in that slot, and (k + 1) otherwise. Now, D_j is simply the first passage time from the state at the end of that slot to state 0. It can be shown that the expected first passage time from state u to state 0, for u > 0, is given by Γ_{u,0} = u / (µ − λ). Due to the property that Bernoulli arrivals see time averages (BASTA) [17], the arrival sees the same distribution for Q_j as the steady state distribution given above. We can then find E[D_j] as follows:

E[D_j] = Σ_{k=0}^{∞} P(new arrival sees state k) · E[D_j | state k]
       = Σ_{k=0}^{∞} π_k [µ Γ_{k,0} + (1 − µ) Γ_{k+1,0}]
       = (1 − µ) / (µ(1 − ρ)²).

If the chain is positive recurrent (ρ < 1), we can use Little's law to find the steady state expected physical queue size: E[Q] = λE[T] ≥ λE[D_j]. This leads to the following result:

Theorem 1: The expected size of the physical queue in steady state for Algorithm 1 is Ω(1/(1 − ρ)²).

Footnote 1: It may be possible to find some unknowns even before the virtual queue becomes empty. However, this is a higher order effect and we ignore it.
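The closed forms above can be sanity-checked numerically (a sketch; the parameter values are our own, and the infinite sums are truncated):

```python
lam, mu = 0.4, 0.5                         # assumed parameters with rho < 1
rho = lam / mu
alpha = lam * (1 - mu) / (mu * (1 - lam))
pi = lambda k: (1 - alpha) * alpha ** k    # steady-state distribution of Q_j
K = 4000                                   # truncation of the infinite sums

EQj = sum(k * pi(k) for k in range(K))     # virtual-queue mean, Eq. (1)
# E[D_j]: each pi_k weighted by mu*Gamma_{k,0} + (1-mu)*Gamma_{k+1,0},
# with the first-passage time Gamma_{u,0} = u / (mu - lam)
EDj = sum(pi(k) * (mu * k + (1 - mu) * (k + 1)) / (mu - lam)
          for k in range(K))

assert abs(EQj - (1 - mu) * rho / (1 - rho)) < 1e-9
assert abs(EDj - (1 - mu) / (mu * (1 - rho) ** 2)) < 1e-9
print(EQj, EDj)   # approximately 2.0 and 25.0 for these parameters
```

Both sums reproduce the geometric-distribution calculation above term by term, so the assertions confirm the closed forms rather than re-deriving them.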
IV. ALGORITHM 2 – DROP WHEN SEEN

Algorithm 2 was briefly introduced in Section I-A. The algorithm uses the notion of the reduced row echelon form (RREF) of a matrix in representing the knowledge of a receiver. The definition and properties of the RREF can be found in [18]. Let V be the knowledge space of some receiver. Suppose m packets have arrived at the sender so far. Then V must be a subspace of F_q^m, and can be represented using a dim(V) × m matrix over F_q whose rows form a basis of V. Multiple representations are possible, depending on the basis chosen. However, if we insist that the matrix must be in RREF, we get a unique representation. This unique RREF basis can be obtained by performing Gaussian elimination on any other basis matrix.

In the RREF basis, the first nonzero entry of any row is called a pivot. Any column with a pivot is called a pivot column. The number of pivot columns equals the number of nonzero rows, which is dim(V). The k-th packet to have arrived at the sender is said to have index k and is denoted p_k. The columns are ordered so that column k maps to packet p_k. The next theorem connects the seen packets and the RREF basis.

Theorem 2: A node has seen a packet with index k if and only if the k-th column of the RREF basis B of the knowledge space V of the node is a pivot column.

Proof: The 'if' part is clear. For the 'only if' part, suppose column k of B does not contain a pivot. In any linear combination of the rows, rows with pivot after column k cannot contribute anything to column k. Rows with pivot before column k will result in a nonzero term in some column to the left of k. So the first nonzero term of any vector in V cannot be in column k, i.e., p_k could not have been seen.
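Theorem 2 can be illustrated concretely (a sketch over GF(2); the paper works over a general F_q): compute the RREF basis by Gaussian elimination and read off the seen packets as the pivot columns.

```python
def rref_gf2(rows):
    """Reduce 0/1 coefficient vectors over GF(2); return (RREF basis, pivot columns)."""
    pivot = lambda r: next(i for i, x in enumerate(r) if x)
    basis = []
    for r in rows:
        r = list(r)
        for b in basis:                      # forward-eliminate against existing rows
            if r[pivot(b)]:
                r = [x ^ y for x, y in zip(r, b)]
        if any(r):
            basis.append(r)
    basis.sort(key=pivot)
    for i, b in enumerate(basis):            # back-substitute to reach RREF
        for k in range(i):
            if basis[k][pivot(b)]:
                basis[k] = [x ^ y for x, y in zip(basis[k], b)]
    return basis, [pivot(b) for b in basis]

# Receiver B after slot 3 of Table I: it holds p1+p2 and p2+p3 (packets p1..p4).
basis, pivots = rref_gf2([[1, 1, 0, 0], [0, 1, 1, 0]])
print(pivots)   # [0, 1] -> the columns of p1 and p2 are pivots: B has seen p1, p2
print(basis)    # [[1, 0, 1, 0], [0, 1, 1, 0]] -> no unit row, so nothing decoded
```

The basis row [1, 0, 1, 0] says B can compute p1 + p3, i.e., B has seen p1 in the sense of Definition 1 (with q = p3), matching the "seen but not decoded" column of Table I.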
Corollary 1: If receiver j has seen packet p_k, then it knows exactly one linear combination of the form p_k + q such that q involves only unseen packets with index more than k.

Corollary 2: The number of packets seen by a receiver is equal to the dimension of its knowledge space.

Definition 4 (Witness): We denote the unique linear combination guaranteed by Corollary 1 as W_j(p_k), the witness for receiver j seeing p_k.

The central idea of the new algorithm is to keep track of seen packets instead of decoded packets. After each slot, every receiver informs the sender whether an erasure occurred, using perfect feedback. The aim is to use this feedback to ensure that the sender stores just enough data to be able to satisfy the innovation guarantee property. The two main parts of the algorithm are the coding and queue update modules.

The coding module computes a linear combination g which will cause any receiver that receives it to see its oldest unseen packet. First, the sender computes each receiver's knowledge space using feedback and finds its oldest unseen packet. Only these packets will be involved in g, and hence we call them the transmit set. A receiver can cancel packets involved in g that it has already seen, by subtracting suitable multiples of the corresponding witnesses. Therefore, the coefficients of g should be picked such that, for each receiver, after canceling the seen packets, the remaining coefficient of the oldest unseen packet is non-zero. Theorem 3 proves that this is possible if the field size is at least n. With two receivers, the coding module is a simple XOR-based scheme (see Table I).

The coding module readily implies the following queue update rule: drop a packet if all receivers have seen it, since the coding module will never use it again.
Also, while computing the knowledge spaces of the receivers, the sender only needs to track the projection of these spaces on the dimensions currently in the queue. Thus, the algorithm can be implemented in an incremental manner, and its complexity tracks the queue size.

A. The formal description of the algorithm

The drop-when-seen algorithm: The algorithm works with the RREF bases of the receivers' knowledge spaces. The representation is in the form of coefficient vectors in terms of the current queue contents, and not the original packet stream.
1) Initialize matrices B_1, ..., B_n to the empty matrix.
2) Incorporate new arrivals: Suppose there are a new arrivals. Add the new packets to the end of the queue. Append a zeros to every row in each B_j.
3) Transmission: If the queue is empty, do nothing; else compute g using the coding module and transmit it.
4) Incorporate channel state feedback: For every receiver j = 1 to n, do: if receiver j received the transmission, include the coefficient vector of g, in terms of the current queue contents, as a new row in B_j. Perform Gaussian elimination.
5) Separate out packets that all receivers have seen: Update the following sets and bases:
   S_j := set of indices of pivot columns of B_j
   S_∆ := ∩_{j=1}^{n} S_j (the set of packets seen by all receivers)
   New B_j := sub-matrix of the current B_j obtained by excluding the columns in S_∆ and the corresponding pivot rows.
6) Update the queue: Drop the packets with indices in S_∆.
7) Go back to step 2 for the next slot.

The coding module: Let {u_1, u_2, ..., u_m} be the set of indices of the oldest unseen packets of the receivers, sorted in ascending order (m ≤ n, since the oldest unseen packet may be the same for some receivers). Exclude receivers whose oldest unseen packets have not yet arrived at the sender. Let R(u_i) be the set of receivers whose oldest unseen packet is p_{u_i}.
We now present the coding module that selects the linear combination for transmission.
1) Loop over oldest unseen packets: For j = 1 to m, do: All receivers in R(u_j) have seen the packets p_{u_i} for i < j. Now, for each r ∈ R(u_j), find y_r := Σ_{i=1}^{j−1} α_i W_r(p_{u_i}), where W_r(p_{u_i}) is the witness for receiver r seeing p_{u_i}. Pick α_j ∈ F_q such that α_j is different from the coefficient of p_{u_j} in y_r, for every r ∈ R(u_j).
2) Compute the transmit packet: g := Σ_{i=1}^{m} α_i p_{u_i}.

Theorem 3: If the field size is at least n, then the coding module picks a linear combination that will cause any receiver to see its oldest unseen packet upon successful reception.

Proof: First we show that a suitable choice always exists for α_j. For r ∈ R(u_1), y_r = 0, so pick α_1 = 1. For j > 1, |R(u_j)| ≤ (n − 1). Even if each y_r for r ∈ R(u_j) has a different coefficient for p_{u_j}, that rules out only (n − 1) field elements. If q ≥ n, then there is a choice left in F_q for α_j. For all j and all r ∈ R(u_j), receiver r knows y_r. Now, g and y_r have the same coefficient for all packets with index less than u_j, and a different coefficient for p_{u_j}. Hence, g − y_r involves p_{u_j} and only packets with index beyond u_j. So r can see p_{u_j}.

Theorem 2 implies that seeing an unseen packet corresponds to receiving an unknown degree of freedom. Hence, Theorem 3 essentially says that the innovation guarantee property is satisfied, and hence the scheme is throughput optimal.

B. Connecting the physical and virtual queue sizes

We will need the following notation:
S(t) := set of packets that have arrived at the sender up to the end of slot t
V(t) := sender's knowledge space after incorporating the arrivals in slot t.
This is simply equal to F_q^{|S(t)|}.
V_j(t) := receiver j's knowledge space at the end of slot t
S_j(t) := set of packets receiver j has seen up to the end of slot t

Lemma 1: For subsets S_1, S_2, ..., S_k (k ≥ 1) of a set S:

|S| − |∩_{i=1}^{k} S_i| ≤ Σ_{i=1}^{k} (|S| − |S_i|).   (2)

We omit the proof. We apply this lemma to the sets S(t) and S_j(t), j = 1 to n. Since the queue holds the packets not seen by all receivers, Q(t) = |S(t)| − |∩_{j=1}^{n} S_j(t)|. Also, from Corollary 2, |S_j(t)| = dim V_j(t). Hence the RHS of (2) becomes Σ_{j=1}^{n} (dim V(t) − dim V_j(t)), which is the sum of the virtual queue sizes. This implies the next theorem.

Theorem 4: For Algorithm 2, the physical queue size at the sender is upper-bounded by the sum of the virtual queue sizes, i.e., the sum of the degrees-of-freedom backlogs between the sender and the receivers.

Theorem 4 and the result in (1) lead to this corollary.

Corollary 3: The expected size of the physical queue in steady state for Algorithm 2 is O(1/(1 − ρ)).

V. CONCLUSIONS AND EXTENSIONS

Comparing the results in Theorem 1 and Corollary 3, we see that the queue size for the new Algorithm 2 is significantly lower than for Algorithm 1. This will prove useful in reducing congestion. The new algorithm allows the physical queue size to track the virtual queue size. This extends stability and other queuing-theoretic results on virtual queues to physical queues. We believe the proposed scheme will be robust to delayed or imperfect feedback, just like conventional ARQ. The scheme readily extends to a tandem network of broadcast links (with no mergers) if the intermediate nodes use the evidence packets in place of the original packets. We expect that it will also extend to other topologies, with suitable modifications.
We have proposed a natural extension of ARQ for coded networks. This is the first step towards the goal of using feedback on degrees of freedom to control the tradeoff between throughput and decoding delay, by dynamically adjusting the extent to which packets are mixed in the network.

REFERENCES

[1] M. Luby, "LT codes," in Proc. of 43rd Annual IEEE Symposium on Foundations of Computer Science, November 2002, pp. 271–282.
[2] A. Shokrollahi, "Raptor codes," in Proc. of 2004 IEEE International Symposium on Information Theory (ISIT 2004), July 2004.
[3] P. Pakzad, C. Fragouli, and A. Shokrollahi, "Coding schemes for line networks," in Proc. of 2005 IEEE International Symposium on Information Theory (ISIT 2005), 2005.
[4] D. S. Lun, M. Médard, and M. Effros, "On coding for reliable communication over packet networks," in 42nd Annual Allerton Conference on Communication, Control, and Computing, September–October 2004.
[5] E. Martinian, "Dynamic information and constraints in source and channel coding," PhD Thesis, Massachusetts Institute of Technology, Dept. of EECS, Sep. 2004.
[6] A. Sahai, "Why delay and block length are not the same thing for channel coding with feedback," in Proc. of UCSD Workshop on Information Theory and its Applications, invited paper, Feb. 2006.
[7] S. Sanghavi, "Intermediate performance of rateless codes," in Proc. of 2007 IEEE Information Theory Workshop, September 2007.
[8] A. Beimel, S. Dolev, and N. Singer, "RT oblivious erasure correcting," in Proc. of 2004 IEEE Information Theory Workshop, October 2004.
[9] C. Fragouli, D. S. Lun, M. Médard, and P. Pakzad, "On feedback for network coding," in Proc. of 2007 Conference on Information Sciences and Systems (CISS 2007), March 2007.
[10] T. Ho and H. Viswanathan, "Dynamic algorithms for multicast with intra-session network coding," in 43rd Allerton Annual Conference on Communication, Control and Computing, 2005.
[11] A. Eryilmaz and D. S. Lun, "Control for inter-session network coding," in Proc. of NetCod, 2007.
[12] J. Sundararajan, M. Médard, M. Kim, A. Eryilmaz, D. Shah, and R. Koetter, "Network coding in a multicast switch," in Proc. of IEEE INFOCOM, 2007.
[13] D. S. Lun, "Efficient operation of coded packet networks," PhD Thesis, Massachusetts Institute of Technology, Dept. of EECS, June 2006.
[14] J. Sundararajan, D. Shah, and M. Médard, "On queueing in coded networks – queue size follows degrees of freedom," in Proc. of IEEE Workshop on Information Theory for Wireless Networks, July 2007.
[15] L. Keller, E. Drinea, and C. Fragouli, "Online broadcasting with network coding," in Proc. of NetCod, 2008.
[16] J. J. Hunter, Mathematical Techniques of Applied Probability, Vol. 2, Discrete Time Models: Techniques and Applications. NY: Academic Press, 1983.
[17] H. Takagi, Queueing Analysis, Vol. 3: Discrete-Time Systems. Amsterdam: Elsevier Science B.V., 1993.
[18] M. Artin, Algebra. Englewood Cliffs, NJ: Prentice-Hall, 1991.
