A Sublinear Algorithm for Sparse Reconstruction with l2/l2 Recovery Guarantees


Authors: Robert Calderbank, Stephen Howard, Sina Jafarpour

Robert Calderbank (Mathematics & Electrical Engineering, Princeton University, NJ 08544, USA), Stephen Howard (DSTO, PO Box 1500, Edinburgh 5111, Australia), Sina Jafarpour (Computer Science, Princeton University, NJ 08544, USA)

Abstract—Compressed Sensing aims to capture attributes of a sparse signal using very few measurements. Candès and Tao showed that sparse reconstruction is possible if the sensing matrix acts as a near isometry on all $k$-sparse signals. This property holds with overwhelming probability if the entries of the matrix are generated by an iid Gaussian or Bernoulli process. There has been significant recent interest in an alternative signal processing framework: exploiting deterministic sensing matrices that with overwhelming probability act as a near isometry on $k$-sparse vectors with uniformly random support, a geometric condition called the Statistical Restricted Isometry Property, or StRIP. This paper considers a family of deterministic sensing matrices satisfying the StRIP that are based on Delsarte-Goethals codes (binary chirps), together with a $k$-sparse reconstruction algorithm with sublinear complexity. In the presence of stochastic noise in the data domain, this paper derives bounds on the $\ell_2$ accuracy of approximation in terms of the $\ell_2$ norm of the measurement noise and the accuracy of the best $k$-sparse approximation, also measured in the $\ell_2$ norm. This type of $\ell_2/\ell_2$ bound is tighter than the standard $\ell_2/\ell_1$ or $\ell_1/\ell_1$ bounds.

I. INTRODUCTION

The central goal of compressed sensing is to capture attributes of a signal using very few measurements. In most work to date, this broader objective is exemplified by the important special case in which a $k$-sparse vector $\alpha \in \mathbb{R}^C$ with $C$ large is to be reconstructed from a small number $N$ of linear measurements with $k < N \ll C$.
In this problem, the measurement data is a vector $f = \Phi\alpha$, where $\Phi$ is an $N \times C$ matrix called the sensing matrix. The work of Donoho [1] and of Candès, Romberg and Tao [2] provides fundamental insight into the geometry of sensing matrices. The Restricted Isometry Property (RIP) formulated by Candès and Tao [3] is that the sensing matrix acts as a near isometry on all $k$-sparse vectors, and this condition is sufficient for sparse reconstruction.

There are two broad families of reconstruction algorithms: those based on convex optimization and those based on greedy iteration. The basis pursuit algorithms try to find the sparse approximation by relaxing the non-convex $\ell_0$ loss to a convex optimization task such as $\ell_1$ minimization or LASSO [2]. The Matching Pursuit algorithms [4]-[6], on the other hand, try to solve the recovery problem iteratively: at each iteration, one coordinate (or a list of coordinates) is selected greedily to provide the best approximation to the vector in the measurement domain, and the vector in the measurement domain is then updated accordingly at the end of each iteration. Adjacency matrices of expander graphs have been shown to provide similar performance [7]-[9]. One disadvantage of these Basis Pursuit and Matching Pursuit algorithms is that their computational complexity is superlinear in the dimension of the data domain, which is typically very large if $k \ll C$. In this paper, focusing on average-case performance, we propose and analyze a Chirp Reconstruction Algorithm that reconstructs a $k$-sparse vector iteratively by forming the power spectrum of the measured superposition. By contrast, the complexity of Chirp Reconstruction depends only on the sparsity level $k$ and the number of measurements $N$.

(The work of R. Calderbank and S. Jafarpour is supported in part by NSF under grant DMS 0701226, by ONR under grant N00173-06-1-G006, and by AFOSR under grant FA9550-05-1-0443.)
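The greedy iteration just described (select the best-matching column, refit, update the residual in the measurement domain) can be sketched as a minimal orthogonal matching pursuit. This is a generic illustration with our own naming, not the chirp algorithm developed later in the paper:

```python
import numpy as np

def omp(Phi, f, k):
    """Minimal orthogonal matching pursuit: at each iteration, greedily pick the
    column most correlated with the residual, then least-squares refit on the
    support chosen so far and update the measurement-domain residual."""
    support, residual = [], f.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))    # best-matching column
        support.append(j)
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, f, rcond=None)  # refit on current support
        residual = f - sub @ coef
    alpha_hat = np.zeros(Phi.shape[1])
    alpha_hat[support] = coef
    return alpha_hat

# noiseless sanity check with a random Gaussian sensing matrix
rng = np.random.default_rng(1)
N, C, k = 64, 128, 3
Phi = rng.standard_normal((N, C)) / np.sqrt(N)
alpha = np.zeros(C)
alpha[[5, 17, 60]] = [2.0, -1.5, 1.0]
rec = omp(Phi, Phi @ alpha, k)
```

After each refit the residual is orthogonal to every selected column, so no column is picked twice; the per-iteration cost is dominated by the $O(NC)$ correlation step, which is exactly the superlinear dependence on $C$ noted above.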
A second disadvantage is that even though reconstructing a $k$-sparse signal in the presence of noise in the data domain is a fundamentally important problem, the bounds on the accuracy of approximation of BP and MP algorithms are not very tight. Let $\alpha_k$ be $\alpha$ restricted to its $k$ most significant entries, $\mu$ be the noise vector, and $\hat{\alpha}^*$ be the output of the recovery algorithm. An algorithm is said to provide $\ell_p/\ell_q$ recovery guarantees if

$$\|\alpha - \hat{\alpha}^*\|_p \le C_1(k)\,\|\alpha - \alpha_k\|_q + C_2\,\|\mu\|_p.$$

The sparse reconstruction algorithms that use random dense matrices provide $\ell_2/\ell_1$ guarantees, and the expander-based reconstruction algorithms provide $\ell_1/\ell_1$ guarantees. The reason again lies in worst-case versus stochastic modeling of the noise in the data domain. A result by Cohen et al. [10] shows that no reconstruction algorithm can provide $\ell_2/\ell_2$ reconstruction guarantees unless $N = \Omega(C)$. Nevertheless, we show that if the signal consists of $k$ significant entries covered by $C$ iid Gaussian noise samples, which is the case for many compressed sensing applications, it is possible to derive $\ell_2/\ell_2$ guarantees. Calderbank et al. [11] have considered deterministic sensing matrices that with overwhelming probability act as a near isometry on $k$-sparse vectors, and we refer to this geometric property as the Statistical Restricted Isometry Property:

Definition 1 ($(k,\epsilon,\delta)$-StRIP matrix). An $N \times C$ (sensing) matrix $\Phi$ is said to be $(k,\epsilon,\delta)$-StRIP if, for $k$-sparse vectors $\alpha \in \mathbb{R}^C$, the inequalities

$$N(1-\epsilon)\,\|\alpha\|^2 \le \|\Phi\alpha\|^2 \le N(1+\epsilon)\,\|\alpha\|^2, \qquad (1)$$

hold with probability exceeding $1-\delta$ (with respect to a uniform distribution of the vectors $\alpha$ among all $k$-sparse vectors in $\mathbb{R}^C$ of the same norm).

The framework includes sensing matrices for which the columns are discrete chirps, either in the standard Fourier domain [12] or the Walsh-Hadamard domain [13].
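Definition 1 suggests a direct empirical test: sample $k$-sparse vectors with uniformly random support and count how often the inequalities (1) fail, which estimates $\delta$. A minimal sketch (the function name and the trivial test matrix are our own illustration):

```python
import numpy as np

def strip_failure_rate(Phi, k, eps, trials=2000, seed=0):
    """Empirical estimate of delta in Definition 1: the fraction of random-support
    k-sparse alpha for which
    N(1-eps)||alpha||^2 <= ||Phi alpha||^2 <= N(1+eps)||alpha||^2 fails."""
    rng = np.random.default_rng(seed)
    N, C = Phi.shape
    failures = 0
    for _ in range(trials):
        support = rng.choice(C, size=k, replace=False)  # uniformly random support
        alpha = np.zeros(C)
        alpha[support] = rng.standard_normal(k)
        ratio = np.linalg.norm(Phi @ alpha) ** 2 / (N * np.linalg.norm(alpha) ** 2)
        failures += not (1 - eps <= ratio <= 1 + eps)
    return failures / trials

# sanity check: Phi = sqrt(N) * I is an exact isometry, so the failure rate is 0
rate = strip_failure_rate(np.sqrt(8) * np.eye(8), k=2, eps=0.1)
```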
Chirp Reconstruction is similar to Matching Pursuit in that at each iteration it identifies a significant component of the $k$-sparse signal. The overall computational complexity of Chirp Reconstruction applied to Reed-Muller sensing matrices is $O(kN\log^2 N)$. The StRIP property of the Reed-Muller sensing matrices makes it possible to accurately recover the coefficients of the $k$ significant components, leading to robust recovery guarantees in the presence of noise both in the data and in the measurement domains. These guarantees apply with overwhelming probability to the class of approximately $k$-sparse signals.

II. DELSARTE-GOETHALS CODES

Here $m$ is odd, the rows of the sensing matrix $\Phi$ are indexed by binary $m$-tuples $x$, and the columns are indexed by pairs $(P, b)$, where $P$ is an $m \times m$ binary symmetric matrix and $b$ is a binary $m$-tuple. The entry $\varphi_{P,b}(x)$ is given by

$$\varphi_{P,b}(x) = i^{\mathrm{wt}(d_P) + 2\,\mathrm{wt}(b)}\, i^{xPx^\top + 2bx^\top}, \qquad (2)$$

where $d_P$ denotes the main diagonal of $P$, and $\mathrm{wt}$ denotes the Hamming weight (the number of 1s in a binary vector). The Delsarte-Goethals set $DG(m,r)$ is a binary vector space containing $2^{(r+1)m}$ binary symmetric matrices, with the property that the difference of any two distinct matrices has rank at least $m-2r$ (see [14]). The Delsarte-Goethals sets are nested:

$$DG(m,0) \subset DG(m,1) \subset \cdots \subset DG(m,(m-1)/2).$$

The first set $DG(m,0)$ is the classical Kerdock set, and the last set $DG(m,(m-1)/2)$ is the set of all binary symmetric matrices. The $r$th Delsarte-Goethals sensing matrix is determined by $DG(m,r)$; it has $N = 2^m$ rows and $C = 2^{(r+2)m}$ columns, and its column sums satisfy

$$\Big|\sum_x \varphi_{P,b}(x)\Big|^2 = 0 \ \text{ or } \ N^{2-t/m} \quad \text{for some } t \in \{m-2r, \cdots, m\}. \qquad (3)$$

We will use the following lemmas, which characterize the properties of the Delsarte-Goethals matrices.
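Equations (2) and (3) can be sanity-checked numerically on the smallest odd case $m = 3$: since $(m-1)/2 = 1$, $DG(3,1)$ is the full set of $3 \times 3$ binary symmetric matrices, so the ensemble can be enumerated exhaustively. A sketch with our own naming (`chirp_column` evaluates the entry formula (2), with exponents of $i$ taken mod 4):

```python
import numpy as np
from itertools import product

def chirp_column(P, b):
    """Column phi_{P,b} from Equation (2); exponents of i are taken mod 4."""
    m = len(b)
    pre = 1j ** ((int(np.trace(P)) + 2 * int(np.sum(b))) % 4)  # i^(wt(d_P) + 2 wt(b))
    entries = [1j ** (int(np.dot(x, P @ x) + 2 * np.dot(b, x)) % 4)  # i^(x P x^T + 2 b x^T)
               for x in map(np.array, product((0, 1), repeat=m))]
    return pre * np.array(entries)

# Column-sum property (3) for m = 3, r = 1: for every nonzero symmetric P,
# |sum|^2 must be 0 or N^(2 - t/m) = 2^(2m - t) with t in {m - 2r, ..., m} = {1, 2, 3}.
m, N = 3, 2 ** 3
observed = set()
for diag in product((0, 1), repeat=m):
    for off in product((0, 1), repeat=m * (m - 1) // 2):
        P = np.diag(diag).astype(int)
        P[np.triu_indices(m, 1)] = off
        P = P + np.triu(P, 1).T          # fill the lower triangle symmetrically
        if not P.any():
            continue                      # P = 0 corresponds to the excluded rank-0 case
        for b in product((0, 1), repeat=m):
            s = chirp_column(P, np.array(b)).sum()
            observed.add(round(abs(s) ** 2))
```

The enumeration visits all $2^6$ symmetric matrices and all $2^3$ offsets $b$; the squared column sums it collects are exactly $\{0, 8, 16, 32\} = \{0\} \cup \{2^{2m-t} : t = 1, 2, 3\}$, matching (3).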
For detailed proofs see [11].

Lemma 1. Let $G = G(m,r)$ be the set of column vectors $\varphi_{P,b}$, where
$$\varphi_{P,b}(x) = i^{\mathrm{wt}(d_P)+2\,\mathrm{wt}(b)}\, i^{xPx^\top + 2bx^\top} \quad \text{for } x \in \mathbb{F}_2^m,$$
where $b \in \mathbb{F}_2^m$ and the binary symmetric matrix $P$ varies over the Delsarte-Goethals set $DG(m,r)$. Then $G$ is a group of order $2^{(r+2)m}$ under pointwise multiplication.

The following theorem has been proved by Calderbank et al. [11].

Theorem 2. Suppose the $N \times C$ matrix $\Phi$ is derived from a $DG(m,r)$ family, and let $\eta = 1 - 2r/m$. Then for any $k, \epsilon$ with $k < 1 + (C-1)\epsilon$, $\Phi$ is $(k,\epsilon,\delta)$-StRIP with

$$\delta := 2\exp\left(-\frac{[\epsilon - (k-1)/(C-1)]^2\, N^\eta}{32k}\right).$$

III. THE CHIRP RECONSTRUCTION ALGORITHM

In this section we introduce the Chirp Reconstruction Algorithm, used for efficient sparse reconstruction in the presence of noise. Let $\pi = \{\pi_1,\cdots,\pi_C\}$ be a random permutation of $\{1,\cdots,C\}$, and let $\alpha$ be an almost $k$-sparse vector whose $k$ significant entries are positioned according to $\{\pi_1,\cdots,\pi_k\}$. Let $\alpha_k$ be the best $k$-term approximation of $\alpha$. Calderbank et al. [11] showed that if $\Phi$ is $(k,\epsilon,\delta)$-StRIP, then with probability $1-\delta$,

$$\|\Phi(\alpha - \alpha_k)\|_2 \le \|\alpha - \alpha_k\|_1. \qquad (4)$$

Furthermore, if we assume that $\alpha$ is exactly $k$-sparse encompassed with $C$ iid white noise samples with variance $\sigma_C^2$, then, since the rows of $\Phi$ form a tight frame with redundancy $C/N$, the noise samples on distinct measurements are independent Gaussian with variance $C\sigma_C^2/N$. As a result, using concentration bounds for the $\chi^2$ distribution, it follows that with overwhelming probability

$$\Big\|\frac{1}{\sqrt{N}}\Phi(\alpha - \alpha_k)\Big\|_2 \le \|\alpha - \alpha_k\|_2. \qquad (5)$$

Let $\mu$ be the noise in the measurement domain. Then compressive sensing using the matrix $\frac{1}{\sqrt{N}}\Phi$ maps a vector $\alpha$ to $f = \frac{1}{\sqrt{N}}\Phi\alpha + \mu = y + \nu$, where $y = \frac{1}{\sqrt{N}}\Phi\alpha_k$ and $\nu = \frac{1}{\sqrt{N}}\Phi(\alpha - \alpha_k) + \mu$. The goal is then to approximate $\alpha_k$ from $f$.
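The tight-frame claim above (the rows of $\Phi$ form a tight frame with redundancy $C/N$, i.e., $\Phi\Phi^\dagger = C\,I_N$) can be verified exhaustively for $m = 3$ with $P$ ranging over all $3 \times 3$ binary symmetric matrices, giving the full $8 \times 512$ matrix with redundancy $C/N = 64$. A numerical sketch with our own naming:

```python
import numpy as np
from itertools import product

def chirp_column(P, b):
    """Column phi_{P,b} of Equation (2); exponents of i are taken mod 4."""
    m = len(b)
    pre = 1j ** ((int(np.trace(P)) + 2 * int(np.sum(b))) % 4)
    return pre * np.array([1j ** (int(np.dot(x, P @ x) + 2 * np.dot(b, x)) % 4)
                           for x in map(np.array, product((0, 1), repeat=m))])

# build the full sensing matrix for m = 3: P over all 3x3 binary symmetric
# matrices (this is DG(3,1)), b over all binary 3-tuples, so N = 8, C = 2^(3m) = 512
m, N = 3, 8
cols = []
for diag in product((0, 1), repeat=m):
    for off in product((0, 1), repeat=m * (m - 1) // 2):
        P = np.diag(diag).astype(int)
        P[np.triu_indices(m, 1)] = off
        P = P + np.triu(P, 1).T                 # symmetrize
        for b in product((0, 1), repeat=m):
            cols.append(chirp_column(P, np.array(b)))
Phi = np.column_stack(cols)
C = Phi.shape[1]
G = Phi @ Phi.conj().T                          # should equal C * I_N
```

Distinct rows are orthogonal because the inner sum over $b$ contributes a factor $\sum_b (-1)^{b(x \oplus x')^\top} = 0$ whenever $x \neq x'$, so the only surviving diagonal entries equal $C$.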
The chirp reconstruction algorithm [12], [13] is a repurposing of the chirp detection algorithm commonly used in navigation radars, which is known to work extremely well in the presence of noise; it is described as Algorithm 1. At each iteration $t$, given the residual measurement vector $f_t$, first the autocorrelation function is applied to $f_t$, i.e., $f_t$ is pointwise multiplied with a shifted version of itself. Then applying the fast Hadamard transform forms the power spectrum of $f_t$, which, as we will show, consists of $k$ tones corresponding to the positions of the $k$ significant entries of $\alpha$, plus a noise term spread uniformly across all Hadamard coefficients, which accounts for the noise $\nu$ and for chirp-like cross-terms. In other words, since the sensing matrix is obtained by exponentiating quadratic functions, forming the power spectrum produces a sparse superposition of pure frequencies (in the example below, these are Walsh functions in the binary domain) against a background of chirp-like cross-terms. The algorithm then iteratively learns the terms in the sparse superposition by varying the offset $a$. These terms can be peeled off in decreasing order of signal strength or processed in a list. Experimental results show close approach to the information-theoretic lower bound on the required number of measurements [13].

Algorithm 1 Chirp Reconstruction Algorithm
Input: $N$-dimensional vector $f_1 = \frac{1}{\sqrt{N}}\Phi\alpha_k + \nu$.
Output: An approximation $\hat{\alpha}^*$ to the $k$-sparse signal $\alpha_k$.
1: for $t = 1,\cdots,k$, or while $\|f_t\|_2 \ge \epsilon$, do
2:   for $j = 1,\cdots,m$ do
3:     Let $a_j$ be the $j$th standard basis vector. Using $a_j$, pointwise multiply $f_t$ with its shifted version.
4:     Compute the fast Walsh-Hadamard transform of the computed autocorrelation: Equation (8).
5:     Find the position of the next peak $l_{t,j}$ in the Hadamard domain, and decode the $j$th row of $P_{\pi_t}$.
6:   end for
7:   Pointwise multiply $f_t$ with $i^{xP_{\pi_t}x^\top}$, and find the corresponding value $b_{\pi_t}$ by finding the next peak in the power spectrum.
8:   Determine the corresponding value $\hat{\alpha}_{\pi_t}$ which minimizes $\|\sqrt{N}f_t - \hat{\alpha}_{\pi_t}\varphi_{P_{\pi_t},b_{\pi_t}}\|_2$.
9:   Set $f_{t+1} \doteq f_t - \hat{\alpha}_{\pi_t}\varphi_{P_{\pi_t},b_{\pi_t}}$.
10: end for
11: Let $\Phi_{\pi_1^k}$ be $\Phi$ restricted to the $k$ recovered columns. Output $\hat{\alpha}^* \doteq \arg\min_{\hat{\alpha}} \|\frac{1}{\sqrt{N}}\Phi_{\pi_1^k}\hat{\alpha} - f\|_2$.

The first step is pointwise multiplication of the sparse superposition with a shifted copy of itself, which gives

$$y(x+a)y(x) + \nu(x+a)\nu(x) + y(x+a)\nu(x) + \nu(x+a)y(x). \qquad (6)$$

By the Cauchy-Schwarz inequality and the StRIP property, it is easy to verify that the total energy of the last three terms in (6) is bounded by $3\|\nu\|_2\|\alpha_k\|_2$. The first term itself can be decomposed into pure tones
$$\frac{1}{N}\sum_{j=1}^{k} |\alpha_j|^2 (-1)^{aP_{\pi_j}x^\top},$$
and chirp terms

$$\frac{1}{N}\sum_{i\neq j} \alpha_i\alpha_j\, \varphi_{P_{\pi_i},b_{\pi_i}}(x+a)\, \varphi_{P_{\pi_j},b_{\pi_j}}(x). \qquad (7)$$

The (fast) Hadamard transform then concentrates the energy associated with the pure tones into (at most) $k$ Walsh-Hadamard tones with energies $|\alpha_j|^4$. The algorithm may get into trouble when two of the pure tones fall into the same basis vector; this problem can be resolved to a large extent by varying the offset $a$ [13]. In the next section we show that the fast Hadamard transform distributes the energy of Equation (7) uniformly across all $N$ tones in the fast Hadamard domain. Moreover, by Azuma's inequality it is easy to verify that with high probability the total energy of the chirp terms (Equation (7)) is at most $2\sum_{i\neq j}|\alpha_i||\alpha_j|/N^2$. The effect is a reduction of the signal strength in the $k$ concentrated peaks, which does not hinder detection of the largest peak provided the SNR is sufficiently large.
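The dechirping step at the heart of Algorithm 1 can be checked on a single noiseless chirp: pointwise multiplication by a copy shifted by $a$ turns $\varphi_{P,b}$ into a constant times the Walsh function $(-1)^{aPx^\top}$, so the fast Hadamard transform concentrates all the energy in the single tone indexed by $aP$. A self-contained numerical sketch (the `fwht` routine and the least-significant-bit-first indexing are our own choices):

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform (Sylvester ordering), O(N log N)."""
    v = v.astype(complex)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                u, w = v[j], v[j + h]
                v[j], v[j + h] = u + w, u - w
        h *= 2
    return v

def bits(t, m):
    """Binary m-tuple of the integer t, least-significant bit first."""
    return np.array([(t >> i) & 1 for i in range(m)])

m, N = 3, 8
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])              # a binary symmetric matrix
b = np.array([1, 0, 1])

# single chirp y(x) = i^(x P x^T + 2 b x^T), indexed by the integer encoding of x
y = np.array([1j ** (int(bits(t, m) @ P @ bits(t, m) + 2 * (b @ bits(t, m))) % 4)
              for t in range(N)])

a_int = 0b001                          # offset a = (1,0,0); XOR realizes x + a over F_2
g = y[np.arange(N) ^ a_int] * np.conj(y)   # dechirped autocorrelation
spec = fwht(g)                         # exactly one Walsh tone of magnitude N

peak = int(np.argmax(np.abs(spec)))
aP = (bits(a_int, m) @ P) % 2          # predicted tone: the Walsh index a P
predicted = int(sum(int(c) << i for i, c in enumerate(aP)))
```

Here the peak index directly reveals the row combination $aP$; sweeping $a$ over the standard basis vectors, as in steps 2-5 of Algorithm 1, reads off $P$ one row at a time.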
IV. ANALYSIS OF THE ALGORITHM

The $\ell$th Fourier coefficient of the term (7) is

$$\Gamma_a^\ell = \frac{1}{N^{3/2}} \sum_{j\neq t} \alpha_j\alpha_t \sum_x (-1)^{\ell x^\top}\, \varphi_{P_{\pi_j},b_{\pi_j}}(x+a)\, \varphi_{P_{\pi_t},b_{\pi_t}}(x). \qquad (8)$$

In this section we show that with overwhelming probability, for all Fourier coefficients $\ell$, $\big|\Gamma_a^\ell\big| \le \sqrt{k/N^\eta}\, \|\alpha_k\|_2^2$, where the probability is with respect to the permutation $\pi$. We show this by a probabilistic argument: first we show that $E_\pi\big[\Gamma_a^\ell\big] = 0$, and then, by constructing an appropriate martingale sequence and applying Azuma's inequality, we show that $\big|\Gamma_a^\ell\big|$ is highly concentrated around its expectation.

Let $T$ be the set of all $k$-tuples $(t_1,\cdots,t_k)$ such that $\{t_1,\cdots,t_C\}$ is a permutation of $\{1,\cdots,C\}$. For all distinct $i,j$ in $\{1,\cdots,k\}$ and $(t_1,\cdots,t_k)$ in $T$, define

$$h(t_i,t_j) \doteq \sum_x (-1)^{\ell x^\top}\, \varphi_{P_{t_i},b_{t_i}}(x+a)\, \varphi_{P_{t_j},b_{t_j}}(x), \qquad (9)$$

and

$$\Gamma_a^\ell(t_1,\cdots,t_k) \doteq \frac{1}{N^{3/2}} \sum_{i\neq j} \alpha_i\alpha_j\, h(t_i,t_j). \qquad (10)$$

Then (8) can be written as $\Gamma_a^\ell(\pi_1,\cdots,\pi_k)$. We first show that $E_\pi\big[\Gamma_a^\ell(\pi_1,\cdots,\pi_k)\big] = 0$.

Lemma 3. Let $G$ be the group of columns of $\Phi$ with respect to pointwise multiplication. The map $G \times G \to \{\pm 1, \pm i\}$ given by $(g,h) \to g(x+a)h^{-1}(x)$ is a surjective homomorphism, and
$$\sum_{g\neq h} g(x+a)h^{-1}(x) = -\sum_g g(x+a)g^{-1}(x).$$

Proof: $\sum_{g,h} g(x+a)h^{-1}(x) = 0$.

Lemma 4. $E_\pi\big[\Gamma_a^\ell(\pi)\big]$ is zero.

Proof: For distinct $i, j$, we can rewrite $E_\pi\big[\sum_x (-1)^{\ell x^\top}\varphi_{P_{\pi_i},b_{\pi_i}}(x+a)\,\varphi_{P_{\pi_j},b_{\pi_j}}(x)\big]$ in the form

$$\frac{1}{C(C-1)} \sum_x (-1)^{\ell x^\top} \sum_{g\neq h} g(x+a)h^{-1}(x). \qquad (11)$$

The initial factor is just the frequency with which any admissible pair is chosen, and the second sum is taken over the column group $G$.
Lemma 3 allows us to rewrite (11) as

$$-\frac{1}{C(C-1)} \sum_x (-1)^{\ell x^\top} \sum_g g(x+a)g^{-1}(x) = -\frac{1}{C(C-1)} \sum_P i^{aPa^\top} \sum_x (-1)^{(aP+\ell)x^\top} \sum_b (-1)^{ab^\top}, \qquad (12)$$

where the outer sum is taken over all binary symmetric matrices in the Delsarte-Goethals ensemble. Since $a \neq 0$, the sum $\sum_b (-1)^{ab^\top}$ is always zero.

Theorem 5. Let $\pi$ be a random permutation of $\{1,\cdots,C\}$. Then with probability at least $1-\delta$, for any coefficient $\ell$ we have

$$\Gamma_a^\ell(\pi_1,\cdots,\pi_k) \le \sqrt{\frac{8k\log(N/\delta)}{N^{1-r/m}}}\, \|\alpha\|_2^2. \qquad (13)$$

Proof: Define the martingale sequence $Z_1,\cdots,Z_k$ as

$$Z_i = E_\pi\big[\Gamma_a^\ell(\pi_1,\cdots,\pi_k) \mid \pi_1,\cdots,\pi_i\big], \qquad (14)$$

and denote $\pi_i^j \doteq (\pi_i,\cdots,\pi_j)$. Since the columns of $\Phi$ form a group under pointwise multiplication, using Equation (3) we get

$$\Big|\sup_u E_\pi\big[\Gamma_a^\ell(\pi_1^k) \mid \pi_1^{i-1}, u\big] - \inf_l E_\pi\big[\Gamma_a^\ell(\pi_1^k) \mid \pi_1^{i-1}, l\big]\Big| \le \frac{|\alpha_i|\,\big|\sum_{j\neq i}\alpha_j\big|}{N^{\frac{m-r}{m}}}. \qquad (15)$$

Note that by the Cauchy-Schwarz inequality

$$\sum_i \Big(|\alpha_i|\,\Big|\sum_{j\neq i}\alpha_j\Big|\Big)^2 \le k\Big(\sum_i |\alpha_i|^2\Big)^2.$$

Consequently, by applying Azuma's inequality we get

$$\Pr_\pi\big[\Gamma_a^\ell(\pi_1,\cdots,\pi_k) \ge \epsilon\big] \le \exp\left(-\frac{N^{1-r/m}\,\epsilon^2}{8k\,\|\alpha\|_2^4}\right).$$

Applying the union bound over all $N$ possible choices of $\ell$ completes the proof.

Consequently, the chirp-like terms are distributed uniformly across all $N$ tones in the fast Hadamard domain. Therefore, if $k \ll C$ and the SNR is sufficiently large, it is possible to iteratively recover the positions of the $k$ significant entries of the vector $\alpha$. Having recovered the support $\pi_1^k$ of $\alpha_k$, it is possible to reconstruct a better approximation of $\alpha_k$ by minimizing $\|\frac{1}{\sqrt{N}}\Phi_{\pi_1^k}\hat{\alpha} - f\|_2$, which has the analytical solution

$$\hat{\alpha}^* \doteq \sqrt{N}\,\big(\Phi_{\pi_1^k}^\dagger \Phi_{\pi_1^k}\big)^{-1}\Phi_{\pi_1^k}^\dagger f. \qquad (16)$$

The following bound on the approximation error of $\hat{\alpha}^*$ then follows from the StRIP property.

Theorem 6. Let $\Phi$ be $(k,\epsilon,\delta)$-StRIP.
Let $\alpha$ be an almost $k$-sparse vector such that $\alpha_k$ has a uniformly random support $\{\pi_1,\cdots,\pi_k\}$, and let $\hat{\alpha}^*$ be defined by Equation (16). Then with probability $1-\delta$,

$$\|\hat{\alpha}^* - \alpha_k\|_2 \le \frac{2}{1-\epsilon}\left(\frac{1}{\sqrt{N}}\|\Phi(\alpha-\alpha_k)\|_2 + \|\mu\|_2\right).$$

Proof: Since $\Phi$ is $(k,\epsilon,\delta)$-StRIP, and $\alpha_k$ and $\hat{\alpha}^*$ are two $k$-sparse vectors with the same random support, with probability $1-\delta$,

$$(1-\epsilon)\,\|\hat{\alpha}^* - \alpha_k\|_2 \le \frac{1}{\sqrt{N}}\|\Phi(\hat{\alpha}^* - \alpha_k)\|_2.$$

By the triangle inequality,

$$\frac{1}{\sqrt{N}}\|\Phi(\hat{\alpha}^* - \alpha_k)\|_2 \le \Big\|\frac{1}{\sqrt{N}}\Phi\hat{\alpha}^* - f\Big\|_2 + \|\nu\|_2.$$

On the other hand, by the definition of $\hat{\alpha}^*$ we have

$$\Big\|\frac{1}{\sqrt{N}}\Phi\hat{\alpha}^* - f\Big\|_2 \le \Big\|\frac{1}{\sqrt{N}}\Phi\alpha_k - f\Big\|_2 \le \|\nu\|_2.$$

Putting everything together, and recalling that $\|\nu\|_2 \le \frac{1}{\sqrt{N}}\|\Phi(\alpha-\alpha_k)\|_2 + \|\mu\|_2$, completes the proof.

As a result, it follows from Equation (4) that with probability at least $1-2\delta$,

$$\|\hat{\alpha}^* - \alpha_k\|_2 \le \frac{2}{1-\epsilon}\left(\frac{1}{\sqrt{N}}\|\alpha-\alpha_k\|_1 + \|\mu\|_2\right),$$

and furthermore, considering Equation (5), if the signal in the data domain consists of $k$ significant entries covered by white noise, then with overwhelming probability

$$\|\hat{\alpha}^* - \alpha_k\|_2 \le \frac{2}{1-\epsilon}\big(\|\alpha-\alpha_k\|_2 + \|\mu\|_2\big).$$

REFERENCES

[1] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, Vol. 52 (4), pp. 1289-1306, April 2006.
[2] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, Vol. 52 (2), pp. 489-509, 2006.
[3] E. Candès and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies," IEEE Transactions on Information Theory, Vol. 52 (12), pp. 5406-5425, December 2006.
[4] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, Vol. 53 (12), pp. 4655-4666, December 2007.
[5] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing: Closing the gap between performance and complexity," to appear in IEEE Transactions on Information Theory, 2009.
[6] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, Vol. 26 (3), pp. 301-321, May 2009.
[7] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: A unified approach to sparse signal recovery," 46th Annual Allerton Conference on Communication, Control, and Computing, pp. 798-805, September 2008.
[8] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, "Efficient compressed sensing using optimized expander graphs," to appear in IEEE Transactions on Information Theory, 2009.
[9] P. Indyk and M. Ruzic, "Near-optimal sparse recovery in the ℓ1 norm," 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS '08), pp. 199-207, 2008.
[10] A. Cohen, W. Dahmen, and R. DeVore, "Compressed sensing and best k-term approximation," Journal of the American Mathematical Society, Vol. 22, pp. 211-231, 2009.
[11] R. Calderbank, S. Howard, and S. Jafarpour, "Construction of a large class of matrices satisfying a Statistical Isometry Property," to appear in Journal of Special Topics in Signal Processing, 2009.
[12] L. Applebaum, S. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery," Applied and Computational Harmonic Analysis, Vol. 26 (2), pp. 283-290, March 2009.
[13] S. Howard, R. Calderbank, and S. Searle, "A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes," Conference on Information Sciences and Systems (CISS), Princeton, pp. 11-15, March 2008.
[14] A. R. Hammons, P. V. Kumar, A. R. Calderbank, N. J. A. Sloane, and P. Solé, "The Z4-linearity of Kerdock, Preparata, Goethals, and related codes," IEEE Transactions on Information Theory, Vol. 40 (2), pp. 301-319, March 1994.
