On Random Construction of a Bipolar Sensing Matrix with Compact Representation
A random construction of bipolar sensing matrices based on binary linear codes is introduced and its RIP (Restricted Isometry Property) is analyzed based on an argument on the ensemble average of the weight distribution of binary linear codes.
Authors: Tadashi Wadayama
Tadashi Wadayama
Nagoya Institute of Technology, Nagoya, Aichi, JAPAN
Email: wadayama@nitech.ac.jp

I. INTRODUCTION

Research in compressed sensing [2][3] is expanding rapidly. The sufficient condition for $\ell_1$-recovery based on the Restricted Isometry Property (RIP) [3][4] is one of the celebrated results in this field. The design of sensing matrices with small RIP constants is a theoretically interesting and challenging problem. Currently, random constructions provide the strongest results, and the analysis of random constructions is based on large deviations of the maximum and minimum singular values of random matrices [5][3].

In the present paper, a random construction of bipolar sensing matrices based on binary linear codes is introduced and its RIP is analyzed. The column vectors of the proposed sensing matrix are nonzero codewords of a randomly chosen binary linear code. Using a generator matrix, a $p \times m$ sensing matrix can be represented by $O(p \log_2 m)$ bits. The existence of sensing matrices with the RIP is shown based on an argument on the ensemble average of the weight distribution of binary linear codes.

II. PRELIMINARIES

A. Notation

The symbols $\mathbb{R}$ and $\mathbb{F}_2$ represent the field of real numbers and the finite field with two elements $\{0, 1\}$, respectively. The set of all $p \times m$ real matrices is denoted by $\mathbb{R}^{p \times m}$. In the present paper, the notation $x \in \mathbb{R}^p$ indicates that $x$ is a column vector of length $p$. The notation $\|\cdot\|_p$ denotes the $\ell_p$-norm ($1 \le p < \infty$) defined by

$$\|x\|_p \triangleq \left( \sum_{i=1}^{p} |x_i|^p \right)^{1/p}. \quad (1)$$
The $\ell_0$-norm is defined by

$$\|x\|_0 \triangleq |\mathrm{supp}(x)|, \quad (2)$$

where $\mathrm{supp}(x)$ denotes the index set of nonzero components of $x$. The functions $w_h(\cdot)$ and $d_h(\cdot, \cdot)$ are the Hamming weight and Hamming distance functions, respectively.

B. Restricted isometry property (RIP)

Let $\Phi \triangleq (\phi_1, \ldots, \phi_m) \in \mathbb{R}^{p \times m}$ be a $p \times m$ real matrix, where the $\ell_2$-norm of the $j$-th ($j \in [1, m]$) column vector $\phi_j$ is normalized to one, namely, $\|\phi_j\|_2 = 1$. The notation $[a, b]$ represents the set of consecutive integers from $a$ to $b$. The restricted isometry property of $\Phi$, introduced by Candes and Tao [3], plays a key role in a sufficient condition of $\ell_1$-recovery.

Definition 1: A vector $x \in \mathbb{R}^m$ is called an $S$-sparse ($S \in [1, m]$) vector if $\|x\|_0 \le S$. If there exists a real number $\delta$ ($0 \le \delta < 1$) satisfying

$$(1 - \delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta)\|x\|_2^2 \quad (3)$$

for any $S$-sparse vector $x \in \mathbb{R}^m$, then we say that $\Phi$ has the RIP of order $S$. If $\Phi$ has the RIP of order $S$, then the smallest constant satisfying (3) is called the RIP constant of $\Phi$, which is denoted by $\delta_S$.

Assume that $\Phi$ has the RIP with small $\delta_S$. In such a case, any sub-matrix composed of $Q$ columns ($1 \le Q \le S$) of $\Phi$ is nearly orthonormal. Recently, Candes [4] reported the relation between the RIP and the $\ell_1$-recovery property. A portion of the main results of [4] is summarized as follows. Let $S \in [1, m]$, and assume that $\Phi$ has the RIP with

$$\delta_{2S} \le \sqrt{2} - 1. \quad (4)$$

For any $S$-sparse vector $e \in \mathbb{R}^m$ (i.e., $\|e\|_0 \le S$), the solution of the following $\ell_1$-minimization problem

$$\text{minimize } \|d\|_1 \text{ subject to } \Phi d = s \quad (5)$$

coincides exactly with $e$, where $s = \Phi e$. Note that [4] considers stronger reconstruction results (i.e., robust reconstruction). The matrix $\Phi$ in (5) is called a sensing matrix.

C. Relation between incoherence and the RIP

The incoherence of $\Phi$ defined below and the RIP constant are closely related.
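For small instances, the RIP constant of Definition 1 can be computed by brute force: $\delta_S$ is the largest deviation of a squared singular value from 1 over all $S$-column submatrices. The following sketch is our own illustration (the helper name `rip_constant` is not from the paper) and is feasible only for tiny $p$ and $m$:

```python
import itertools

import numpy as np

def rip_constant(Phi, S):
    """Brute-force RIP constant of order S (Definition 1): the smallest
    delta satisfying (3) equals the worst deviation of a squared singular
    value from 1 over all S-column submatrices (monotonicity of delta_S
    makes submatrices of exactly S columns sufficient)."""
    _, m = Phi.shape
    delta = 0.0
    for cols in itertools.combinations(range(m), S):
        sv = np.linalg.svd(Phi[:, list(cols)], compute_uv=False)
        delta = max(delta, sv[0] ** 2 - 1.0, 1.0 - sv[-1] ** 2)
    return delta

# Orthonormal columns: every submatrix is an isometry, so delta_S = 0.
delta_good = rip_constant(np.eye(4), 2)
# Two identical unit columns: the 2-column submatrix is rank deficient.
delta_bad = rip_constant(np.array([[1.0, 1.0], [0.0, 0.0]]), 2)
```

For the second example the worst submatrix has squared singular values 2 and 0, so the computed constant is 1; the matrix fails the RIP for any $\delta < 1$.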
Definition 2: The incoherence of $\Phi$ is defined by

$$\mu(\Phi) \triangleq \max_{i, j \in [1, m],\, i \ne j} |\phi_i^T \phi_j|. \quad (6)$$

The following lemma shows the relation between the incoherence and the RIP constant. Similar bounds are well known (e.g., [9]).

Lemma 1: Assume that $\Phi \in \mathbb{R}^{p \times m}$ is given. For any $S \in [1, m]$, $\delta_S$ is upper bounded by

$$\delta_S < \mu(\Phi)\, S. \quad (7)$$

An elementary proof (different from that in [9]) is presented in the Appendix.

III. CONSTRUCTION OF SENSING MATRICES BASED ON BINARY LINEAR CODES

In this section, we present a construction method for sensing matrices based on binary linear codes. A sensing matrix obtained from this construction has a concise description: a sensor can store a generator matrix of a binary linear code instead of the entire sensing matrix.

A. Binary to bipolar conversion function

The function $\beta_p : \mathbb{F}_2^p \to \mathbb{R}^p$, called the binary to bipolar conversion function, is defined by

$$\beta_p : x \in \mathbb{F}_2^p \mapsto \frac{1}{\sqrt{p}}(e - 2x) \in \mathbb{R}^p, \quad (8)$$

where $e$ is the all-one column vector of length $p$. Namely, using the binary to bipolar conversion function, a binary sequence is converted to a $\{+1/\sqrt{p}, -1/\sqrt{p}\}$-sequence. The following lemma demonstrates that the inner product of two bipolar sequences $\beta_p(a)$ and $\beta_p(b)$ is determined by the Hamming distance between the binary sequences $a$ and $b$.

Lemma 2: For any $a, b \in \mathbb{F}_2^p$, the inner product of $\beta_p(a)$ and $\beta_p(b)$ is given by

$$\beta_p(a)^T \beta_p(b) = 1 - \frac{2\, d_h(a, b)}{p}. \quad (9)$$

(Proof) Let $\beta_p(a) = (a_1, \ldots, a_p)^T$ and $\beta_p(b) = (b_1, \ldots, b_p)^T$. Define $Y_1$ and $Y_2$ by

$$Y_1 \triangleq \{i \in [1, p] : a_i = b_i\}, \quad Y_2 \triangleq \{i \in [1, p] : a_i \ne b_i\}, \quad (10)$$

where $|Y_1| = p - d_h(a, b)$ and $|Y_2| = d_h(a, b)$.
Equation (9) is derived as follows:

$$\beta_p(a)^T \beta_p(b) = \sum_{i=1}^{p} a_i b_i = \sum_{i \in Y_1} a_i b_i + \sum_{i \in Y_2} a_i b_i = \sum_{i \in Y_1} \frac{1}{p} + \sum_{i \in Y_2} \left(-\frac{1}{p}\right) = (p - d_h(a, b))\,\frac{1}{p} + d_h(a, b)\left(-\frac{1}{p}\right) = 1 - \frac{2\, d_h(a, b)}{p}. \quad (11)$$

It is easy to confirm that $\beta_p(a)$ is normalized, i.e., $\|\beta_p(a)\|_2 = 1$, for any $a \in \mathbb{F}_2^p$.

B. Construction of the sensing matrix

Let $H \in \mathbb{F}_2^{r \times p}$ ($p > r$) be a binary $r \times p$ parity check matrix where $2^{p-r} \ge p$ holds. The binary linear code $C(H)$ defined by $H$ is given by

$$C(H) \triangleq \{x \in \mathbb{F}_2^p : Hx = 0^r\}, \quad (12)$$

where $0^r$ is the zero column vector of length $r$. The following definition gives the construction of sensing matrices.

Definition 3: Assume that all of the nonzero codewords of $C(H)$ are denoted by $c_1, c_2, \ldots, c_M$ (based on any predefined order), where $M = 2^{p - \mathrm{rank}(H)} - 1 \ge 2^{p-r} - 1$. The sensing matrix $\Phi(H) \in \mathbb{R}^{p \times m}$ is defined by

$$\Phi(H) \triangleq (\beta_p(c_1), \beta_p(c_2), \ldots, \beta_p(c_m)), \quad (13)$$

where $m = 2^{p-r} - 1$. If $\Phi(H)$ has the RIP of order $S$, the RIP constant corresponding to $\Phi(H)$ is denoted by $\delta_S(H)$. Since the order of the columns is unimportant, we do not distinguish between sensing matrices of different column order (or choice of codewords from $C(H)$).

If the weights of all nonzero codewords of $C(H)$ are very close to $p/2$, then the incoherence of $\Phi(H)$ becomes small, as described in detail in the following lemma.

Lemma 3: Assume that $\epsilon$ ($0 < \epsilon < 1$) is given and that

$$\frac{1-\epsilon}{2}\, p \le w_h(c) \le \frac{1+\epsilon}{2}\, p \quad (14)$$

holds for any $c \in C(H) \setminus \{0^p\}$. In such a case, the incoherence of $\Phi(H)$ is upper bounded by

$$\mu(\Phi(H)) \le \epsilon. \quad (15)$$

(Proof) For any pair of codewords $a, b \in C(H)$ ($a \ne b$), the Hamming weight of $a + b$ is in the range

$$\frac{1-\epsilon}{2}\, p \le w_h(a + b) \le \frac{1+\epsilon}{2}\, p \quad (16)$$

due to the linearity of $C(H)$, since $a + b$ is itself a nonzero codeword. This means that

$$\frac{1-\epsilon}{2}\, p \le d_h(a, b) \le \frac{1+\epsilon}{2}\, p \quad (17)$$

holds for any $a, b \in C(H)$ ($a \ne b$).
Using Lemma 2, we immediately obtain

$$\forall i, j \in [1, m]\ (i \ne j), \quad -\epsilon \le \beta_p(c_i)^T \beta_p(c_j) \le \epsilon, \quad (18)$$

where

$$\Phi(H) = (\beta_p(c_1), \beta_p(c_2), \ldots, \beta_p(c_m)). \quad (19)$$

The definition of incoherence and the above inequalities lead to an upper bound on the incoherence:

$$\mu(\Phi(H)) \le \epsilon. \quad (20)$$

C. Analysis based on ensemble average of weight distribution

We here consider binary linear codes whose weight distribution is tightly concentrated around the Hamming weight $p/2$. Before starting the analysis, we introduce the weight distribution $\{A_w(H)\}_{w \in [1, p]}$, which is defined by

$$A_w(H) \triangleq |\{c \in C(H) : w_h(c) = w\}|. \quad (21)$$

In the present paper, we consider an ensemble of binary parity check matrices, which is referred to herein as the random ensemble. The random ensemble $\mathcal{R}_{r,p}$ contains all binary $r \times p$ matrices. Equal probability $P(H) = 1/2^{rp}$ is assigned to each matrix in $\mathcal{R}_{r,p}$. Let $f$ be a real-valued function defined on $\mathcal{R}_{r,p}$, which can be considered as a random variable defined over the ensemble $\mathcal{R}_{r,p}$. The expectation of $f$ with respect to the ensemble $\mathcal{R}_{r,p}$ is defined by

$$E_{\mathcal{R}_{r,p}}[f] \triangleq \sum_{H \in \mathcal{R}_{r,p}} P(H) f(H). \quad (22)$$

The expectation of the weight distribution with respect to the random ensemble has been reported [8] to be

$$E_{\mathcal{R}_{r,p}}[A_w(H)] = \binom{p}{w} 2^{-r}. \quad (23)$$

In the following, a combination of the average weight distribution and the Markov inequality is used to show that the RIP holds for $\Phi(H)$ with overwhelmingly high probability.

Lemma 4: Assume that we draw a parity check matrix from $\mathcal{R}_{r,p}$. The probability of selecting $H$ that satisfies $\mu(\Phi(H)) \le \epsilon$ is lower bounded by

$$1 - 2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w}. \quad (24)$$

(Proof) Let us define $K_\epsilon(H)$ as

$$K_\epsilon(H) \triangleq \sum_{w=1}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} A_w(H) + \sum_{w=\lceil \frac{1+\epsilon}{2} p \rceil}^{p} A_w(H) \quad (25)$$

for $H \in \mathcal{R}_{r,p}$.
The condition $K_\epsilon(H) = 0$ implies that

$$\frac{1-\epsilon}{2}\, p \le w_h(c) \le \frac{1+\epsilon}{2}\, p \quad (26)$$

for any $c \in C(H) \setminus \{0^p\}$. Namely, if $K_\epsilon(H) = 0$ holds, then $\mu(\Phi(H))$ is proven to be smaller than or equal to $\epsilon$ by Lemma 3. Next, we evaluate the ensemble expectation of $K_\epsilon(H)$:

$$E_{\mathcal{R}_{r,p}}[K_\epsilon(H)] = \sum_{w=1}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} E_{\mathcal{R}_{r,p}}[A_w(H)] + \sum_{w=\lceil \frac{1+\epsilon}{2} p \rceil}^{p} E_{\mathcal{R}_{r,p}}[A_w(H)] = \sum_{w=1}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} 2^{-r} \binom{p}{w} + \sum_{w=\lceil \frac{1+\epsilon}{2} p \rceil}^{p} 2^{-r} \binom{p}{w} < 2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w}. \quad (27)$$

The final inequality is due to the following identity on the binomial coefficients:

$$\forall w \in [0, p], \quad \binom{p}{w} = \binom{p}{p-w}. \quad (28)$$

Using the Markov inequality, we obtain the following upper bound on the probability of the event $K_\epsilon(H) \ge 1$:

$$\mathrm{Prob}[K_\epsilon(H) \ge 1] \le E_{\mathcal{R}_{r,p}}[K_\epsilon(H)] < 2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w}. \quad (29)$$

Since $K_\epsilon(H)$ takes a non-negative integer value, we have

$$\mathrm{Prob}[K_\epsilon(H) = 0] > 1 - 2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w}. \quad (30)$$

This completes the proof.

The following theorem is the main contribution of the present paper.

Theorem 1: Assume that $H$ is chosen randomly according to the probability assignment of $\mathcal{R}_{r,p}$. If

$$S < Z \sqrt{\frac{p}{\log_2 m}} \quad (31)$$

holds, then $\delta_{2S}(H) < \sqrt{2} - 1$ holds with probability greater than

$$1 - 2^{1-p+r}, \quad (32)$$

where $m = 2^{p-r} - 1$. The constant $Z$ is given by

$$Z \triangleq \frac{\sqrt{2} - 1}{2\sqrt{6}}. \quad (33)$$

(Proof) A simpler upper bound on

$$2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w} \quad (34)$$

is required. Using the inequality on binomial coefficients [6]

$$\binom{p}{w} \le 2^{p H(w/p)}, \quad (35)$$

we have

$$2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w} \le 2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} 2^{p H(w/p)} < 2^{1-r} \cdot p \cdot 2^{p H(\frac{1-\epsilon}{2})} = 2^{1 - r + \log_2 p + p H(\frac{1-\epsilon}{2})}, \quad (36)$$

where $H(x)$ is the binary entropy function defined by

$$H(x) \triangleq -x \log_2 x - (1 - x) \log_2 (1 - x). \quad (37)$$
In order to consider the exponent of an upper bound, we take the logarithm of (34) and obtain an upper bound of the exponent:

$$\log_2 \left[ 2^{1-r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w} \right] < 1 + \log_2 (m+1) - p + \log_2 p + p H\!\left(\frac{1-\epsilon}{2}\right) \quad (38)$$

$$< 1 + 2 \log_2 (m+1) - \frac{1}{2}\, p \epsilon^2. \quad (39)$$

In the above derivation, we used the relation

$$r = p - \log_2 (m+1) \quad (40)$$

and the assumption $2^{p-r} \ge p$, which gives $\log_2 p \le \log_2 (m+1)$. A quadratic upper bound on the binary entropy function (Lemma 6 in the Appendix) was also exploited to bound the entropy term. Letting

$$\epsilon \triangleq \sqrt{\frac{6 \log_2 (m+1)}{p}}, \quad (41)$$

we have

$$1 + 2 \log_2 (m+1) - \frac{1}{2}\, p \epsilon^2 = 1 - \log_2 (m+1) = 1 - p + r. \quad (42)$$

Lemma 1 and Lemma 4 imply that, in this case, $\delta_S(H) < \epsilon S$ holds with probability greater than $1 - 2^{1-p+r}$. Due to Lemma 1, the $\ell_1$-recovery condition (4) can be written as

$$\delta_{2S} < 2 \sqrt{\frac{6 \log_2 (m+1)}{p}}\, S < \sqrt{2} - 1. \quad (43)$$

From this inequality, we have

$$S < Z \sqrt{\frac{p}{\log_2 (m+1)}} < Z \sqrt{\frac{p}{\log_2 m}}, \quad (44)$$

which proves the claim of the theorem.

D. Asymptotic analysis

In this subsection, the asymptotic properties of the proposed construction are given.

Lemma 5: Assume that we draw a parity check matrix from $\mathcal{R}_{r,p}$. The probability of selecting $H$ that satisfies $\mu(\Phi(H)) \le \epsilon$ is upper bounded by

$$\frac{(1 - 2^{-r})\, 2^{1+r} \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w}}{\left( 2 \sum_{w=0}^{\lfloor \frac{1-\epsilon}{2} p \rfloor} \binom{p}{w} - 1 \right)^2}. \quad (45)$$

(Proof) Here, we use a variant of Chebyshev's inequality [1]:

$$\mathrm{Prob}[K_\epsilon(H) = 0] \le \frac{\mathrm{VAR}_{\mathcal{R}_{r,p}}(K_\epsilon(H))}{E_{\mathcal{R}_{r,p}}[K_\epsilon(H)]^2}, \quad (46)$$

where $\mathrm{VAR}_{\mathcal{R}_{r,p}}(\cdot)$ denotes the variance with respect to $\mathcal{R}_{r,p}$. The variance $\mathrm{VAR}_{\mathcal{R}_{r,p}}(K_\epsilon(H))$ is given by

$$\mathrm{VAR}_{\mathcal{R}_{r,p}}(K_\epsilon(H)) = \sum_{w_1=1}^{A} \sum_{w_2=1}^{A} \mathrm{Cov}(w_1, w_2) + \sum_{w_1=1}^{A} \sum_{w_2=B}^{p} \mathrm{Cov}(w_1, w_2) + \sum_{w_1=B}^{p} \sum_{w_2=1}^{A} \mathrm{Cov}(w_1, w_2) + \sum_{w_1=B}^{p} \sum_{w_2=B}^{p} \mathrm{Cov}(w_1, w_2), \quad (47)$$

where $A = \lfloor (1-\epsilon) p / 2 \rfloor$ and $B = \lceil (1+\epsilon) p / 2 \rceil$.
The covariance of weight distributions, denoted by $\mathrm{Cov}(w_1, w_2)$, is defined as follows:

$$\mathrm{Cov}(w_1, w_2) \triangleq E_{\mathcal{R}_{r,p}}[A_{w_1}(H) A_{w_2}(H)] - E_{\mathcal{R}_{r,p}}[A_{w_1}(H)]\, E_{\mathcal{R}_{r,p}}[A_{w_2}(H)] \quad (48)$$

for $w_1, w_2 \in [1, p]$. The covariance for the random ensemble has the following closed formula [10]:

$$\mathrm{Cov}(w_1, w_2) = \begin{cases} (1 - 2^{-r})\, 2^{-r} \binom{p}{w}, & w_1 = w_2 = w \\ 0, & w_1 \ne w_2 \end{cases} \quad (49)$$

for $w_1, w_2 \in [1, p]$. Applying the covariance formula to (47), we have

$$\mathrm{VAR}_{\mathcal{R}_{r,p}}(K_\epsilon(H)) = (1 - 2^{-r})\, 2^{-r} \left( \sum_{w=1}^{A} \binom{p}{w} + \sum_{w=B}^{p} \binom{p}{w} \right) < (1 - 2^{-r})\, 2^{1-r} \sum_{w=0}^{A} \binom{p}{w}. \quad (50)$$

Plugging the expectation of $K_\epsilon(H)$,

$$E_{\mathcal{R}_{r,p}}[K_\epsilon(H)] = 2^{-r} \left( \sum_{w=1}^{A} \binom{p}{w} + \sum_{w=B}^{p} \binom{p}{w} \right) = 2^{-r} \left( 2 \sum_{w=0}^{A} \binom{p}{w} - 1 \right), \quad (51)$$

and the upper bound on the variance (50) into (46) proves the lemma.

The asymptotic behavior of $\mathrm{Prob}[K_\epsilon(H) = 0]$ and $\mathrm{Prob}[K_\epsilon(H) \ne 0]$ is summarized in the following theorem.

Theorem 2: Assume that $\alpha = r/p$ is fixed ($0 < \alpha < 1$). Let

$$f_1(\epsilon, \alpha) \triangleq \lim_{p \to \infty} \frac{1}{p} \log_2 \mathrm{Prob}[K_\epsilon(H) = 0] \quad (52)$$

$$f_2(\epsilon, \alpha) \triangleq \lim_{p \to \infty} \frac{1}{p} \log_2 \mathrm{Prob}[K_\epsilon(H) \ne 0]. \quad (53)$$

The following inequalities give upper bounds on $f_1(\epsilon, \alpha)$ and $f_2(\epsilon, \alpha)$, respectively:

$$f_1(\epsilon, \alpha) < \alpha - H\!\left(\frac{1-\epsilon}{2}\right), \quad (54)$$

$$f_2(\epsilon, \alpha) < -\alpha + H\!\left(\frac{1-\epsilon}{2}\right). \quad (55)$$

(Proof) We first discuss (54). Let

$$X \triangleq \sum_{w=0}^{\lfloor (1-\epsilon) p / 2 \rfloor} \binom{p}{w}. \quad (56)$$

Using the inequality on binomial coefficients

$$\binom{p}{w} \ge \frac{1}{(p+1)^2}\, 2^{p H(w/p)}, \quad (57)$$

$X$ can be bounded from below:

$$X > \binom{p}{\lfloor (1-\epsilon) p / 2 \rfloor} \ge \frac{1}{(p+1)^2}\, 2^{p H((1-\epsilon)/2 - 1/p)}. \quad (58)$$

The inequality (45) can be simplified as

$$\frac{(1 - 2^{-\alpha p})\, 2^{1+\alpha p}\, X}{(2X - 1)^2} < \frac{2^{1+\alpha p}}{X} \quad (59)$$

for sufficiently large $X$. The right-hand side of the above inequality can be bounded from above using (58):

$$\frac{2^{1+\alpha p}}{X} \le (p+1)^2\, 2^{1 + \alpha p - p H((1-\epsilon)/2 - 1/p)}. \quad (60)$$
We are now able to derive the inequality given in (54) as follows:

$$\lim_{p \to \infty} \frac{1}{p} \log_2 \left[ (p+1)^2\, 2^{1 + \alpha p - p H((1-\epsilon)/2 - 1/p)} \right] = \alpha - H\!\left(\frac{1-\epsilon}{2}\right). \quad (61)$$

The inequality given in (55) is readily obtained from (38).

Theorem 2 implies a sharp threshold behavior in the asymptotic regime. Let $\alpha^*(\epsilon)$ be

$$\alpha^*(\epsilon) \triangleq H\!\left(\frac{1-\epsilon}{2}\right), \quad (62)$$

which is referred to as the critical exponent. If $\alpha < \alpha^*(\epsilon)$, (54) means that the probability of drawing an $r \times p$ matrix with $\mu(\Phi(H)) \le \epsilon$ decreases exponentially as $p$ goes to infinity. On the other hand, (55) indicates that the probability of not selecting a matrix with $\mu(\Phi(H)) \le \epsilon$ decreases exponentially if $\alpha > \alpha^*(\epsilon)$.

IV. CONCLUDING REMARKS

In the present paper, a construction of a bipolar sensing matrix is introduced and its RIP is analyzed. The existence of sensing matrices with the RIP has been shown based on a probabilistic argument. An advantage of this type of sensing matrix is its compactness. A sensor requires $O(pm)$ bits in order to store a truly random $p \times m$ bipolar matrix. On the other hand, we need only $O(p \log_2 m)$ bits to store $\Phi(H)$ because we can use a generator matrix of $C(H)$ as a compact representation of $C(H)$. However, this limited randomness of matrices results in a penalty on the RIP constant. Although the present construction is based on a probabilistic construction, the results shown in Theorem 1 are weaker than the $\ell_1$-recovery condition $O(S \log_e (m/S)) < p$ for the truly random $p \times m$ bipolar matrix ensemble shown in [5]. The condition shown in Theorem 1 can be written as $O(S \sqrt{\log_2 m}) < \sqrt{p}$ and is more similar to the conditions of deterministic constructions, such as that given in [7]. Lemma 3 may be useful for evaluating the goodness of a randomly generated instance.
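To make the condition of Theorem 1 and the threshold of Theorem 2 concrete, the constant $Z$, the largest admissible sparsity $S$ for a given $(p, r)$, and the critical exponent $\alpha^*(\epsilon)$ can be evaluated numerically. The sketch below is our own illustration (the function names are not from the paper):

```python
import math

# Constant Z of Theorem 1: (sqrt(2) - 1) / (2 * sqrt(6)), roughly 0.0846.
Z = (math.sqrt(2) - 1) / (2 * math.sqrt(6))

def max_sparsity(p, r):
    """Largest integer S with S < Z * sqrt(p / log2 m), m = 2**(p - r) - 1,
    i.e., the sparsity guaranteed by Theorem 1 with probability
    greater than 1 - 2**(1 - p + r)."""
    m = 2 ** (p - r) - 1
    return math.ceil(Z * math.sqrt(p / math.log2(m))) - 1

def binary_entropy(x):
    """Binary entropy function H(x) of (37)."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def critical_exponent(eps):
    """alpha*(eps) = H((1 - eps)/2) of (62): rates alpha = r/p above this
    threshold yield mu(Phi(H)) <= eps with probability approaching one."""
    return binary_entropy((1 - eps) / 2)
```

For example, $p = 10^6$ and $r = p - 20$ (so $m = 2^{20} - 1$) admit $S$ only up to 18, illustrating how conservative the guarantee of Theorem 1 is at moderate block lengths.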
The weight distribution of $C(H)$ can be evaluated with time complexity $O(mp)$, and an upper bound on the RIP constant can be obtained using Lemma 3.

APPENDIX

Lemma 6: The following inequality

$$-2 \left( x - \frac{1}{2} \right)^2 \ge H(x) - 1 \quad (63)$$

holds for $0 < x < 1$.¹

(Proof) Let $f(x)$ be

$$f(x) \triangleq -2 (x - 1/2)^2 - (H(x) - 1), \quad (64)$$

the domain of which is $0 < x < 1$. The first and second derivatives of $f(x)$ are given by

$$f'(x) = -4 (x - 1/2) - \log_2 (1 - x) + \log_2 x \quad (65)$$

and

$$f''(x) = -4 + \left( \frac{1}{1 - x} + \frac{1}{x} \right) \frac{1}{\log_e 2}, \quad (66)$$

respectively. It is easy to verify that $f''(x) > 0$ for $0 < x < 1$, which indicates that $f(x)$ is convex. Thus, we can obtain the global minimum of $f(x)$ by solving $f'(x) = 0$, and we have $f'(1/2) = 0$ and $f(1/2) = 0$. Since the global minimum of $f$ is zero, $f(x) \ge 0$ holds on $0 < x < 1$, which proves the lemma.

¹ This bound becomes tighter as $x$ approaches $1/2$.

Proof of Lemma 1: Let $Q$ be an index set satisfying $Q \subset \{1, \ldots, m\}$, $|Q| \le S$. For any $c = (c_i)_{i \in Q} \in \mathbb{R}^{|Q|}$, we have

$$\|\Phi_Q c\|_2^2 = (\Phi_Q c)^T (\Phi_Q c) = \left( \sum_{i \in Q} c_i \phi_i \right)^T \left( \sum_{j \in Q} c_j \phi_j \right) = \sum_{i \in Q} \sum_{j \in Q} c_i c_j \phi_i^T \phi_j = \sum_{i \in Q} c_i^2 + \sum_{i, j \in Q\ (i \ne j)} c_i c_j \phi_i^T \phi_j \le \sum_{i \in Q} c_i^2 + \sum_{i, j \in Q\ (i \ne j)} |c_i c_j \phi_i^T \phi_j| \le \sum_{i \in Q} c_i^2 + \mu(\Phi) \sum_{i, j \in Q\ (i \ne j)} |c_i c_j|, \quad (67)$$

where $\Phi_Q$ is the sub-matrix of $\Phi$ composed of the columns corresponding to the index set $Q$. For any $a, b \in \mathbb{R}$,

$$(a^2 + b^2)/2 \ge |ab| \quad (68)$$

holds since $(|a| - |b|)^2 = a^2 + b^2 - 2|ab| \ge 0$. We use this inequality to bound $|c_i c_j|$ in (67) and obtain

$$\|\Phi_Q c\|_2^2 \le \sum_{i \in Q} c_i^2 + \mu(\Phi) \sum_{i, j \in Q\ (i \ne j)} |c_i c_j| < \sum_{i \in Q} c_i^2 + \mu(\Phi) \sum_{i, j \in Q} \frac{c_i^2 + c_j^2}{2} = \sum_{i \in Q} c_i^2 + \mu(\Phi) |Q| \sum_{i \in Q} c_i^2 = \|c\|_2^2 (1 + \mu(\Phi) |Q|) \le \|c\|_2^2 (1 + \mu(\Phi) S).$$

Similarly, $\|\Phi_Q c\|_2^2$ can be lower bounded by $\|\Phi_Q c\|_2^2 \ge \|c\|_2^2 (1 - \mu(\Phi) S)$. From the definition of $\delta_S$, the lemma is proven.
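The construction of Definition 3 and the incoherence bound of Lemma 3 can be exercised end to end on a toy code. The sketch below is our own illustration (the parity-check matrix and helper names are arbitrary choices, not from the paper); brute-force codeword enumeration limits it to very small p:

```python
import itertools

import numpy as np

def sensing_matrix(H):
    """Phi(H) of Definition 3: columns are the bipolar images beta_p(c),
    per (8), of the nonzero codewords of C(H), enumerated by brute force."""
    _, p = H.shape
    cols = []
    for bits in itertools.product([0, 1], repeat=p):
        x = np.array(bits)
        if x.any() and not ((H @ x) % 2).any():
            cols.append((1.0 - 2.0 * x) / np.sqrt(p))
    return np.column_stack(cols)

def incoherence(Phi):
    """mu(Phi) of Definition 2: largest off-diagonal entry of |Phi^T Phi|."""
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

# Toy r = 2, p = 6 parity-check matrix: two disjoint single parity checks.
H = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1]])
Phi = sensing_matrix(H)   # shape (6, 2**(6-2) - 1) = (6, 15), since rank(H) = 2
mu = incoherence(Phi)     # 1/3: every pairwise Hamming distance is 2 or 4
```

Here every nonzero codeword has weight 2 or 4, so (14) holds with epsilon = 1/3 and Lemma 3 gives mu(Phi(H)) <= 1/3, attained with equality; combined with Lemma 1, delta_S(H) < S/3.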
ACKNOWLEDGMENT

The author would like to thank the anonymous reviewers of IEEE Information Theory Workshop 2009 for their constructive comments. The present study was supported in part by the Ministry of Education, Science, Sports and Culture of Japan through a Grant-in-Aid for Scientific Research on Priority Areas (Deepening and Expansion of Statistical Informatics) 180790091.

REFERENCES

[1] N. Alon and J. H. Spencer, The Probabilistic Method, 2nd ed. John Wiley & Sons, 2000.
[2] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[3] E. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[4] E. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus de l'Academie des Sciences, Paris, Serie I, pp. 589–592, 2008.
[5] E. Candes and T. Tao, "Near optimal signal recovery from random projections: universal encoding strategies?," IEEE Trans. on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[6] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Wiley-Interscience, 2006.
[7] R. A. DeVore, "Deterministic constructions of compressed sensing matrices," J. of Complexity, vol. 23, pp. 918–925, 2007.
[8] R. G. Gallager, Low Density Parity Check Codes. Cambridge, MA: MIT Press, 1963.
[9] H. Rauhut, K. Schnass, and P. Vandergheynst, "Compressed sensing and redundant dictionaries," IEEE Trans. on Information Theory, vol. 54, no. 5, pp. 2210–2219, 2008.
[10] T. Wadayama, "On undetected error probability of binary matrix ensembles," in Proceedings of IEEE International Symposium on Information Theory (ISIT 2008), Toronto, 2008 (related preprint: arXiv:0705.3995).