Sparse Recovery with Graph Constraints: Fundamental Limits and Measurement Construction
Authors: Meng Wang, Weiyu Xu, Enrique Mallada, Ao Tang
Meng Wang, Weiyu Xu, Enrique Mallada, Ao Tang
School of ECE, Cornell University, Ithaca, NY 14853, USA

Abstract—This paper addresses the problem of sparse recovery with graph constraints, in the sense that we can take additive measurements over nodes only if they induce a connected subgraph. We provide explicit measurement constructions for several special graphs. A general measurement construction algorithm is also proposed and evaluated. For any given graph G with n nodes, we derive order-optimal upper bounds on the minimum number of measurements needed to recover any k-sparse vector over G (M^G_{k,n}). Our study suggests that M^G_{k,n} may serve as a graph connectivity metric.

I. INTRODUCTION

Network monitoring is an important module in the operation and management of communication networks, where network performance characteristics, such as traffic transmission rates and router queueing delays, should be monitored. Since monitoring each object in the network directly can be operationally difficult or even infeasible, the topic of inferring internal characteristics from indirect end-to-end (aggregate) measurements, known as Network Tomography, has been widely explored recently [7], [10], [12], [20], [22], [25], [33]. In practice, the total number of aggregate measurements we can take is small compared with the size of the network. However, we can indeed extract the most dominating elements of a high-dimensional signal from low-dimensional non-adaptive measurements. When the signal itself is sparse, i.e., most entries are zero, the recovered signal can be exact even though the number of measurements is much smaller than the dimension of the signal.
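To make this recovery principle concrete, here is a minimal sketch of sparse recovery by ℓ1-minimization cast as a linear program. The LP reformulation (x = u − v with u, v ≥ 0) is a standard technique, not something taken from this paper, and the tiny matrix and signal below are purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Recover x from y = A x by l1-minimization.

    Standard LP reformulation: write x = u - v with u, v >= 0 and
    minimize sum(u) + sum(v) subject to A u - A v = y.
    """
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum of u plus sum of v
    A_eq = np.hstack([A, -A])                # encodes A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Illustrative example: 2 aggregate measurements of a 1-sparse
# 3-dimensional signal; one can check by hand that the l1 minimizer
# is unique here, so recovery is exact.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([0.0, 3.0, 0.0])
x_hat = l1_recover(A, A @ x_true)
```

Even though the system is underdetermined (2 equations, 3 unknowns), the sparse signal is the unique minimizer of the ℓ1-norm among all vectors consistent with the measurements.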
One practical example: only a small number of bottleneck links in a communication network experience large delays.

Sparse recovery addresses the problem of recovering sparse high-dimensional signals from low-dimensional measurements, and has two different but closely related problem formulations. One is Compressed Sensing [4], [8], [9], [16], [17], [21], where the signal is represented by a high-dimensional real vector, and an aggregate measurement is the arithmetic sum of the corresponding real entries. The other is Group Testing [18], [19], where the high-dimensional vector is logical, and a measurement is a logical disjunction (OR) of the corresponding logical values.

One key question in both compressed sensing and group testing is how to design a small number of non-adaptive measurements (either real or logical) such that all vectors (either real or logical) up to a certain sparsity (the support size of a vector) can be correctly recovered. Most existing results, however, rely critically on the assumption that any subset of the values can be aggregated together [8], [16], which is not realistic in the network monitoring problem. Here, only objects that form a path or a cycle on the graph [22], or that induce a connected subgraph, can be combined in the same measurement. Only a few recent works consider graph topological constraints, in compressed sensing [13], [21], [24], [31], [32] and in group testing [2], [11], [23], [26], [29].

Though motivated by the network monitoring application, our results apply beyond networks. Indeed, this formulation abstractly models systems in which certain elements cannot be measured together, so our work can be useful in other applications besides network tomography.

Here are the main contributions of this paper.

(1) We provide explicit measurement constructions for different graphs.
Moreover, the number of our measurements improves over the existing estimates (e.g., [11], [31]) of the minimum number of measurements required to recover sparse vectors over graphs. (Section III)

(2) We propose a design guideline based on r-partition for general graphs and further show some of its properties. (Section IV-A)

(3) A simple measurement design algorithm is proposed for general graphs (Section IV-B). We evaluate its performance both theoretically and numerically. (Section V)

We now start with Section II, which introduces the model and problem formulation.

II. MODEL AND PROBLEM FORMULATION

Consider a graph G = (V, E), where V denotes the set of nodes with cardinality |V| = n and E denotes the set of links. Each node i is associated with a real number x_i, and we say the vector x = (x_i, i = 1, ..., n) is associated with G. Let T = {i | x_i ≠ 0} denote the support of x, and let ‖x‖₀ = |T| denote the number of non-zero entries of x; we say x is a k-sparse vector if ‖x‖₀ = k. Let S ⊆ V denote a subset of nodes in G, and let E_S denote the subset of links with both ends in S; then G_S = (S, E_S) is the subgraph of G induced by S. We make the following two assumptions throughout the paper:

(A1): A set S of nodes can be measured together in one measurement if and only if G_S is connected.

(A2): The measurement is the additive sum of the values at the corresponding nodes.

(A1) captures the graph constraints. One practical example is a sensor network, where the nodes represent sensors and the links represent feasible communication between sensors. For a set S of nodes that induces a connected subgraph, one node u in S monitors the total value corresponding to the nodes in S: every node in S obtains values from its children, if any, on a spanning tree rooted at u, aggregates them with its own value, and sends the sum to its parent.
Then the fusion center can obtain the sum of the values corresponding to all the nodes in S by communicating only with u. (A2) follows from the additive property of many network characteristics, e.g., delays and packet loss rates [22]. However, compressed sensing can also be applied in cases where (A2) does not hold, e.g., when the measurements are nonlinear, as in [5], [27].

Let y ∈ R^m (m ≪ n) denote the vector of m measurements. Let A be an m × n measurement matrix with A_ij = 1 (i = 1, ..., m, j = 1, ..., n) if and only if node j is included in the ith measurement, and A_ij = 0 otherwise. Then we have y = Ax. We say A can identify all k-sparse vectors if and only if Ax₁ ≠ Ax₂ for every two different vectors x₁ and x₂ that are each at most k-sparse. The advantage of sparse recovery is that, with a non-adaptive measurement matrix A, it can identify n-dimensional vectors from m (m ≪ n) measurements as long as the vectors are sparse.

[Fig. 1. Network Example: eight nodes with two measured sets S₁ and S₂.]

With the above assumptions, A is a 0-1 matrix, and for each row of A, the set of nodes corresponding to '1' must form a connected induced subgraph of G. In Fig. 1, we can measure the nodes in S₁ and S₂ separately, and the measurement matrix is

A = [ 1 1 1 0 1 1 0 0 ;
      0 0 1 1 0 0 1 1 ].

We remark that in group testing with graph constraints, the requirements on the measurement matrix A are the same; group testing differs from compressed sensing only in that (1) x is a logical vector, and (2) the operation used in each group testing measurement is the logical "OR". All arguments and results in this paper are in the compressed sensing setup unless otherwise specified, and we also compare our results with group testing for special networks. Note that for recovering 1-sparse vectors, the numbers of measurements required by compressed sensing and group testing are the same.
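Assumption (A1) is mechanical to check for a candidate matrix: the support of each row must induce a connected subgraph. A minimal sketch (the six-node ring and the matrix below are our own toy example, not the graph of Fig. 1; function names are ours):

```python
from collections import deque

def induces_connected(adj, S):
    """Check (A1): does the node set S induce a connected subgraph of adj?"""
    S = set(S)
    if not S:
        return False
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:                      # BFS restricted to nodes inside S
        u = queue.popleft()
        for v in adj[u]:
            if v in S and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == S

def feasible_rows(adj, A):
    """Row i of a 0-1 matrix A is a feasible measurement iff its support
    induces a connected subgraph (assumption (A1))."""
    return [induces_connected(adj, [j for j, a in enumerate(row) if a])
            for row in A]

# Toy check on a 6-node ring:
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
A = [[1, 1, 1, 0, 0, 0],   # nodes {0,1,2}: consecutive on the ring -> feasible
     [1, 0, 1, 0, 0, 0]]   # nodes {0,2}: not adjacent on the ring -> infeasible
```

On this instance `feasible_rows(ring, A)` returns `[True, False]`: the first window of consecutive nodes is a valid measurement, while the disconnected pair is not.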
Given a graph G with n nodes, let M^G_{k,n} denote the minimum number of non-adaptive measurements needed to identify all k-sparse vectors associated with G. Let M^C_{k,n} denote the minimum number of non-adaptive measurements needed for a complete graph with n nodes. In complete graphs, since any subset of nodes can be measured together, any 0-1 matrix is a feasible measurement matrix.

TABLE I. SUMMARY OF KEY NOTATION
  G_S        — Subgraph of G induced by S.
  M^G_{k,n}  — Minimum number of measurements needed to recover k-sparse vectors associated with G of n nodes.
  M^C_{k,n}  — Minimum number of measurements needed to recover k-sparse vectors associated with a complete graph of n nodes.
  f(k, n)    — Number of measurements constructed to recover k-sparse vectors associated with a complete graph of n nodes by a particular construction method.

Existing results [4], [9], [30] show that, with overwhelming probability, a random 0-1 matrix with O(k log(n/k)) rows¹ can identify all k-sparse vectors, and we can recover the sparse vector by ℓ1-minimization, which returns the vector with the least ℓ1-norm² among those that can produce the obtained measurements. Then we have

  M^C_{k,n} = O(k log(n/k)).  (1)

We will use (1) in the analysis of our construction methods. Explicit constructions of measurement matrices for complete graphs also exist, e.g., [1], [4], [14], [15], [30]. We will use f(k, n) to denote the number of measurements needed to recover k-sparse vectors associated with the complete graph of n nodes by a particular measurement construction method; f(k, n) varies across construction methods. The key notation is summarized in Table I.

The questions we would like to address in this paper are:
• Given a graph G, what is the corresponding M^G_{k,n}?
• How can we explicitly design measurements such that the total number of measurements is close to M^G_{k,n}?

III.
SPARSE RECOVERY OVER SPECIAL GRAPHS

In this section, we consider four kinds of special graphs: the one-dimensional line/ring network, the ring with each node connected to its four closest neighbors, the two-dimensional grid, and the tree. We construct measurements for each graph and later generalize the construction ideas obtained here to general graphs in Section IV.

A. Line and Ring

First consider the one-dimensional line/ring network shown in Fig. 2. Comparing the results here with those in Section III-B, one can see that the number of measurements required to recover sparse vectors can be significantly different in two graphs that differ only by a small number of links.

In a line/ring network, there is not much freedom in the measurement design, since by assumption (A1) only consecutive nodes can be measured together. In fact, [23], [26] show that ⌈(n+1)/2⌉ (or ⌈n/2⌉) measurements are both necessary and sufficient to recover 1-sparse vectors associated with a line (or ring) network with n nodes. Therefore, Θ(n) measurements are required to recover even one non-zero element associated with a line/ring network.

¹ We use the notation g(n) ∈ O(h(n)), g(n) ∈ Ω(h(n)), or g(n) = Θ(h(n)) if, as n goes to infinity, g(n) ≤ c·h(n), g(n) ≥ c·h(n), or c₁h(n) ≤ g(n) ≤ c₂h(n) eventually holds for some positive constants c, c₁ and c₂, respectively.
² The ℓp-norm (p ≥ 1) of x is ‖x‖p = (Σ_i |x_i|^p)^{1/p}, and ‖x‖∞ = max_i |x_i|.

[Fig. 2. (a) line network; (b) ring network.]

We next construct k⌈n/(k+1)⌉ + 1 measurements to recover k-sparse vectors (k ≥ 2) associated with the line/ring network. Let t = ⌈n/(k+1)⌉. For every 1 ≤ i ≤ kt + 1, the ith measurement covers all the nodes from i to min(i + t − 1, n).

Theorem 1.
The k⌈n/(k+1)⌉ + 1 measurements above are sufficient to identify all k-sparse vectors associated with a line/ring network with n nodes.

Proof: Consider the matrix A of size (tk+1) × (tk+t) whose ith row has '1's from entry i to entry i + t − 1 and '0's elsewhere, for all 1 ≤ i ≤ tk + 1. Then the first n columns of A correspond to our measurement matrix. To prove the statement, we only need to show that A can identify all k-sparse vectors in R^{tk+t}, which happens if and only if every non-zero vector z such that Az = 0 has at least 2k + 1 non-zero elements [8].

For each index 1 ≤ k′ ≤ k, define a submatrix A_{k′} consisting of the first tk′ + 1 rows and the first tk′ + t columns of A. We claim that every non-zero vector w such that A_{k′}w = 0 has at least 2k′ + 1 non-zero elements, with at least two non-zero elements in its last t entries. We prove this claim by induction over k′.

First consider A₁. Note that its first row has '1's from column 1 to t, and its last row has '1's from column t + 1 to 2t. Because any two columns of the submatrix A₁ are linearly independent, any w ≠ 0 such that A₁w = 0 must have at least three non-zero elements. Let j be the index of the last non-zero element of w. If j ≤ t, consider the jth row of A₁, whose first '1' entry is in the jth column. The inner product of the jth row and w is non-zero, contradicting the assumption that A₁w = 0. Hence j ≥ t + 1 must hold. Then, since the inner product of w with the last row of A₁ is zero, at least two non-zero elements exist in the last t entries of w.

Now suppose the claim holds for A_{k′}, and consider a non-zero vector w such that A_{k′+1}w = 0. Note that the vector formed by the first tk′ + t positions of w, denoted by ŵ, satisfies A_{k′}ŵ = 0. We remark that ŵ ≠ 0.
Indeed, if ŵ = 0, let j denote the index of the first non-zero element of w; then j ≥ tk′ + t + 1. Consider the (j + 1 − t)th row of A_{k′+1}, whose last '1' entry is in column j. The inner product of this row with w is non-zero, which is a contradiction. Since ŵ ≠ 0, by the induction assumption it has at least 2k′ + 1 non-zero elements, with at least two non-zero elements in its last t entries. Now consider the last 2t elements of w and the last t + 1 measurements in A_{k′+1}. By an argument similar to the case of A₁, w must have at least two non-zero elements in its last t positions. So w has at least 2(k′ + 1) + 1 non-zero elements. By induction over k′, every w ≠ 0 satisfying Aw = 0 has at least 2k + 1 non-zero entries. This completes the proof.

Theorem 1 implies that we can save about ⌊n/(k+1)⌋ measurements and still recover k-sparse vectors in a line/ring network via compressed sensing. However, for group testing on a line/ring network, one can check that n measurements are necessary to recover more than one non-zero element. The key observation is that every node should be an endpoint at least twice, where the endpoints are the nodes at the beginning and the end of a measurement (the two endpoints of a measurement can be the same node). If node u is an endpoint at most once, then it is always measured together with one of its neighbors, say v, if it is measured at all. Then, when v is '1', we cannot determine the value of u, which could be either '1' or '0'. Therefore, to recover more than one non-zero element, we need at least 2n endpoints, and thus n measurements.

B. Ring with nodes connecting to four closest neighbors

We know from Section III-A that ⌈n/2⌉ measurements are necessary to recover even one non-zero element associated with a ring network.
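As a concrete reference point, the Section III-A construction just mentioned can be sketched as follows. This is our reading of that construction, for illustration only; the null-space condition from the proof of Theorem 1 (every null vector has at least 2k + 1 non-zeros, equivalently every 2k columns are linearly independent) can be checked exhaustively on a small instance.

```python
import math

def line_measurements(n, k):
    """Sliding-window construction of Section III-A.

    t = ceil(n / (k + 1)); measurement i (1-indexed, i = 1..k*t + 1)
    covers nodes i .. min(i + t - 1, n).  Returns a 0-1 matrix with
    k * ceil(n / (k + 1)) + 1 rows.
    """
    t = math.ceil(n / (k + 1))
    A = [[0] * n for _ in range(k * t + 1)]
    for i in range(k * t + 1):          # 0-indexed row i covers i .. i+t-1
        for j in range(i, min(i + t, n)):
            A[i][j] = 1
    return A
```

For n = 10, k = 2 this yields t = 4 and 9 measurements (instead of 10 individual probes); brute-force checking that every 4 columns are linearly independent confirms that all 2-sparse vectors are identifiable on this instance.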
Now consider a graph in which each node connects directly to its four closest neighbors, as in Fig. 3(a); denote it by G₄. G₄ is important to the study of small-world networks [28]. G₄ has n more links than the ring network, but we will show that the number of measurements required by compressed sensing to recover k-sparse vectors associated with G₄ reduces significantly, from Θ(n) to O(k log(n/k)).

Throughout the paper, given a graph G = (V, E), we say S forms a hub for U if G_S is connected and, for every u in U, there exists s in S such that (u, s) ∈ E.

Clearly, the set of all the odd nodes, denoted by T_o, forms a hub for the set of all the even nodes, denoted by T_e. Given a k-sparse vector x, let x_o and x_e denote the subvectors of x with odd and even indices, respectively. Then x_o and x_e are each at most k-sparse. The sum of the entries of x_o, denoted by s_o, can be obtained by one measurement, and similarly for the sum s_e of the entries of x_e. For any subset W of T_e, T_o ∪ W induces a connected subgraph and thus can be covered by one measurement. We can obtain the sum of the values corresponding to the nodes in W by measuring the nodes in T_o ∪ W and then subtracting s_o from the sum. For example, in Fig. 3(b) and (c), in order to measure the sum of the pink nodes 2, 8 and 10, we measure the sum of the pink nodes and all the black odd nodes, and then subtract s_o from the obtained total. Though the subgraph induced by T_e is not complete, we can indeed freely measure nodes in T_e with the help of the hub T_o. Therefore, M^C_{k,⌊n/2⌋} + 1 measurements are enough to recover x_e ∈ R^{⌊n/2⌋}, where the one additional measurement obtains s_o. Similarly, we can use T_e as a hub to recover the subvector x_o ∈ R^{⌈n/2⌉} with M^C_{k,⌈n/2⌉} + 1 measurements, and thus x is recovered. From the above, we have

Theorem 2.
All k-sparse vectors associated with G₄ can be recovered with M^C_{k,⌊n/2⌋} + M^C_{k,⌈n/2⌉} + 2 measurements, which is O(2k log(n/(2k))) + 2.

[Fig. 3. Sparse recovery on graph G₄: (a) topology of G₄ (12 nodes); (b) odd nodes as a hub; (c) measuring nodes 2, 8 and 10 via the hub; (d) deleting h long links.]

Theorem 2 is important in the following three respects.

Firstly, from the ring network to G₄, although the number of links only increases by n, the number of measurements required to recover k-sparse vectors reduces significantly, from Θ(n) to O(2k log(n/(2k))) + 2. Moreover, this value is of the same order as M^C_{k,n}, while G₄ has only 2n links, compared with n(n−1)/2 links in a complete graph.

Secondly, the idea of using a hub to design the measurements is central to our later results. If a set S can serve as a hub for U in graph G, then the induced subgraph G_U is "almost equivalent" to a complete subgraph, in the sense that we can measure any subset of nodes in U freely via S. The number of measurements required to recover k-sparse vectors associated with U is then M^C_{k,|U|} + 1, with one additional measurement for the hub.

Thirdly, our estimate O(2k log(n/(2k))) + 2 of the minimum number of measurements required to recover k-sparse vectors greatly improves over the existing results in [11], [31], both of which are based on the mixing time of a random walk. The mixing time T(n) is the smallest t′ such that a random walk of length t′ starting at any node in G ends up having a distribution µ′ with ‖µ − µ′‖∞ ≤ 1/(2cn)² for some c ≥ 1, where µ is the stationary distribution over the nodes of a standard random walk on the graph G.
[31] proves that O(kT²(n) log n) measurements can identify k-sparse vectors with overwhelming probability via compressed sensing. [11] uses O(k²T²(n) log(n/k)) measurements to identify k non-zero elements by group testing. In G₄, one can easily see that T(n) must be at least n/4. Hence neither result provides any saving in the number of measurements for G₄, as the mixing time is Θ(n).

Besides the explicit measurement construction described before Theorem 2, we can also recover k-sparse vectors with O(log n) random measurements with high probability. We point out that these random measurements do not depend on the measurements of a complete graph. Consider an n-step Markov chain {X_k, 1 ≤ k ≤ n} with X₁ = 1. For any k ≤ n − 1, if X_k = 0, then X_{k+1} = 1; if X_k = 1, then X_{k+1} is 0 or 1 with equal probability. Clearly, no realization of this Markov chain contains two or more consecutive zeros, and thus every realization is a feasible row of the measurement matrix. Moreover,

Theorem 3. With high probability, all k-sparse vectors associated with G₄ can be recovered with O(g(k) log n) measurements obtained from the above Markov chain, where g(k) is a function of k.

Proof: See Appendix.

Adding n links of the form (i, i + 2 (mod n)) to the ring network greatly reduces the number of measurements needed, from Θ(n) to O(log n). How many links of the form (i, i + 2 (mod n)) must we add to the ring network so that the minimum number of measurements required to recover k-sparse vectors is exactly Θ(log n)? The answer is n − Θ(log n). To see this, let G₄ʰ denote the graph obtained by deleting h links of the form (i, i + 2 (mod n)) from G₄. For example, in Fig. 3(d) we delete the links (3, 5), (8, 10) and (9, 11), shown as red dashed lines, from G₄.
Given h, the following results do not depend on the specific choice of links to remove. We have

Theorem 4. The minimum number of measurements required to recover k-sparse vectors associated with G₄ʰ is lower bounded by ⌈h/2⌉ and upper bounded by 2M^C_{k,⌈n/2⌉} + h + 2.

Proof: Let D denote the set of nodes such that for every i ∈ D, the link (i − 1, i + 1) has been removed from G₄. The proof of the lower bound follows the proof of Theorem 2 in [26]. The key idea is that recovering one non-zero element in D is equivalent to recovering one non-zero element in a ring network with h nodes, and thus ⌈h/2⌉ measurements are necessary.

For the upper bound, we first measure the nodes in D separately, with h measurements. Let S contain the even nodes in D and all the odd nodes. S can be used as a hub to recover the k-sparse subvectors associated with the even nodes that are not in D, and the number of measurements used is at most M^C_{k,⌊n/2⌋} + 1. We similarly recover the k-sparse subvectors associated with the odd nodes that are not in D, using the set of odd nodes in D together with all the even nodes as a hub. The number of measurements is at most M^C_{k,⌈n/2⌉} + 1. Summing these up, the upper bound follows.

Together with (1), Theorem 4 directly implies that if Θ(log n) links of the form (i, i + 2 (mod n)) are deleted from G₄, then Θ(log n) measurements are both necessary and sufficient to recover k-sparse vectors associated with G₄^{Θ(log n)} for any constant k. Moreover, the lower bound in Theorem 4 implies that if the number of links removed is Ω(log n), then the number of measurements required for sparse recovery is also Ω(log n). Thus, we need to add n − Θ(log n) links to a ring network so that the number of measurements required for sparse recovery is exactly Θ(log n).

[Fig. 4. Sparse recovery on the two-dimensional grid: (a) the set of black nodes as a hub; (b) measuring the pink nodes via the hub.]
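The hub mechanism behind Theorems 2 and 4 amounts to a single subtraction: the sum over a set W equals one measurement over hub ∪ W minus the hub total. A minimal sketch (function and variable names are ours; the connectivity requirement (A1) is noted but not enforced here):

```python
def measure_set(x, nodes):
    """One additive measurement (A2) over a node set.
    In the real setting the set must induce a connected subgraph (A1)."""
    return sum(x[i] for i in nodes)

def measure_via_hub(x, hub, W):
    """Obtain the sum over W (Section III-B): measure hub ∪ W in one shot,
    measure the hub once, and subtract the hub total."""
    s_hub = measure_set(x, hub)                  # the one extra measurement
    return measure_set(x, set(hub) | set(W)) - s_hub

# Toy instance mirroring Fig. 3: 12 nodes, odd nodes as the hub.
x = {i: 0.0 for i in range(1, 13)}
x[2], x[8], x[10] = 5.0, 1.0, 2.0        # sparse values on even nodes
x[3] = 7.0                               # a non-zero hub value, to show subtraction
hub = [i for i in range(1, 13) if i % 2 == 1]
s = measure_via_hub(x, hub, [2, 8, 10])  # equals x[2] + x[8] + x[10] = 8.0
```

In the actual constructions the hub total s_o is measured only once and reused across all measurements of subsets W, which is why the hub costs a single extra measurement.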
Since the number of measurements required by compressed sensing is greatly reduced when we add n links to the ring network, one may wonder whether the number of measurements needed to locate k non-zero elements by group testing can also be greatly reduced. Our next result shows that this is not the case.

Proposition 1. ⌊n/4⌋ measurements are necessary to locate two non-zero elements associated with G₄ by group testing.

Proof: Suppose the two non-zero elements are on nodes 2i − 1 and 2i for some 1 ≤ i ≤ ⌊n/2⌋. We view nodes 2i − 1 and 2i as a group for every i (1 ≤ i ≤ ⌊n/2⌋), denoted by B_i. If both nodes in B_j are '1' for some j, then every measurement that passes through one or both nodes of B_j is always '1'. Consider a reduced graph with the B_i's as nodes, where a link (B_i, B_j) (i ≠ j) exists if and only if in G₄ there is a path from a node in B_i to a node in B_j that does not pass through any node outside B_i and B_j. The reduced network is a ring with ⌊n/2⌋ nodes, and thus ⌊n/4⌋ measurements are required to locate one non-zero element in the reduced network. The lower bound follows.

By Theorem 2 and Proposition 1, we observe that in G₄, with compressed sensing the number of measurements needed to recover k-sparse vectors is O(2k log(n/(2k))), while with group testing, Θ(n) measurements are required when k ≥ 2.

C. Two-dimensional grid

Next we consider the two-dimensional grid, denoted by G_2d. G_2d has √n rows and √n columns. From now on we omit '⌈·⌉' and '⌊·⌋' for notational simplicity, but note that the number of nodes is always an integer. We assume √n is even, again for notational simplicity; the result can easily be modified for the case that √n is odd. The idea of the measurement construction is similar to that for graph G₄.
First, let S₁ contain the nodes in the first row and all the nodes in the odd columns. Then S₁ can be used as a hub to measure k-sparse subvectors associated with the nodes in V \ S₁, as shown in Fig. 4. The number of measurements is M^C_{k,(n/2−√n/2)} + 1. Then let S₂ contain the nodes in the first row and all the nodes in the even columns, and use S₂ as a hub to recover the up-to-k-sparse subvectors associated with the nodes in V \ S₂. The number of measurements required is also M^C_{k,(n/2−√n/2)} + 1. Finally, use the nodes in the second row as a hub to recover the sparse subvectors associated with the nodes in the first row. Since the nodes in the second row have already been identified in the above two steps, we do not need to measure the hub separately in this step. The number of measurements here is M^C_{k,√n}. Therefore,

Theorem 5. The number of measurements needed to recover k-sparse vectors associated with G_2d is at most 2M^C_{k,n/2−√n/2} + M^C_{k,√n} + 2.

D. Tree

Next we consider a tree topology, as in Fig. 5. For a given tree, the root is treated as the only node in layer 0. The nodes that are t steps away from the root are in layer t. We say the tree has depth h if the farthest node is h steps away from the root. Let n_i denote the number of nodes in layer i, with n₀ = 1. We construct measurements to recover vectors associated with a tree by the following tree approach.

[Fig. 5. Tree topology: ten nodes arranged in layers 0–3.]

We recover the nodes layer by layer, starting from the root; recovering the nodes in layer i requires that all the nodes above layer i have already been recovered. First, measure the root separately.
When recovering the subvector associated with the nodes in layer i (2 ≤ i ≤ h), we can measure the sum of any subset of nodes in layer i using some nodes in the upper layers as a hub, and then subtract the value of the hub from the obtained sum. One simple way to find a hub is to trace back from the nodes to be measured, up the tree simultaneously, until the traces reach one common node. For example, in Fig. 5, in order to measure nodes 5 and 7 together, we trace back to the root and measure nodes 1, 2, 3, 5 and 7 together, and then subtract the values of nodes 1, 2 and 3, which were already identified when we recovered the nodes in the upper layers. With this approach, we have

Theorem 6. Σ_{i=0}^{h} M^C_{k,n_i} measurements are enough to recover k-sparse vectors associated with a tree of depth h, where n_i is the number of nodes in layer i.

IV. SPARSE RECOVERY OVER GENERAL GRAPHS

In this section we consider recovering k-sparse vectors over general graphs. The graph is assumed to be connected; if not, we simply treat each component as a connected subgraph and design measurements to recover the k-sparse subvectors associated with each subgraph separately.

Inspired by the construction methods in Section III, in Section IV-A we propose a general design guideline based on the "r-partition", which will be introduced shortly. The key idea is to divide the nodes into a small number of groups such that the nodes in each group are connected to one hub, and thus can be measured freely with the help of the hub. We use the Erdős–Rényi random graph as an example to illustrate the design guideline based on the r-partition. Since finding the minimum number of such groups in general turns out to be NP-hard, in Section IV-B we propose a simple algorithm to design a small number of measurements to recover k-sparse vectors associated with any given graph.

A.
Measurement Construction Based on r-partition

In G₄, we divided the nodes into the odd nodes T_o and the even nodes T_e and used each set as a hub for the other. For general graphs, we extend this idea with the following definition:

Definition 1 (r-partition). Given G = (V, E), disjoint subsets N_i (i = 1, ..., r) of V form an r-partition of G if and only if both of the following conditions hold: (1) ∪_{i=1}^{r} N_i = V, and (2) for every i, V \ N_i is a hub for N_i.

Clearly, T_o and T_e form a 2-partition of the graph G₄. With the above definition, we have

Theorem 7. If G has an r-partition N_i (i = 1, ..., r), then the number of measurements needed to recover k-sparse vectors associated with G is at most Σ_{i=1}^{r} M^C_{k,|N_i|} + r, which is O(rk log(n/k)) + r.

Proof: Note that M^C_{k,|N_i|} + 1 measurements (with one additional measurement for V \ N_i) are enough to recover the k-sparse subvector associated with N_i via its hub V \ N_i.

We next apply this result to the Erdős–Rényi random graph G(n, p), which contains n nodes, with a link between each pair of nodes existing independently with probability p. Note that if p ≥ (1 + ε) log n/n for some constant ε > 0, then G(n, p) is connected almost surely [6].

Theorem 8. For the Erdős–Rényi random graph G(n, p) with p = β log n/n, if β ≥ 2 + ε for some constant ε > 0, then any two disjoint subsets N₁ and N₂ of nodes with |N₁| = |N₂| = n/2 form a 2-partition with high probability. Moreover, with high probability the number of measurements needed to recover k-sparse vectors associated with G(n, p) is at most 2M^C_{k,n/2} + 2, which is O(2k log(n/(2k))) + 2.

Proof: Let N₁ be any subset of V with |N₁| = n/2, and let N₂ = V \ N₁. Then G_{N₁} and G_{N₂} are both Erdős–Rényi random graphs with n/2 nodes, and are connected almost surely when p ≥ (2 + ε) log n/n.
We claim that, with high probability, for every u ∈ N₁ there exists v ∈ N₂ such that (u, v) ∈ E. Let P₁ denote the probability that there exists some u ∈ N₁ such that (u, v) ∉ E for every v ∈ N₂. By the union bound,

  P₁ ≤ Σ_{u ∈ N₁} (1 − p)^{n/2}
     = (n/2)(1 − β log n/n)^{n/2}
     = (n/2) ((1 − β log n/n)^{n/(β log n)})^{(β log n)/2}
     ≤ (n/2) e^{−(β log n)/2}
     ≤ n^{−ε/2}/2,

where the last inequality holds because β ≥ 2 + ε. Then P₁ goes to zero as n goes to infinity, and the claim follows. Similarly, one can prove that, with high probability, for every v ∈ N₂ there exists u ∈ N₁ such that (u, v) ∈ E. Then, with high probability, N₁ and N₂ form a 2-partition. The second statement follows from Theorem 7.

[11] considers group testing over Erdős–Rényi random graphs and shows that O(k² log³ n) measurements are enough to identify up to k non-zero entries of an n-dimensional logical vector, provided that p = Θ(k log² n/n). Here, with the compressed sensing setup and the 2-partition result, we can recover k-sparse vectors in R^n with O(2k log(n/(2k))) + 2 measurements when p > (2 + ε) log n/n for some ε > 0. Note that this result also improves over the previous result in [31], which requires O(k log³ n) measurements for compressed sensing on G(n, p).

From Theorem 7, the number of measurements used is closely related to the value of r. In general, one wants to reduce r so as to reduce the number of measurements. Given a graph G and an integer r, the question of whether or not G has an r-partition is called the r-partition problem. In fact,

Proposition 2. For every r ≥ 3, the r-partition problem is NP-complete.

Please refer to the Appendix for its proof. We remark that we cannot prove the hardness of the 2-partition problem, though we conjecture that it is also hard.

B. Measurement Construction Algorithm for General Graphs

Section IV-A proposed the r-partition concept as a measurement design guideline.
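While finding an r-partition is hard, verifying whether a given candidate split satisfies Definition 1 is straightforward. A sketch (names are ours), checked on G₄ with n = 8, where the odd/even split is a 2-partition:

```python
from collections import deque

def is_hub(adj, S, U):
    """Hub condition (Section III-B): the subgraph induced by S is
    connected, and every node of U has at least one neighbor in S."""
    S = set(S)
    if not S:
        return False
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:                      # BFS restricted to S checks G_S connectivity
        u = queue.popleft()
        for v in adj[u]:
            if v in S and v not in seen:
                seen.add(v)
                queue.append(v)
    if seen != S:
        return False
    return all(any(v in S for v in adj[u]) for u in U)

def is_r_partition(adj, parts):
    """Definition 1: disjoint parts covering V, with V \\ N_i a hub for each N_i."""
    V = set(adj)
    if sum(len(p) for p in parts) != len(V) or set().union(*parts) != V:
        return False
    return all(is_hub(adj, V - set(p), p) for p in parts)

# G4 on 8 nodes: node i adjacent to i±1 and i±2 (mod 8).
g4 = {i: {(i + d) % 8 for d in (-2, -1, 1, 2)} for i in range(8)}
odds, evens = {1, 3, 5, 7}, {0, 2, 4, 6}
```

On this instance `is_r_partition(g4, [odds, evens])` is `True`, matching the discussion after Definition 1, while a split such as `[{0}, {1, ..., 7}]` fails because node 4 has no neighbor in the single-node complement.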
But finding an r-partition with the smallest r is NP-hard in general. Now, given a connected graph G, how shall we efficiently design a small number of measurements to recover k-sparse vectors associated with G? One simple way is to find a spanning tree of G and then use the tree approach in Section III-D. The depth of the spanning tree is at least R, where R = min_{u∈V} max_{v∈V} d_{uv} is the radius of G, with d_{uv} the length of the shortest path between u and v. This approach uses only links in the spanning tree, and the number of measurements used is large when the radius R is large. For example, the radius of G4 in Fig. 3 is n/4, so the spanning tree approach uses at least n/4 measurements, one for each layer. However, the number of measurements can be as small as O(2k log(n/(2k))) + 2 if we take advantage of the additional links.

Here we propose a simple algorithm to design the measurements for general graphs. The algorithm combines the ideas of the tree approach and the r-partition. We still divide the nodes into a small number of groups such that each group can be identified via some hub. Here, the nodes in the same group are the leaf nodes of a spanning tree of a gradually reduced graph. (A leaf node has no children on the tree.) Let G* = (V*, E*) denote the input graph. The algorithm is built on the following two subroutines. Leaves(G, u) returns the set of leaf nodes of a spanning tree of G rooted at u. Reduce(G = (V, E), u, H) deletes u from G and fully connects all the neighbors of u. Specifically, for every two neighbors v and w of u, we add a link (v, w), if it does not already exist, and let H(v, w) = H(v, u) ∪ H(u, w) ∪ {u}, where for each link (s, t) ∈ E, H(s, t) denotes the set of nodes, if any, that serve as a hub for s and t in the original graph G*. We record H so that measurements constructed on a reduced graph G remain feasible in G*.

Subroutine 1 Leaves(G, u)
Input: graph G, root u
1: Find a spanning tree T of G rooted at u by breadth-first search, and let S denote the set of leaf nodes of T.
2: Return: S

Subroutine 2 Reduce(G, u, H)
Input: G = (V, E), H_e for each e ∈ E, and node u
1: V = V \ {u}.
2: for every two different neighbors v and w of u do
3:   if (v, w) ∉ E then
4:     E = E ∪ {(v, w)}, H(v, w) = H(v, u) ∪ H(u, w) ∪ {u}.
5:   end if
6: end for
7: Return: G, H

Given graph G*, let u denote the node such that max_{v∈V*} d_{uv} = R, where R is the radius of G*. Pick u as the root and obtain a spanning tree T of G* by breadth-first search. Let S denote the set of leaf nodes in T. With V* \ S as a hub, we can design f(k, |S|) + 1 measurements to recover up to k-sparse vectors associated with S. We then reduce the network by deleting every u in S and fully connecting all the neighbors of u. For the resulting reduced network G, we repeat the above process until all the nodes are deleted. Note that when designing the measurements in a reduced graph G, if a measurement uses link (v, w), then it should also include the nodes in H(v, w) so as to be feasible in the original graph G*. In each step, tree T is rooted at a node u where max_{v∈V} d_{uv} equals the radius of the current graph G. Since all the leaf nodes of T are deleted in the graph reduction procedure, the radius of the newly obtained graph is reduced by at least one. Hence there are at most R iterations in Algorithm 1 until only one node is left.

Algorithm 1 Measurement construction for graph G*
Input: G* = (V*, E*).
1: G = G*, H_e = ∅ for each e ∈ E*.
2: while |V| > 1 do
3:   Find the node u such that max_{v∈V} d_{uv} = R_G, where R_G is the radius of G. S = Leaves(G, u).
4:   Design f(k, |S|) + 1 measurements to recover k-sparse vectors associated with S, using nodes in V \ S as a hub.
5:   for each u in S do
6:     G = Reduce(G, u, H)
7:   end for
8: end while
9: Measure the last node in V directly.
10: Output: All the measurements.

Clearly, we have

Proposition 3. The number of measurements designed by Algorithm 1 is at most R f(k, n) + R + 1, where R is the radius of the graph.

We remark that the number of measurements used by the spanning tree approach mentioned at the beginning of Section IV-B is also no greater than R f(k, n) + R + 1. However, we expect Algorithm 1 to use fewer measurements than the spanning tree approach on general graphs, since Algorithm 1 also considers links that are not in the spanning tree. This is verified by Experiment 1 in Section V.

V. SIMULATION

Experiment 1 (Effectiveness of Algorithm 1): Given a graph G, we consider recovering 1-sparse vectors associated with G. Note that M^C_{1,n} = ⌈log(n+1)⌉, and the corresponding measurement matrix has the binary expansion of i as column i [18]. Algorithm 1 divides the nodes into groups such that each group (except the last one) can be measured freely via some hub. The last group contains only one node and can be measured directly. The total number of measurements by Algorithm 1 is Σ_{i=1}^{q−1} ⌈log(n_i + 1)⌉ + q, where n_i is the number of nodes in group i and q is the total number of groups.

In Fig. 6, we gradually increase the number of links in a graph with n = 1000 nodes. We start with a uniformly generated random tree, and in each step randomly add 25 links that do not already exist. All the results are averaged over one hundred realizations. The number of measurements constructed decreases from 73 to 30 as the number of links increases from n − 1 to 2n − 1.
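The M^C_{1,n} = ⌈log(n+1)⌉ construction from [18] used here can be sketched as follows (a minimal Python version; `binary_measurement_matrix` is our name). Column i is the binary expansion of i, so the measurement outcome of a 1-sparse indicator vector reads off its support index directly:

```python
import math

def binary_measurement_matrix(n):
    """ceil(log2(n+1)) rows; column i (1-indexed) holds the bits of i,
    least significant bit in the first row."""
    m = math.ceil(math.log2(n + 1))
    return [[(i >> row) & 1 for i in range(1, n + 1)] for row in range(m)]

A = binary_measurement_matrix(7)   # 3 x 7 matrix for n = 7 nodes

# Measuring a 1-sparse indicator vector: y = A x is the binary expansion
# of the support index.
x = [0, 0, 0, 0, 1, 0, 0]          # nonzero at position 5 (1-indexed)
y = [sum(a * b for a, b in zip(row, x)) for row in A]
idx = sum(bit << r for r, bit in enumerate(y))
print(idx)   # 5
```

For a real-valued 1-sparse vector the same measurements work: the nonzero rows of y share the common value of the nonzero entry, and their pattern still encodes the index.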
Note that the number of measurements is already within 3M^C_{1,n} when the average node degree is close to 4. The radius of the graph decreases from 13 to 7, and we also plot the upper bound from Proposition 3. One can see that the number of measurements constructed can be much less than this upper bound.

In Fig. 7, we consider a scale-free network generated by the Barabási-Albert (BA) model [3], where the graph initially has m_0 connected nodes and each new node connects to m existing nodes with probability proportional to the degrees of the existing nodes. We start with a random tree of 10 nodes and increase the total number of nodes from 64 to 1024. Every result is averaged over one hundred realizations. One can see that the number of measurements constructed is proportional to log n and decreases as m increases.

Experiment 2 (Sparse Recovery Performance with Noise): Compressed sensing theory indicates that if A is a random 0-1 matrix, then with overwhelming probability we can recover a sparse vector x_0 through ℓ1-minimization [8]. Here we generate a graph with n = 500 nodes from the BA model. Algorithm 1 divides the nodes into four groups with 375, 122, 2, and 1 nodes, respectively. For each of the first two groups with size n_i (i = 1, 2), we generate ⌈n_i/2⌉ random measurements, each measuring a random subset of the group together with its hub. We also measure the two hubs directly. Each of the three nodes in the last two groups is measured directly by one measurement. The generated matrix A is 254 by 500.

Fig. 6. Random graph with n = 1000: number of measurements, radius, and the upper bound from Proposition 3 versus the number of links.
Fig. 7. BA model with increasing n and different m (m = 1, 2, 3): number of measurements versus the number of nodes.
Fig. 8. Recovery performance with hub errors: ‖x_r − x_0‖_2 versus support size k for ℓ1-minimization and our method, with and without noise.

We generate a sparse vector x_0 with i.i.d. zero-mean Gaussian entries on a randomly chosen support and normalize ‖x_0‖_2 to 1. To recover x_0 from y = A x_0, one can run ℓ1-minimization to recover the subvectors associated with the first two groups, and the last three entries of x_0 can be obtained from measurements directly. However, note that every measurement for the first two groups passes through its hub, so any error in a hub measurement will affect every measurement for the group of nodes using that hub. To address this issue, we propose a modified ℓ1-minimization in which the errors in the two hubs are treated as entries of an augmented vector to be recovered. Specifically, let the augmented vector be z = [x_0^T, e_1, e_2]^T and the augmented matrix be A' = [A β γ], where e_1 (resp. e_2) denotes the error in the measurement of the first (resp. second) hub, and the column vector β (resp. γ) has a '1' in the row corresponding to the measurement of the first (resp. second) hub and '0' elsewhere. We then recover z (and thus x_0) from y = A'z via ℓ1-minimization on each group.

Fig. 8 compares the recovery performance of our modified recovery method and traditional ℓ1-minimization, where the hub errors e_1 and e_2 are drawn from a Gaussian distribution with zero mean and unit variance. For every support size k, we randomly generate one hundred k-sparse vectors x_0 and let x_r denote the recovered vector. Even with the hub errors, the average ‖x_r − x_0‖_2 is within 10^{-6} when x_0 is at most 25-sparse with our method, while with ℓ1-minimization the value is at least 0.5.
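The augmentation step z = [x_0^T, e_1, e_2]^T, A' = [A β γ] can be sketched as follows. This is a toy instance with assumed dimensions rather than the paper's 254 × 500 system, and the ℓ1 solver itself is omitted; only the construction of the augmented system is shown:

```python
import numpy as np

# Hub errors contaminate every measurement routed through that hub, so we
# treat them as extra unknowns appended to the signal vector.
rng = np.random.default_rng(0)
m, n = 12, 30                      # assumed toy dimensions
A = rng.integers(0, 2, size=(m, n)).astype(float)

hub1_row, hub2_row = 0, 1          # rows holding the two direct hub measurements
beta = np.zeros((m, 1)); beta[hub1_row] = 1.0
gamma = np.zeros((m, 1)); gamma[hub2_row] = 1.0
A_aug = np.hstack([A, beta, gamma])   # A' = [A  beta  gamma]

x0 = np.zeros(n); x0[7] = 1.0      # a 1-sparse signal
e1, e2 = 0.3, -0.2                 # hub measurement errors
z = np.concatenate([x0, [e1], [e2]])
y = A_aug @ z                      # observed measurements, hub errors included
print(A_aug.shape)                 # (12, 32)
```

An ℓ1 solver (e.g. a linear-programming formulation of min ‖z‖_1 subject to A'z = y) would then be run on the augmented system, so the recovered z separates x_0 from the hub errors.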
We also consider the case where, in addition to the errors in the hub measurements, every other measurement has i.i.d. zero-mean Gaussian noise. Let w denote the noise vector, with ‖w‖_2 normalized to 2. The average ‖x_r − x_0‖_2 is again smaller with our method than with ℓ1-minimization.

VI. CONCLUSION

This paper addresses the sparse recovery problem with graph constraints. By providing explicit measurement constructions for different graphs, we derive upper bounds on the minimum number of measurements needed to recover vectors up to a certain sparsity. It would be interesting to explore corresponding tight lower bounds. Further effort is also needed to empirically evaluate the performance of different recovery schemes, especially when the measurements are noisy.

APPENDIX

A. Proof of Theorem 3

Let A_{m×n} denote the matrix whose rows are m realizations of the n-step Markov chain. To prove the statement, by [8] we only need to show that the probability that every 2k columns of A are linearly independent goes to 1 as n goes to infinity. Let A_I be the submatrix of A with columns in I, where I is an index set with |I| = 2k. Let A_I^{S_j} (1 ≤ j ≤ ⌊m/(2k)⌋) be the submatrix of A_I formed by rows 2k(j−1)+1 to 2kj of A_I. Let P_d^I denote the probability that rank(A_I) < 2k, and let π_d^I denote the probability that rank(A_I^{S_j}) < 2k for a given j. Note that, given I, π_d^I is the same for every A_I^{S_j}. Since rank(A_I) < 2k implies that rank(A_I^{S_j}) < 2k for each such matrix A_I^{S_j},

P_d^I ≤ (π_d^I)^{⌊m/(2k)⌋}.  (2)

To characterize π_d^I, consider the 2k × 2k matrix B with B_{ii} = 0 for i = 2, 3, ..., 2k and B_{ij} = 1 for all other entries. Since rank(B) = 2k,

π_d^I ≤ 1 − P(A_I^{S_j} is a row permutation of B).
(3)

One can check that in this Markov chain, for every 1 ≤ i < k ≤ n, P(X_k = 1 | X_i = 1) ≥ 1/2, P(X_k = 0 | X_i = 1) ≥ 1/4, P(X_k = 1 | X_i = 0) ≥ 1/2, and P(X_k = 1) ≥ 1/2 by simple calculation. Since B has (2k)! distinct row permutations, one can calculate that

P(A_I^{S_j} is a row permutation of B) ≥ (2k)! / 2^{4k²+2k−1}.  (4)

Combining (2), (3), and (4), we have

P(every 2k columns of A are linearly independent)
= 1 − P(rank(A_I) < 2k for some I with |I| = 2k)
≥ 1 − (n choose 2k) P_d^I
≥ 1 − (n choose 2k) exp(−(2k)! (1/2)^{4k²+2k−1} ⌊m/(2k)⌋),  (5)

where the first inequality follows from the union bound. Then if m = g(k) log n = (2k+1) 2^{4k²+2k−1} log n / (2k−1)!, it follows from (5) that the probability that every 2k columns of A are linearly independent is at least 1 − 1/((2k)! n), and the statement follows.

B. Proof of Proposition 2

Since checking whether or not r given sets form an r-partition takes polynomial time, the r-partition problem is in NP. We show that the r-partition problem is NP-complete for r ≥ 3 by proving that the NP-complete r-coloring problem (r ≥ 3) is polynomial-time reducible to the r-partition problem.

Let G = (V, E) and an integer r be an instance of r-coloring. For every (u, v) ∈ E, add a node w and two links (w, u) and (w, v). Let W denote the set of added nodes. Add a link between every pair of nodes in V not already joined by a link. Let H denote the augmented graph, and let V' denote the set of nodes in H. We claim that if there exists an r-partition of H, then we can obtain an r-coloring of G, and vice versa.

Suppose S_i (i = 1, ..., r) form an r-partition of H. Note that for every (u, v) ∈ E, u and v cannot belong to the same set S_i for any i. Suppose, to the contrary, that u and v both belong to S_i for some i, and let w denote the node in W that directly connects only to u and v.
If w ∈ S_i, then both neighbors of w are in the same set as w, contradicting the definition of an r-partition. If w ∉ S_i, then H_{V'\S_i} is disconnected, since w does not connect to any node in V' \ S_i, which also contradicts the definition of an r-partition. Thus, for every (u, v) ∈ E, nodes u and v belong to two sets S_i and S_j with i ≠ j, and we obtain an r-coloring of G.

Conversely, let C_i ⊂ V (i = 1, ..., r) denote an r-coloring of G. We claim that N_i = C_i (i = 1, ..., r−1) and N_r = C_r ∪ W form an r-partition of H. First, note that for every u ∈ V, at least one of its neighbors is not in the same set as u, since H_V is a complete subgraph. For every w ∈ W, w is directly connected to u and v with (u, v) ∈ E. From the definition of r-coloring, u and v are in different sets C_i and C_j for some i ≠ j; therefore, w has at least one neighbor that is not in N_r. Second, we show that H_{V'\N_i} is connected for all i. H_{V'\N_r} is in fact a complete graph, and thus connected. For every i < r, let S_i := V \ C_i; then V' \ N_i = S_i ∪ W. H_{S_i} is a complete subgraph, and thus connected. For every w ∈ W, since its two neighbors cannot both be in C_i, at least one of its neighbors belongs to S_i; thus H_{V'\N_i} = H_{S_i ∪ W} is connected. Hence N_i (i = 1, ..., r) indeed form an r-partition of H.

REFERENCES

[1] L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery," Applied and Computational Harmonic Analysis, vol. 26, no. 2, pp. 283–290, 2009.
[2] P. Babarczi, J. Tapolcai, and P.-H. Ho, "Adjacent link failure localization with monitoring trails in all-optical mesh networks," IEEE/ACM Trans. Netw., vol. 19, no. 3, pp. 907–920, 2011.
[3] A. Barabási and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, no. 5439, pp. 509–512, 1999.
[4] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: a unified approach to sparse signal recovery," 2008.
[5] T. Blumensath, "Compressed sensing with nonlinear observations," Tech. Rep., 2010.
[6] B. Bollobas, Random Graphs, 2nd ed. Cambridge University Press, 2001.
[7] T. Bu, N. Duffield, F. L. Presti, and D. Towsley, "Network tomography on general topologies," in Proc. ACM SIGMETRICS, 2002, pp. 21–30.
[8] E. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[9] ——, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[10] Y. Chen, D. Bindel, H. H. Song, and R. Katz, "Algebra-based scalable overlay network monitoring: Algorithms, evaluation, and applications," IEEE/ACM Trans. Netw., vol. 15, no. 5, pp. 1084–1097, 2007.
[11] M. Cheraghchi, A. Karbasi, S. Mohajer, and V. Saligrama, "Graph-constrained group testing," 2010.
[12] A. Coates, A. Hero III, R. Nowak, and B. Yu, "Internet tomography," IEEE Signal Processing Magazine, vol. 19, no. 3, pp. 47–65, 2002.
[13] M. Coates, Y. Pointurier, and M. Rabbat, "Compressed network monitoring for IP and all-optical networks," in Proc. ACM SIGCOMM IMC, 2007, pp. 241–252.
[14] G. Cormode and S. Muthukrishnan, "Combinatorial algorithms for compressed sensing," ser. Lecture Notes in Computer Science, vol. 4056, 2006, pp. 280–294.
[15] R. DeVore, "Deterministic constructions of compressed sensing matrices," Journal of Complexity, vol. 23, no. 4-6, pp. 918–925, 2007.
[16] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[17] D. Donoho and J. Tanner, "Sparse nonnegative solution of underdetermined linear equations by linear programming," in Proc. Natl. Acad. Sci. U.S.A., vol. 102, no. 27, 2005, pp. 9446–9451.
[18] R. Dorfman, "The detection of defective members of large populations," Ann. Math. Statist., vol. 14, pp. 436–440, 1943.
[19] D.-Z. Du and F. K. Hwang, Combinatorial Group Testing and Its Applications (Applied Mathematics), 2nd ed. World Scientific Publishing Company, 2000.
[20] N. Duffield, "Network tomography of binary network performance characteristics," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5373–5388, 2006.
[21] M. Firooz and S. Roy, "Link delay estimation via expander graphs," arXiv:1106.0941, 2011.
[22] A. Gopalan and S. Ramasubramanian, "On identifying additive link metrics using linearly independent cycles and paths," 2011. [Online]. Available: http://www2.engr.arizona.edu/~srini/papers/tomography.pdf
[23] N. Harvey, M. Patrascu, Y. Wen, S. Yekhanin, and V. Chan, "Non-adaptive fault diagnosis for all-optical networks via combinatorial group testing on graphs," in Proc. IEEE INFOCOM, 2007, pp. 697–705.
[24] J. Haupt, W. Bajwa, M. Rabbat, and R. Nowak, "Compressed sensing for networked data," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 92–101, 2008.
[25] H. X. Nguyen and P. Thiran, "Using end-to-end data to infer lossy links in sensor networks," in Proc. IEEE INFOCOM, 2006, pp. 1–12.
[26] J. Tapolcai, B. Wu, P.-H. Ho, and L. Rónyai, "A novel approach for failure localization in all-optical mesh networks," IEEE/ACM Trans. Netw., vol. 19, pp. 275–285, 2011.
[27] A. Wagner, J. Wright, A. Ganesh, Z. Zhou, H. Mobahi, and Y. Ma, "Towards a practical face recognition system: Robust alignment and illumination by sparse representation," IEEE Trans. Pattern Analysis and Machine Intelligence, no. 99, pp. 1–14, 2011.
[28] D. Watts and S. Strogatz, "Collective dynamics of 'small-world' networks," Nature, vol. 393, pp. 440–442, 1998.
[29] B. Wu, P.-H. Ho, J. Tapolcai, and X. Jiang, "A novel framework of fast and unambiguous link failure localization via monitoring trails," in Proc. IEEE INFOCOM, 2010, pp. 1–5.
[30] W. Xu and B. Hassibi, "Efficient compressive sensing with deterministic guarantees using expander graphs," in Proc. IEEE ITW, 2007, pp. 414–419.
[31] W. Xu, E. Mallada, and A. Tang, "Compressive sensing over graphs," in Proc. IEEE INFOCOM, 2011.
[32] Y. Zhang, M. Roughan, W. Willinger, and L. Qiu, "Spatio-temporal compressive sensing and internet traffic matrices," in Proc. ACM SIGCOMM, 2009, pp. 267–278.
[33] Y. Zhao, Y. Chen, and D. Bindel, "Towards unbiased end-to-end network diagnosis," in Proc. SIGCOMM, 2006, pp. 219–230.