Average-Consensus Algorithms in a Deterministic Framework


Authors: Kevin Topley, Vikram Krishnamurthy

Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, Canada. Email: {kevint, vikramk}@ece.ubc.ca

Index Terms: distributed algorithm, linear update, recurring connectivity, least-squares problem, average-consensus

Abstract: We consider the average-consensus problem in a multi-node network of finite size. Communication between nodes is modeled by a sequence of directed signals with arbitrary communication delays. Four distributed algorithms that achieve average-consensus are proposed. Necessary and sufficient communication conditions are given for each algorithm to achieve average-consensus. Resource costs for each algorithm are derived based on the number of scalar values that are required for communication and storage at each node. Numerical examples are provided to illustrate the empirical convergence rate of the four algorithms in comparison with a well-known "gossip" algorithm as well as a randomized information spreading algorithm, assuming a fully connected random graph with instantaneous communication.

Glossary

• BM, Bench-Mark Algorithm, proposed here in (10)-(14).
• DA, Distributed-Averaging Algorithm, proposed here in (15)-(19).
• OH, One-Hop Algorithm, proposed here in (20)-(24).
• DDA, Discretized Distributed-Averaging Algorithm, proposed here in (27)-(32).
• Gossip, Gossip Algorithm, proposed in [3], also defined in (156)-(158).
• RIS, Randomized Information Spreading, proposed in [19].
• ARIS, Adapted Randomized Information Spreading, defined in (160)-(166).
• SVSC, a "singly V-strongly connected" communication sequence, defined in (34).
• SVCC, a "singly V-completely connected" communication sequence, defined in (36).
• IVSC, an "infinitely V-strongly connected" communication sequence, defined in (35).
• IVCC, an "infinitely V-completely connected" communication sequence, defined in (37).

I. INTRODUCTION

Average-consensus formation implies that a group of distinct nodes come to agree on the average of their initial values; see [3], [7], [10], [19], [21], [29], [6] for related work. Past results indicate that obtaining average-consensus on a set of n initial vectors, each with dimension d, requires at least one of the following assumptions:

• storage and communication of a set with cardinality upper bounded by O(nd) at each node, e.g. the "flooding" method described for example in [4], [16] (where O denotes the Landau big Oh);
• construction of directed acyclic graphs in the network topology [6], [1], [16];
• knowledge at the transmitting node of the receiving node's identity [17], [6], [1], [16];
• instantaneous communication as well as knowledge at the transmitting node of its out-degree [12], [9];
• strictly bi-directional and instantaneous communication [5], [8], [7], [10], [11], [29];
• instantaneous communication and symmetric probabilities of node communication [10], [12];
• an approximate average-consensus based on randomized number generation [19], [6], [1];
• pre-determined bounds on the communication delays as well as the use of averaging weights that are globally balanced and pre-determined off-line [28], [15], [14], [13], [25], [21].

This paper proposes four algorithms that solve the average-consensus problem under weaker communication conditions than those listed above. We denote the four algorithms Bench-Mark (BM), Distributed-Averaging (DA), One-Hop (OH), and Discretized Distributed-Averaging (DDA).

(i) The BM algorithm is based on the "flooding" method described in [4], [16].
We show in Theorems 4.2 and 4.3 that the BM algorithm achieves average-consensus given the weakest communication condition necessary for average-consensus under any distributed algorithm (this is the SVSC condition, defined in Sec. IV). (ii) As the main result, we show in Theorem 4.5 that the DA algorithm (a reduction of the BM algorithm) can achieve average-consensus under a recurring SVSC condition (this is the IVSC condition, defined in Sec. IV). By "recurring" we mean that there is an infinite set of disjoint time intervals such that the SVSC condition occurs on each interval. Previous results based on iterative averaging (e.g. [3], [10], [26], [21], [25]) require special cases of the IVSC condition. (iii) The OH and DDA algorithms can be viewed respectively as simplified versions of the BM and DA algorithms. We will show that analogous results hold under these algorithms with respect to the SVCC and IVCC conditions defined in Sec. IV.

In contrast to earlier work, the main results of this paper show that under general uni-directional connectivity conditions on the communication sequence, each proposed algorithm achieves average-consensus in the presence of

• arbitrary communication delays,
• arbitrary link failures.

Another distinct contrast between each of the proposed algorithms and previously considered consensus algorithms is that

• each node will know exactly when the true average-consensus estimate has been locally obtained, regardless of the communication pattern between nodes.

Of course, our general results come at a price. The main drawback of three of our proposed algorithms (DA, DDA, OH) is that they require local storage and transmission upper bounded by O(n + d), where we recall that n is the network size and d is the dimension of the initial consensus vectors.
Most average-consensus algorithms in the literature require only O(d) costs; however, they also require assumptions such as instantaneous communication, pre-determined averaging weights, or control at the transmitting node of the receiver identity. Our algorithms (DA, DDA, OH) require none of these assumptions and are particularly advantageous for average-consensus involving d ≥ n distinct scalars. In this case the transmission and storage cost of each algorithm is O(d), hence the assumption of relatively weak communication conditions can be leveraged against past algorithms. There are several examples in the literature where d ≥ n. For instance, if each node observes a √n-dimensional process or parameter in noise, then a distributed Kalman filter or maximum-likelihood estimate requires an average-consensus on d = n + √n scalars (see [20], [4]).

The first algorithm we consider (BM) is an obvious solution and is presented here only because (i) it serves as a bench-mark for the other algorithms, (ii) the communication conditions necessary for its convergence will be used in our main results, and (iii) its formal description has many of the same properties as the other three proposed algorithms. The BM algorithm requires local storage and transmission of O(nd) and has an optimal convergence rate (see Theorems 4.2-4.3 in Sec. IV).

A. Relation to Past Work

Since the literature on consensus formation is substantial, we now give an overview of existing results and compare them with the results of this paper. For sufficiently small ε > 0, if x_k = [x_k^1, x_k^2, . . . , x_k^n] ∈ R^n denotes the network state at time k, then [21] proves that each element in the sequence {x_k = (I − εL_k)x_{k−1}, k = 1, 2, . . .} converges asymptotically to Σ_{i=1}^n x_0^i / n if each graph Laplacian L_k in the sequence {L_k ∈ R^{n×n} : k = 1, 2, . . .} is balanced and induces a strongly connected graph. The work [26] generalizes this result by allowing ε to decrease at a sufficiently slow rate, and assuming only that there exists some integer β such that the union of all graph Laplacians over the interval [kβ, (k + 1)β − 1] induces a strongly connected graph for each k ≥ 1. However, each graph Laplacian in [26] is still assumed to be balanced, and, as is typical of many interesting papers on average-consensus, neither [21] nor [26] explains any method by which the nodes can distributively ensure each L_k is balanced while the sequence {L_k : k = 1, 2, . . .} is occurring. Hence the results of these works assume that all averaging weights are globally pre-determined off-line; in other words, every node in the network is assumed to know what averaging weights it should locally use at each iteration to guarantee the resulting L_k is globally balanced.

In contrast, [5] proposes a "Metropolis" algorithm that requires a "two-round" signaling process wherein nodes can distributively compute local averaging weights. It is shown in [5] that each element in the sequence {x_k = (I − εL_k)x_{k−1}, k = 1, 2, . . .} converges asymptotically to Σ_{i=1}^n x_0^i / n under mild connectivity conditions on the sequence {L_k : k = 1, 2, . . .}. The work [9] also proposes a distributed algorithm that does not require pre-determined averaging weights; however, [9] assumes each transmitting node knows the number of nodes that will receive its message, even before the message is transmitted. Similarly, the algorithm in [5] assumes bi-directional communication, and furthermore each of the stated results in [21], [26], [5], [9] assumes that the communication is instantaneous. In contrast, only [21], [9] and [5] assume the communication is noiseless.
The results in this paper do not assume instantaneous or bi-directional communication, nor do they assume that the transmitting node knows what node will receive its message or when. However, our results do require noiseless communication. The issue of noisy communication can be treated as a "meta-problem" that may be super-imposed upon the framework considered here. Similar to our current approach, there has been much research that assumes noiseless communication (for instance [12], [17], [19]). Conversely, there is a growing body of work that considers average-consensus formation in specifically noisy communication settings [11], [26], [14].

Works such as [3], [18], [27], [15], [24], [2], [23], [21], [25] require node communication properties that are special cases of the IVSC condition defined in Sec. IV-A. However, besides the flooding BM method and the DA algorithm proven to converge here, the only other known algorithm that can even be conjectured to almost surely obtain average-consensus under all IVSC sequences is a specific adaptation of the randomized information spreading (RIS) algorithm proposed in [19]. Our adaptation of this algorithm is referred to as ARIS and is detailed in Sec. VII-B of Appendix VII. We note, however, that the lower bound on convergence rate derived in [19] assumes instantaneous bi-directional communication, and furthermore, any version of the RIS algorithm assumes that the initial consensus variables are all positive valued. The Gossip algorithm proposed in [3] as well as the ARIS algorithm are used as points of reference for the four algorithms proposed in this paper. In Sec. V-A we compare the resource costs of all six algorithms, and in Sec. V-B the performance of each algorithm is illustrated by simulation under various randomized communication sequences, assuming a full network graph.
We note that [18], [27], [24], [2], [23] do not guarantee the final consensus value will equal the initial average; [3] proposes an update rule that assumes instantaneous bi-directional communication; and [25], [15], [21] assume pre-determined bounds on the communication delays as well as averaging weights that are globally balanced at each iteration (see Theorem 5, [25]). In contrast, Theorem 4.5 in Sec. IV states that the DA algorithm will guarantee average-consensus under all IVSC sequences, regardless of the communication delays and without requiring any pre-determined balancing of the averaging weights.

We note that a distinct feature of all four proposed algorithms is that they each require the local initial consensus vector to be stored in the respective database of each node until average-consensus is locally obtained at that node. Without this property, each algorithm could still ensure a "consensus formation" under the exact same communication conditions assumed in our main results; however, the final consensus value would not necessarily equal the initial average, which is desirable in most applications [3], [4], [6], [1], [16], [20]. In further contrast to past results, the proofs of convergence used in this work do not rely on matrix theory [5], [21], Lyapunov techniques [18], [28], or stochastic stability [10], [25], [13]; instead, for our main results we obtain a variety of lower bounds on the "error" reduction and show that under (deterministic) recurring connectivity conditions an average-consensus will asymptotically obtain in the L2 norm. As a final note, we clarify that the communication is assumed to be causal, hence a signal cannot be received at node i before it has left node j.
Given this assumption, our framework considers every possible sequence of signals; hence any realization of a (causal) stochastic communication model is a special case of our deterministic framework.

B. Outline

Sec. II formulates the problem statement as well as our assumptions regarding the node communication and algorithm framework. Sec. III defines the four proposed algorithms. Sec. IV states the communication conditions that are necessary and sufficient for each algorithm to obtain average-consensus. Sec. V considers the numerical implementation of the four algorithms together with the two comparison algorithms Gossip and ARIS. Resource costs are given in Sec. V-A, and numerical simulations are presented in Sec. V-B. A summary of the results and suggested future work is provided in Sec. VI. Appendix VII presents the proofs of all theorems, the two comparison algorithms, four conjectures, the resource cost derivations, and an important example.

II. THE AVERAGE-CONSENSUS PROBLEM AND ALGORITHM ASSUMPTIONS

This section formulates the average-consensus problem and lists the assumptions regarding communication between nodes. Sec. II-A below defines the graph-theoretic model for consensus formation that will be subsequently analyzed, and Sec. II-B defines the class of distributed algorithms we consider. Sec. II-C details the remaining assumptions that will be made on the node communication and computational abilities, and also explains the technique by which the four proposed algorithms will obtain average-consensus.

A. Problem Formulation

Let t ≥ 0 denote time (the results of this paper assume a continuous-time framework; any discrete time index is a special case of this framework). At initial time t = 0, consider a finite set of arbitrarily numbered nodes V = {1, . . . , n} and a set of d-dimensional vectors {s_i(0) ∈ R^d : i ∈ V}. The set {s_i(0) ∈ R^d : i ∈ V} is referred to as the set of "initial consensus vectors". Suppose each node can locally store a "knowledge set" K_i(t) that consists of a group of scalars, each with a distinct meaning.

• (A1): (knowledge set assumption) At any time t ≥ 0, each node i ∈ V is equipped with a device that can update and store a "knowledge set" K_i(t). For each i ∈ V, the knowledge set K_i(t) may have a time-varying cardinality.

Next we assume that a set S_ij(t_0^{ij}, t_1^{ij}) can be transmitted from node j at a time denoted t_0^{ij} ≥ 0, and received at node i at a time denoted t_1^{ij}, where due to causality t_1^{ij} ≥ t_0^{ij}. We refer to S_ij(t_0^{ij}, t_1^{ij}) as a "signal", or "signal set".

• (A2): (signal set assumption) At any time t_0^{ij} ≥ 0, each node j ∈ V has the ability to transmit a "signal set" S_ij(t_0^{ij}, t_1^{ij}) ⊆ K_j(t_0^{ij}) that will be received at some node i ∈ V at time t_1^{ij} ≥ t_0^{ij}.

As our final condition, we assume that at t = 0 each node i ∈ V will "know" its unique node identifier value i, the network size n, and the respective initial consensus vector s_i(0). This is formalized as,

• (A3): At time t = 0, the knowledge set K_i(0) of each node i ∈ V satisfies K_i(0) ⊇ {i, n, s_i(0)}.

Definition 2.1: Under (A1)-(A3), the average-consensus problem is solved at some time instant t if and only if (iff) the average of the initial consensus vectors,

    s̄(0) = (1/n) Σ_{i=1}^n s_i(0),    (1)

is contained in the knowledge set K_i(t) of all nodes i ∈ V. We say that a specific node i has obtained average-consensus at time t iff s̄(0) ∈ K_i(t).
The four algorithms analyzed in this paper can be adapted so that the size of the network n and unique node identifier i ∈ V can be computed during the averaging process, and thus they need not be initially known at any node. For simplicity, however, it will be assumed that the network size n and unique node identifier i are initially known at each node i ∈ V.

B. The Distributed Algorithm and Local Consensus Estimates

To provide a unifying theme for the algorithms discussed in this paper, we first define the class of distributed algorithms that will be considered for consensus formation. Given the assumptions (A1)-(A2), we define a "distributed algorithm" in terms of its "knowledge set updating rule" f_K{·} together with a "signal specification rule" f_S{·}. The knowledge set updating rule f_K{·} defines the effect that a set of signals has on the knowledge set K_i(t_1^{ij}) at the receiving node i, whereas the signal specification rule f_S{·} defines the elements contained in the signal S_ij(t_0^{ij}, t_1^{ij}) as a function of the knowledge set K_j(t_0^{ij}) at the transmitting node j. If we assume that upon reception of a set of signals ∆t > 0 time elapses before the knowledge set is fully updated, then the class of distributed algorithms we consider may be defined as follows.

Class of Distributed Algorithms under (A1)-(A2):

Knowledge Set Updating Rule:
    f_K : K_i(t_1^{ij}) ∪ ⋃_{t_0^{ij} ≤ t_1^{ij} : j ∈ V} S_ij(t_0^{ij}, t_1^{ij}) ↦ K_i(t_1^{ij} + ∆t)    (2)

Signal Specification Rule:
    f_S : K_j(t_0^{ij}) ↦ S_ij(t_0^{ij}, t_1^{ij})    (3)

The algorithms in this paper will assume that every knowledge set K_i(t) contains a local "consensus estimate" ŝ_i(t) ∈ R^d that represents the "belief" of node i in regard to the average-consensus s̄(0) defined in (1),

    K_i(t) ⊇ {ŝ_i(t)}, ∀ i ∈ V, ∀ t ≥ 0.
(4)

Notice that using a local consensus estimate ŝ_i(t) is necessary for any algorithm seeking to solve the average-consensus problem; if there is no local consensus estimate then there is no means by which the knowledge set of any node can contain s̄(0), and hence the problem stated in Definition 2.1 is ill-posed. In contrast to any known past consensus algorithm, the four algorithms proposed here will also assume that at any time t ≥ 0 the knowledge set K_i(t) of each node i ∈ V contains a local "normal consensus estimate" v̂_i(t) ∈ R^n,

    K_i(t) ⊇ {v̂_i(t)}, ∀ i ∈ V, ∀ t ≥ 0.    (5)

Combining (A3), (4), and (5) we then have the initial knowledge set for each of the four proposed algorithms,

• (A4): K_i(0) = {i, n, s_i(0), ŝ_i(0), v̂_i(0)}, ∀ i ∈ V.

Define the "error" of the consensus estimate ŝ_i(t) as follows,

    E_{s̄(0)}(ŝ_i(t)) = ‖ŝ_i(t) − s̄(0)‖_2,

where s̄(0) is defined in (1). Denote the "network consensus error" as Σ_{i=1}^n E_{s̄(0)}(ŝ_i(t)). We conclude this section with the following definition of what constitutes the solution to the average-consensus problem.

Definition 2.2: The average-consensus problem is solved at time t iff the network consensus error is zero, that is,

    Σ_{i=1}^n E_{s̄(0)}(ŝ_i(t)) = 0.    (6)

C. Node Communication and Update Assumptions

Below we detail the node communication and update assumptions that will be used by each of the four proposed algorithms. The final update condition we propose will explain the technique by which all four algorithms achieve average-consensus. For any t ≥ 0, define t(+) as the right-hand limit of t, that is t(+) = lim_{T→t+} T.
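As a concrete illustration of (1), (6), and Definition 2.2, the following sketch (our own illustrative code; the function and variable names are hypothetical, not from the paper) computes the average s̄(0) and the network consensus error for a small network:

```python
import numpy as np

def network_consensus_error(s_hat, s0):
    """Sum over nodes of E(s_hat_i) = ||s_hat_i - s_bar(0)||_2, cf. (6)."""
    s_bar = s0.mean(axis=0)            # s_bar(0) = (1/n) sum_i s_i(0), eq. (1)
    return sum(np.linalg.norm(row - s_bar) for row in s_hat)

n, d = 4, 2
s0 = np.arange(n * d, dtype=float).reshape(n, d)   # initial consensus vectors s_i(0)

# Under the initialization (14)/(19)/(24)/(32), s_hat_i(0) = (1/n) s_i(0),
# so the network consensus error is generally nonzero at t = 0...
print(network_consensus_error(s0 / n, s0))

# ...and Definition 2.2 is satisfied exactly when every estimate equals s_bar(0).
s_bar = s0.mean(axis=0)
print(network_consensus_error(np.tile(s_bar, (n, 1)), s0))   # 0.0
```

The error is summed in the L2 norm per node, matching the convergence notion used in the main results.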
To construct a suitable average-consensus algorithm we assume the following conditions on the node communication and knowledge set updates:

• (A5): at no time does any node j ∈ V have the ability to know when the signal S_ij(t_0^{ij}, t_1^{ij}) is transmitted, when the signal S_ij(t_0^{ij}, t_1^{ij}) is received, or what node i ∈ V will receive the signal S_ij(t_0^{ij}, t_1^{ij}).
• (A6): at no time does any node i ∈ V have the ability to control when the signal S_ij(t_0^{ij}, t_1^{ij}) is received, what node j ∈ V transmitted the signal S_ij(t_0^{ij}, t_1^{ij}), or when the signal S_ij(t_0^{ij}, t_1^{ij}) was transmitted.
• (A7): each knowledge set satisfies K_i(t(+)) = K_i(t) at any time t ≥ 0 that node i does not receive a signal (recall that t(+) denotes the right-hand limit of t).
• (A8): when a signal S_ij(t_0^{ij}, t_1^{ij}) is received, the knowledge set K_i(t_1^{ij}) of the receiving node is updated in an arbitrarily small amount of time.
• (A9): at most one signal can be received and processed by a given node at any given time instant.

Note that (A5)-(A6) imply the communication process is a priori unknown at every node. The assumption (A7) implies that the node knowledge set K_i(t) can only change if a signal is received at node i, and (A8) implies that any signal S_ij(t_0^{ij}, t_1^{ij}) transmitted from node j is allowed to contain information that has been updated by a signal received at node j at any time preceding t_0^{ij}. Notice that (A8) is realistic since all four of the proposed algorithms will require only a few arithmetic operations in the update process. Assumption (A9) is a technical requirement that simplifies the proofs of convergence. Together (A8) and (A9) imply the knowledge set updating rule f_K{·} defined in (2) reduces to,

    f_K : K_i(t_1^{ij}) ∪ S_ij(t_0^{ij}, t_1^{ij}) ↦ K_i(t_1^{ij}(+)).
(7)

Each algorithm we propose can be easily adapted if (A8)-(A9) were relaxed; however, this would be at the expense of simplicity in both our framework and analysis. As our final update condition, we require that the local consensus estimate ŝ_i(t_1^{ij}) at the receiving node i, as defined above (4), is updated based on the updated normal consensus estimate v̂_i(t_1^{ij}(+)) via the relation,

    ŝ_i(t_1^{ij}(+)) = S v̂_i(t_1^{ij}(+)),    (8)

where S = [s_1(0), s_2(0), . . . , s_n(0)] ∈ R^{d×n}. Under (8) it is clear that ŝ_i(t_1^{ij}(+)) = s̄(0) if v̂_i(t_1^{ij}(+)) = (1/n)1_n, where 1_n ∈ R^n denotes a vector that consists only of unit-valued elements. Thus, in terms of Definition 2.2, under (8) and (5) the average-consensus problem is solved at time t if v̂_i(t) = (1/n)1_n for all nodes i ∈ V. Motivated by this fact, we propose for each of the four algorithms that the normal consensus estimate v̂_i(t) is updated based on the following optimization problem,

    v̂_i(t_1^{ij}(+)) = argmin_ṽ ‖ṽ − (1/n)1_n‖_2, s.t. (8) holds, given K_i(t_1^{ij}) ∪ S_ij(t_0^{ij}, t_1^{ij}).    (9)

Note that from (4) and (5) each node i ∈ V will know it has obtained the true average-consensus value ŝ_i(t) = s̄(0) when the local normal consensus estimate v̂_i(t) satisfies the condition v̂_i(t) = (1/n)1_n. In the next section we will define the knowledge set updating rule f_K{·} and signal specification rule f_S{·} for each of the four algorithms. We shall find that for each algorithm the update problem (9) reduces to a least-squares optimization and a closed-form expression can be obtained for the updated normal consensus estimate v̂_i(t_1^{ij}(+)).

III. DISTRIBUTED AVERAGE-CONSENSUS ALGORITHMS

With the above definitions, we are now ready to describe the four distributed algorithms that achieve average-consensus.
The details of the two comparison algorithms can be found in Sec. VII-B of Appendix VII. All six algorithms (the four presented below and the two comparison algorithms in Sec. VII-B) are defined using the abstract definition of the "distributed algorithm" given in Sec. II, that is (3), (7). This section sets the stage for the convergence theorems presented in Sec. IV.

A. Algorithm 1: Bench-Mark (BM)

The BM algorithm obtains average-consensus trivially; we propose it formally because the remaining three algorithms are specific reductions of it, and also because it requires communication conditions that will be used in our main result. The BM algorithm implies that the initial consensus vectors {s_i(0) : i ∈ V} are essentially flooded throughout the network. In [4], [16] the general methodology of the BM algorithm is discussed, wherein it is referred to as "flooding". For completeness of our results, we show in Theorems 4.2, 4.3 that regardless of the communication pattern between nodes, there exists no distributed algorithm (2), (3) that can obtain average-consensus before the BM algorithm. This is why we have named it the "bench-mark" algorithm. Let δ[·] denote the Kronecker delta function applied element-wise, and e_i denote the i-th standard unit vector in R^n. The BM signal specification (3) and knowledge set update (7) are respectively defined as (10) and (11) below.

Algorithm 1: Bench-Mark (BM)

Signal Specification:
    S_ij(t_0^{ij}, t_1^{ij}) = K_j(t_0^{ij}) \ {j, n, ŝ_j(t_0^{ij})}   if v̂_j(t_0^{ij}) ≠ (1/n)1_n
    S_ij(t_0^{ij}, t_1^{ij}) = {v̂_j(t_0^{ij}), ŝ_j(t_0^{ij})}   if v̂_j(t_0^{ij}) = (1/n)1_n    (10)

Knowledge Set Update:
    v^{ij}(t_1^{ij}, t_0^{ij}) ≡ 1_n − δ[v̂_i(t_1^{ij}) + v̂_j(t_0^{ij})]
    K_i(t_1^{ij}(+)) = {i, n, v̂_i(t_1^{ij}(+)), ŝ_i(t_1^{ij}(+)), s_ℓ(0) ∀ ℓ s.t. v^{ij}_ℓ(t_1^{ij}, t_0^{ij}) = 1}   if v̂_i(t_1^{ij}(+)) ≠ (1/n)1_n
    K_i(t_1^{ij}(+)) = {v̂_i(t_1^{ij}(+)), ŝ_i(t_1^{ij}(+))}   if v̂_i(t_1^{ij}(+)) = (1/n)1_n    (11)

Normal Consensus Estimate Update:
    v̂_i(t_1^{ij}(+)) = (1/n) v^{ij}(t_1^{ij}, t_0^{ij})    (12)

Consensus Estimate Update:
    ŝ_i(t_1^{ij}(+)) = Σ_{ℓ=1}^n v̂_{iℓ}(t_1^{ij}(+)) s_ℓ(0)   if v̂_i(t_1^{ij}(+)) ≠ (1/n)1_n
    ŝ_i(t_1^{ij}(+)) = s̄(0)   if v̂_i(t_1^{ij}(+)) = (1/n)1_n    (13)

Estimate Initialization:
    v̂_i(0) = (1/n)e_i,  ŝ_i(0) = (1/n)s_i(0).    (14)

In Lemma 7.3 and Lemma 7.2 of Appendix VII we will prove that (12) is the unique solution to (9) under (10) and (11), and that (14) is the unique solution to (9) under the initial knowledge set (A4). The update (13) follows immediately from (12) and the relation (8). Notice that the BM algorithm updates (11), (12) together with the signal specification (10) imply s̄(0) ∈ K_i(t_1^{ij}(+)) iff v̂_i(t_1^{ij}(+)) = (1/n)1_n, and likewise s_ℓ(0) ∈ K_i(t_1^{ij}(+)) iff v̂_{iℓ}(t_1^{ij}(+)) = 1/n and v̂_i(t_1^{ij}(+)) ≠ (1/n)1_n. It thus follows that (10), (11) and (12) imply that the consensus estimate update ŝ_i(t_1^{ij}(+)) defined in (13) can be locally computed at node i based only on K_i(t_1^{ij}) and the received signal S_ij(t_0^{ij}, t_1^{ij}).

Besides flooding the initial consensus vectors, the BM algorithm (10)-(14) has an additional feature that is not necessary but is rather practical: if v̂_j(t) = (1/n)1_n then all n of the initial consensus vectors are stored in K_j(t); when this occurs the consensus estimate ŝ_j(t) defined in (13) will equal s̄(0), and thus node j has obtained average-consensus. In this case all of the initial consensus vectors are deleted from K_j(t), and any signal transmitted from j contains only the consensus estimate ŝ_j(t) = s̄(0) and the vector v̂_j(t) = (1/n)1_n.
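Since every normal consensus estimate has entries in {0, 1/n}, the BM update (12) amounts to taking the union of the entries known at nodes i and j. A minimal sketch of (12) and (14) (our own illustrative code, not from the paper):

```python
import numpy as np

def bm_update(v_i, v_j, n):
    """BM normal consensus estimate update (12):
    v_ij = 1_n - delta[v_i + v_j] marks the entries known at i or j,
    and the new estimate is (1/n) * v_ij."""
    v_ij = (v_i + v_j != 0).astype(float)   # 1 where either entry is nonzero
    return v_ij / n

n = 3
# Initialization (14): v_i(0) = (1/n) e_i.
v = [np.eye(n)[i] / n for i in range(n)]

v[0] = bm_update(v[0], v[1], n)   # node 0 receives a signal from node 1
v[0] = bm_update(v[0], v[2], n)   # node 0 receives a signal from node 2

# Node 0 now holds v_0 = (1/n) 1_n, i.e. it detects average-consensus locally.
print(np.allclose(v[0], np.ones(n) / n))   # True
```

This local test against (1/n)1_n is exactly how a BM node knows when the true average has been obtained.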
Upon reception of a signal containing v̂_j(t) = (1/n)1_n, the receiving node i can delete all of its locally stored initial consensus vectors and set ŝ_i(t) = ŝ_j(t) = s̄(0). In this way the average-consensus value s̄(0) can be propagated throughout the network without requiring all n initial consensus vectors to be contained in every signal. See Conjecture 7.4 in Sec. VII-A for a conjecture regarding the BM algorithm.

B. Algorithm 2: Distributed Averaging (DA)

The DA algorithm that we now introduce is, to the best of our knowledge, new. To define the DA update procedure, let V^+ denote the pseudo-inverse of an arbitrary matrix V. The DA signal specification (3) and knowledge set update (7) are defined as (15) and (16) below.

Algorithm 2: Distributed Averaging (DA)

Signal Specification:
    S_ij(t_0^{ij}, t_1^{ij}) = K_j(t_0^{ij}) \ {j, n, s_j(0)}    (15)

Knowledge Set Update:
    K_i(t_1^{ij}(+)) = {i, n, v̂_i(t_1^{ij}(+)), ŝ_i(t_1^{ij}(+)), s_i(0)}    (16)

Normal Consensus Estimate Update:
    v̂_i(t_1^{ij}(+)) = V_{(DA)} V_{(DA)}^+ (1/n)1_n,   V_{(DA)} = [v̂_i(t_1^{ij}), v̂_j(t_0^{ij}), (1/n)e_i]    (17)

Consensus Estimate Update:
    ŝ_i(t_1^{ij}(+)) = V_s V_{(DA)}^+ (1/n)1_n,   V_s = [ŝ_i(t_1^{ij}), ŝ_j(t_0^{ij}), (1/n)s_i(0)]    (18)

Estimate Initialization:
    v̂_i(0) = (1/n)e_i,  ŝ_i(0) = (1/n)s_i(0).    (19)

Lemma 7.2 and Lemma 7.5 in Appendix VII, respectively, prove that (19) is the unique solution of (9) under (A4), and that (17) is the unique solution to (9) under (15) and (16). Based on the normal consensus update (17), the relation (8) reduces to (18). From (18) it is clear that the DA consensus estimate update ŝ_i(t_1^{ij}(+)) can be locally computed at node i based only on K_i(t_1^{ij}) and the received signal S_ij(t_0^{ij}, t_1^{ij}).
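The update (17) is the orthogonal projection of (1/n)1_n onto the column span of V_(DA), which the Moore-Penrose pseudo-inverse computes directly. A minimal sketch of (17) and (19) (our own illustrative code, not from the paper):

```python
import numpy as np

def da_update(v_i, v_j, i, n):
    """DA normal consensus estimate update (17):
    project (1/n) 1_n onto span{v_i, v_j, (1/n) e_i}."""
    V = np.column_stack([v_i, v_j, np.eye(n)[i] / n])   # V_(DA), an n x 3 matrix
    target = np.ones(n) / n
    return V @ np.linalg.pinv(V) @ target

n = 3
# Initialization (19): v_i(0) = (1/n) e_i.
v = [np.eye(n)[k] / n for k in range(n)]

v[0] = da_update(v[0], v[1], 0, n)   # node 0 receives from node 1
v[0] = da_update(v[0], v[2], 0, n)   # node 0 receives from node 2

print(np.allclose(v[0], np.ones(n) / n))   # True: node 0 detects consensus
```

The companion update (18) applies the same coefficient vector V_(DA)^+ (1/n)1_n to [ŝ_i, ŝ_j, (1/n)s_i(0)], so no initial consensus vectors other than the node's own are ever needed.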
Notice that V_{(DA)} is an n × 3 matrix, thus the pseudo-inverse in (17) has an immediate closed-form expression; see (70), (72) and (73) in Lemma 7.9. Also note that under the DA algorithm every signal contains only the local consensus estimate ŝ_j(t) together with the local normal consensus estimate v̂_j(t).

C. Algorithm 3: One-Hop (OH)

Under the OH algorithm each signal S_ij(t_0^{ij}, t_1^{ij}) will either contain the local initial consensus vector s_j(0) and the transmitting node identity j, or the average-consensus value s̄(0) and a scalar 0 to indicate that the transmitted vector is the true average-consensus value. For this reason the conditions for average-consensus under the OH algorithm are relatively straightforward to derive (see Theorem 4.7 in Sec. IV and the proof in Appendix VII). The OH algorithm signal specification and knowledge set update are respectively defined by (20) and (21) below; here S_{ij,1}(t_0^{ij}, t_1^{ij}) denotes the first element of the signal.

Algorithm 3: One-Hop (OH)

Signal Specification:
    S_ij(t_0^{ij}, t_1^{ij}) = {j, s_j(0)}   if v̂_j(t_0^{ij}) ≠ (1/n)1_n
    S_ij(t_0^{ij}, t_1^{ij}) = {0, ŝ_j(t_0^{ij})}   if v̂_j(t_0^{ij}) = (1/n)1_n    (20)

Knowledge Set Update:
    K_i(t_1^{ij}(+)) = {i, n, v̂_i(t_1^{ij}(+)), ŝ_i(t_1^{ij}(+)), s_i(0)}   if v̂_i(t_1^{ij}(+)) ≠ (1/n)1_n
    K_i(t_1^{ij}(+)) = {v̂_i(t_1^{ij}(+)), ŝ_i(t_1^{ij}(+))}   if v̂_i(t_1^{ij}(+)) = (1/n)1_n    (21)

Normal Consensus Estimate Update:
    v^{ij}(t_1^{ij}, t_0^{ij}) ≡ 1_n − δ[v̂_i(t_1^{ij}) + e_j]
    v̂_i(t_1^{ij}(+)) = (1/n) v^{ij}(t_1^{ij}, t_0^{ij})   if S_{ij,1}(t_0^{ij}, t_1^{ij}) ≠ 0
    v̂_i(t_1^{ij}(+)) = (1/n)1_n   if S_{ij,1}(t_0^{ij}, t_1^{ij}) = 0    (22)

Consensus Estimate Update:
    ŝ_i(t_1^{ij}(+)) = ŝ_i(t_1^{ij}) + (1/n − v̂_{ij}(t_1^{ij})) s_j(0)   if S_{ij,1}(t_0^{ij}, t_1^{ij}) ≠ 0
    ŝ_i(t_1^{ij}(+)) = s̄(0)   if S_{ij,1}(t_0^{ij}, t_1^{ij}) = 0    (23)

Estimate Initialization:
    v̂_i(0) = (1/n)e_i,  ŝ_i(0) = (1/n)s_i(0).
(24)

Lemma 7.2 and Lemma 7.20 in Appendix VII respectively prove that (24) is the unique solution to (9) under (A4), and that (22) is the unique solution to (9) under (20) and (21). Notice that (22) implies the relation (8) reduces to (23). Given (20), (21) and (22), it follows that the OH consensus estimate update $\hat s_i(t^{ij}_1(+))$ defined in (23) can be locally computed at node $i$ based only on $K_i(t^{ij}_1)$ and the received signal $S_{ij}(t^{ij}_0, t^{ij}_1)$.

D. Algorithm 4: Discretized Distributed-Averaging (DDA)

For the DDA algorithm let the discrete set of vectors $R^n_{0,1/n}$ be defined,
$$R^n_{0,1/n} = \{v \in \mathbb R^n : v_\ell \in \{0, 1/n\},\ \forall \ell = 1, 2, \ldots, n\}. \quad (25)$$
The discretized version of (9) that we consider under the DDA algorithm is,
$$\hat v_i(t^{ij}_1(+)) = \arg\min_{\tilde v} \left\|\tilde v - \tfrac{1}{n}\mathbf 1_n\right\|^2, \ \text{s.t. (8) holds and } \tilde v \in R^n_{0,1/n}, \ \text{given } K_i(t^{ij}_1) \cup S_{ij}(t^{ij}_0, t^{ij}_1). \quad (26)$$
To define the DDA normal consensus update it is convenient to denote by $v^{-i} \in \mathbb R^{n-1}$ the vector $v \in \mathbb R^n$ with element $v_i$ deleted.
The DDA signal specification and knowledge set update are defined below.

Algorithm 4: Discretized Distributed-Averaging (DDA)

Signal Specification:
$$S_{ij}(t^{ij}_0, t^{ij}_1) = K_j(t^{ij}_0) \setminus \{j, n, s_j(0)\} \quad (27)$$

Knowledge Set Update:
$$K_i(t^{ij}_1(+)) = \{i, n, \hat v_i(t^{ij}_1(+)), \hat s_i(t^{ij}_1(+)), s_i(0)\} \quad (28)$$

Normal Consensus Estimate Update:
$$\hat v_i(t^{ij}_1(+)) = \hat a\, \hat v_i(t^{ij}_1) + \hat b\, \hat v_j(t^{ij}_0) + \hat c\, e_i \quad (29)$$
$$(\hat a, \hat b, \hat c) = \begin{cases} \left(1,\ 1,\ -\hat v_{ji}(t^{ij}_0)\right) & \text{if } \hat v^{-i}_i(t^{ij}_1)' \hat v^{-i}_j(t^{ij}_0) = 0 \\[2pt] \left(0,\ 1,\ \frac{1}{n} - \hat v_{ji}(t^{ij}_0)\right) & \text{if } \hat v^{-i}_i(t^{ij}_1)' \hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2 \\[2pt] \left(1,\ 0,\ 0\right) & \text{if } \hat v^{-i}_i(t^{ij}_1)' \hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 \ge \|\hat v^{-i}_j(t^{ij}_0)\|^2 \end{cases} \quad (30)$$

Consensus Estimate Update:
$$\hat s_i(t^{ij}_1(+)) = \hat a\, \hat s_i(t^{ij}_1) + \hat b\, \hat s_j(t^{ij}_0) + \hat c\, s_i(0) \quad (31)$$

Estimate Initialization:
$$\hat v_i(0) = \tfrac{1}{n} e_i, \quad \hat s_i(0) = \tfrac{1}{n} s_i(0). \quad (32)$$

Notice that the DDA signal specification (27) and knowledge set update (28) are identical to those of the DA algorithm. Also notice that under (29) the relation (8) implies (31). Lemma 7.2 and Lemma 7.27 in Appendix VII respectively prove that (32) is the unique solution to (26) under (A4), and that (29)-(30) is a global solution to (26) under (27) and (28). From (27) and (28) it is clear that the DDA consensus estimate update $\hat s_i(t^{ij}_1(+))$ defined in (31) can be locally computed at node $i$ based only on $K_i(t^{ij}_1)$ and the received signal $S_{ij}(t^{ij}_0, t^{ij}_1)$.

Summary: We have defined four algorithms that in Sec. IV will be shown to achieve average-consensus. The algorithms were derived as special cases of the distributed algorithm (3), (7), where (7) is a special case of (2).
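Each DDA update (29)-(31) reduces to an inner product, two squared norms, and one of three coefficient choices. A minimal sketch of our own (assuming NumPy; helper names such as `dda_update` are hypothetical):

```python
import numpy as np

def dda_coeffs(v_i, v_j, i):
    """Coefficient selection (30); comparisons use the estimates with
    element i deleted, as in the definition of v^{-i}."""
    u = np.delete(v_i, i)
    w = np.delete(v_j, i)
    if u @ w == 0:                        # disjoint known sets: merge them
        return 1.0, 1.0, -v_j[i]
    if u @ u < w @ w:                     # node j knows strictly more: adopt
        return 0.0, 1.0, 1.0 / len(v_i) - v_j[i]
    return 1.0, 0.0, 0.0                  # otherwise keep current estimate

def dda_update(v_i, v_j, s_i, s_j, s_i0, i):
    """Apply (29) and (31) with the coefficients selected by (30)."""
    a, b, c = dda_coeffs(v_i, v_j, i)
    e_i = np.zeros(len(v_i))
    e_i[i] = 1.0
    return a * v_i + b * v_j + c * e_i, a * s_i + b * s_j + c * s_i0

# Node 0 receives from node 1 in a 3-node network (known sets disjoint):
v0, v1 = np.eye(3)[0] / 3, np.eye(3)[1] / 3
v_new, s_new = dda_update(v0, v1, np.array([1.0]), np.array([2.0]),
                          np.array([3.0]), 0)
```

In this disjoint case the first branch of (30) fires, merging the two estimates without double-counting element $i$.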
Computationally, each algorithm requires only a few elementary arithmetic operations. In Sec. V-A we discuss the storage and communication costs of these four algorithms.

IV. MAIN RESULTS: ANALYSIS OF THE AVERAGE-CONSENSUS ALGORITHMS

This section proves that Algorithms 1-4 described in Sec. III achieve average-consensus under different communication conditions; we state this below in five theorems. Under the assumptions (A1)-(A9) listed in Sec. II-A and II-B, these theorems provide necessary and sufficient conditions on the communication between nodes for average-consensus formation under Algorithms 1-4 (recall that Definition 2.2 defines the solution to the average-consensus problem). The main implication of these results is that, with or without flooding, an average-consensus can be achieved in the presence of arbitrary communication delays and link failures, provided only that a uni-directional connectivity condition holds among the nodes.

Each theorem below will assume a certain condition on the communication among nodes. To specify these conditions we require the following two definitions. For any $t_1 \ge t_0 \ge 0$, an arbitrary "communication sequence" $C_{[t_0,t_1]}$ is defined as the set of all signals transmitted after time $t_0$ and received before time $t_1$, that is,
$$C_{[t_0,t_1]} = \{S_{i_1 j_1}, S_{i_2 j_2}, S_{i_3 j_3}, \ldots\}$$
where we have omitted the time indices, but it is understood that the transmission time $t^{i_\ell j_\ell}_0$ and reception time $t^{i_\ell j_\ell}_1$ of each signal $S_{i_\ell j_\ell}$ belong to the interval $[t_0, t_1]$. Recall from (A2) that a signal $S_{ij}(t^{ij}_0, t^{ij}_1)$ implies that a well-defined subset of $K_j(t^{ij}_0)$ leaves node $j$ at time $t^{ij}_0$ and is received at node $i$ at time $t^{ij}_1$ (the specific subset depends on the algorithm considered). Next we define the notion of a "communication path".
Intuitively, a communication path from node $j$ to node $i$ means that node $j$ transmits a signal received by some node $\ell_1$, then node $\ell_1$ sends a signal received by some node $\ell_2$, then node $\ell_2$ sends a signal received by some node $\ell_3$, and so on, until node $i$ receives a signal from node $\ell_{k(ij)}$. Technically, we say that $C_{[t_0,t_1]}$ contains a "communication path" from node $j$ to node $i$ iff $C_{[t_0,t_1]}$ contains a sub-sequence $C^{ij}_{[t_0(ij),\, t_1(ij)]}$ with the following connectivity property,
$$C^{ij}_{[t_0(ij),\, t_1(ij)]} \supseteq \{S_{\ell_1 j}, S_{\ell_2 \ell_1}, S_{\ell_3 \ell_2}, \ldots, S_{\ell_{k(ij)} \ell_{k(ij)-1}}, S_{i \ell_{k(ij)}}\} \quad (33)$$
where again we have omitted the time indices, but it is understood that the transmission time $t^{\ell_{q+1} \ell_q}_0$ of each signal $S_{\ell_{q+1} \ell_q}$ occurs after the reception time $t^{\ell_q \ell_{q-1}}_1$ of the preceding signal $S_{\ell_q \ell_{q-1}}$. Note that the communication path $C^{ij}_{[t_0(ij),\, t_1(ij)]}$ has cardinality $|C^{ij}_{[t_0(ij),\, t_1(ij)]}| \ge k(ij) + 1$.

A. Necessary and Sufficient Conditions for Average-Consensus

We are now ready to present the main convergence theorems for Algorithms 1-4.

1) Algorithm 1 (BM): To prove convergence of the BM algorithm, consider the following communication condition. Let us denote $V_{-i} = V \setminus \{i\}$ for an arbitrary node $i \in V$.

Definition 4.1: (SVSC) A communication sequence $C_{[0,t_1]}$ is "singly $V$-strongly connected" (SVSC) iff there exists a communication path from each node $i \in V$ to every node $j \in V_{-i}$.

We will let "$C_{[t_0,t_1]} \in$ SVSC" denote that a sequence $C_{[t_0,t_1]}$ is SVSC. The definition (33) implies that $C_{[t_0,t_1]} \in$ SVSC iff,
$$C^{ij}_{[t_0(ij),\, t_1(ij)]} \subset C_{[0,t_1]}, \quad \forall j \in V_{-i},\ \forall i \in V. \quad (34)$$
The following theorem establishes the sufficient communication conditions for the BM algorithm (10)-(14).
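A communication path in the sense of (33) can be checked mechanically. The sketch below is our own illustration, not code from the paper: each signal is a `(sender, receiver, t0, t1)` tuple, and we propagate the earliest time at which information originating at node $j$ can be present at each node; the SVSC condition (34) is then an all-pairs check.

```python
def has_comm_path(signals, j, i):
    """True iff the sequence contains a communication path from j to i in
    the sense of (33): each transmission in the chain occurs after the
    reception of the preceding signal."""
    earliest = {j: float("-inf")}        # earliest arrival of j's information
    changed = True
    while changed:                       # fixpoint over the signal set
        changed = False
        for s, r, t0, t1 in signals:
            if (s in earliest and t0 > earliest[s]
                    and t1 < earliest.get(r, float("inf"))):
                earliest[r] = t1
                changed = True
    return i in earliest

def is_svsc(signals, nodes):
    """SVSC condition (34): a path from every node to every other node."""
    return all(has_comm_path(signals, j, i)
               for j in nodes for i in nodes if i != j)
```

For example, a repeated directed ring on three nodes is SVSC once enough time-respecting hops have occurred, while a chain whose first hop is received after the second hop was sent contains no path.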
Theorem 4.2: (BM Sufficient Conditions) Consider Algorithm 1, namely the BM algorithm (10)-(14). Then the average-consensus (6) is achieved at time $t = t_1(+)$ for any communication sequence $C_{[0,t_1]}$ satisfying the SVSC condition (34).

Proof: See Appendix VII.

The BM algorithm implies a "flooding" of the initial consensus vectors $\{s_i(0) : i \in V\}$ throughout the network. There is no other algorithm in the literature that does not use a protocol "equivalent" to the flooding technique and still guarantees average-consensus for every communication sequence $C_{[0,t_1]}$ satisfying (34); we state this formally as Conjecture 7.4 in Sec. VII-A. Also related to this is Conjecture 7.38 in Sec. VII-B of Appendix VII. When combined with Theorem 4.2, the following theorem implies that the communication condition sufficient for average-consensus under the BM algorithm is exactly the communication condition necessary for average-consensus under any algorithm. This is why the BM algorithm can be said to possess optimal convergence properties.

Theorem 4.3: (BM, DA, OH and DDA Necessary Conditions) If a communication sequence $C_{[0,t_1]}$ does not satisfy the SVSC condition (34), then no distributed algorithm (2), (3) can achieve average-consensus (6) at time $t = t_1(+)$.

Proof: See Appendix VII.

Although Theorem 4.3 is somewhat obvious, it is also a valid necessary communication condition and will be referred to throughout the paper. Note that Theorem 4.7 below states that the OH algorithm requires even stronger necessary conditions than the SVSC condition (34) implied by Theorem 4.3. However, the DA and DDA algorithms do not; there exist many SVSC sequences under which the DA and DDA algorithms will obtain average-consensus at the same instant as the BM algorithm.
An example is the "unit-delay double cycle sequence" defined in (170) of Appendix VII. Together with Theorem 4.3, the example (170) implies that the DA and DDA algorithms possess the weakest possible necessary conditions for average-consensus that any algorithm can have; this is illustrated in Fig. 1 below.

2) Algorithm 2 (DA): The convergence of Algorithm 2 depends on the following IVSC condition.

Definition 4.4: (IVSC) A communication sequence $C_{[0,t_1]}$ is "infinitely $V$-strongly connected" (IVSC) iff for each time instant $t \in [0, t_1)$ there exists a finite span of time $T_t \in (0, t_1 - t)$ such that $C_{[t, t+T_t]}$ satisfies the SVSC condition (34).

It follows that a sequence $C_{[0,t_1]}$ is IVSC iff we can define the infinite set of ordered pairs $\{(t^\ell_0, t^\ell_1) : \ell \in \mathbb N\}$,
$$t^0_0 = 0, \quad t^\ell_1 = \arg\min_t \{C_{[t^\ell_0, t]} \in \text{SVSC}\}, \quad t^\ell_0 = t^{\ell-1}_1(+), \quad \ell = 1, 2, \ldots$$
A sequence $C_{[0,t_1]}$ being IVSC is then equivalent to the condition,
$$C_{[0,t_1]} = \bigcup_{\ell \in \mathbb N} C_{[t^\ell_0, t^\ell_1]}, \quad C_{[t^\ell_0, t^\ell_1]} \in \text{SVSC},\ \forall \ell \in \mathbb N. \quad (35)$$
We now proceed to the convergence of the DA algorithm. The following theorem is our main result.

Theorem 4.5: (DA Sufficient Conditions) Consider Algorithm 2, namely the DA algorithm (15)-(19). Then average-consensus (6) is achieved at time $t = t_1(+)$ for any communication sequence $C_{[0,t_1]}$ satisfying the IVSC condition (35).

Proof: See Appendix VII.

The above result is interesting because the IVSC condition (35) assumes only a weak recurring connectivity between nodes, and also because the resource costs of the DA algorithm are significantly lower than those of the BM algorithm (see Sec. V-A). Many papers on average-consensus formation, such as [17], [3], [10], [26], [21], [25], assume communication conditions that are special cases of the IVSC condition. See also Conjecture 7.39 in Sec. VII-B of Appendix VII.
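The characterization (35) suggests a greedy scan: walk forward through the sequence in time order and close an epoch as soon as the signals collected so far form an SVSC block. The sketch below is our own illustration for the special case of instantaneous signals, written as `(sender, receiver, k)` triples:

```python
from itertools import permutations

def is_svsc(signals, nodes):
    """SVSC over instantaneous signals (sender, receiver, k): information
    from j reaches i through a chain of signals with strictly
    increasing times."""
    def reaches(j, i):
        earliest = {j: float("-inf")}
        for s, r, k in sorted(signals, key=lambda x: x[2]):
            if (s in earliest and k > earliest[s]
                    and k < earliest.get(r, float("inf"))):
                earliest[r] = k
        return i in earliest
    return all(reaches(j, i) for j, i in permutations(nodes, 2))

def svsc_epochs(signals, nodes):
    """Greedy partition into consecutive SVSC blocks, as in the
    characterization (35): each epoch ends at the first time its
    accumulated signals satisfy (34)."""
    epochs, block = [], []
    for sig in sorted(signals, key=lambda x: x[2]):
        block.append(sig)
        if is_svsc(block, nodes):
            epochs.append(block)
            block = []
    return epochs
```

Applied to a directed ring on three nodes repeated over ten time steps, this yields two complete SVSC epochs plus a leftover partial block, matching the interpretation of (35) as a concatenation of SVSC sub-sequences.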
3) Algorithm 3 (OH): The following SVCC condition will be shown sufficient for convergence of the OH algorithm.

Definition 4.6: (SVCC) A communication sequence $C_{[0,t_1]}$ is "singly $V$-completely connected" (SVCC) iff there exists a node $\hat i \in V$ and a time instant $t_{1/2} \in (0, t_1)$ such that,
$$S_{\hat i j}(t^{\hat i j}_0, t^{\hat i j}_1) \in C_{[0, t_{1/2}]},\ \forall j \in V_{-\hat i}; \qquad S_{j \hat i}(t^{j \hat i}_0, t^{j \hat i}_1) \in C_{(t_{1/2}, t_1]},\ \forall j \in V_{-\hat i}. \quad (36)$$

The first line in (36) implies that during the interval $[0, t_{1/2}]$ every node $j \in V_{-\hat i}$ will have transmitted a signal that was received by node $\hat i$. The second line in (36) implies that during the interval $(t_{1/2}, t_1]$ the node $\hat i$ will have transmitted a signal that is received by each node $j \in V_{-\hat i}$. We will let "$C_{[t_0,t_1]} \in$ SVCC" denote that a sequence $C_{[t_0,t_1]}$ is SVCC. Note that the identity of node $\hat i$ need not be known by any node. We now consider convergence of the OH algorithm.

Theorem 4.7: (OH Necessary and Sufficient Conditions) Consider Algorithm 3, namely the OH algorithm (20)-(24). Then average-consensus (6) is achieved at time $t = t_1(+)$ iff the communication sequence $C_{[0,t_1]}$ satisfies the following condition:

• (C): for each node $i$ for which there exists a node $j \in V_{-i}$ such that $S_{ij}(t^{ij}_0, t^{ij}_1) \notin C_{[0,t_1]}$ for all $(t^{ij}_0, t^{ij}_1) \in \mathbb R^2$, there exists a communication path $C^{i\ell}_{[t_0(i\ell),\, t_1(i\ell)]} \in C_{[0,t_1]}$ from at least one node $\ell$ such that $S_{\ell j}(t^{\ell j}_0, t^{\ell j}_1) \in C_{[0, t_0(i\ell))}$ for all $j \in V_{-\ell}$.

Proof: See Appendix VII.

Notice that any communication sequence that satisfies the SVCC condition (36) will also satisfy the condition (C) stated in Theorem 4.7; take $\ell = \hat i$, $t_0(j\ell) = t_{1/2}(+)$, and $C^{j\ell}_{[t_0(j\ell),\, t_1(j\ell)]} = S_{j\hat i}(t^{j\hat i}_0, t^{j\hat i}_1)$ for all $j \in V_{-\ell}$.
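The SVCC condition (36) can likewise be tested directly by scanning candidate hub nodes $\hat i$ and split times $t_{1/2}$. A sketch of our own (signals are `(sender, receiver, t0, t1)` tuples; the function name is hypothetical):

```python
def is_svcc(signals, nodes):
    """SVCC condition (36): some hub node receives a signal from every
    other node during [0, t_half], then transmits a signal to every other
    node during (t_half, t1]."""
    for hub in nodes:
        # Candidate split times: any reception time at the candidate hub.
        for t_half in sorted(t1 for s, r, t0, t1 in signals if r == hub):
            heard = {s for s, r, t0, t1 in signals
                     if r == hub and t1 <= t_half}
            told = {r for s, r, t0, t1 in signals
                    if s == hub and t0 > t_half}
            others = set(nodes) - {hub}
            if heard >= others and told >= others:
                return True
    return False
```

In a three-node example where node 0 first hears from nodes 1 and 2 and then transmits back to both, the condition holds with $\hat i = 0$; a sequence in which some node never reaches the hub fails it.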
This relation motivates the position of the OH algorithm in the Venn diagrams presented in Fig. 1. See also Remark 7.36 in Sec. VII-A. The condition (C) is more general than SVCC; it implies that each node $i \in V$ will either receive a signal directly from every other node $j \in V_{-i}$, or have a communication path from some node $\ell$ after the node $\ell$ has received a signal directly from every other node $j \in V_{-\ell}$. We have defined the SVCC condition not only because it is sufficient for average-consensus under the OH algorithm, but also because it is needed for the definition of the communication condition IVCC described next.

4) Algorithm 4 (DDA): The sufficient conditions for convergence under Algorithm 4 require the following definition.

Definition 4.8: (IVCC) A communication sequence $C_{[0,t_1]}$ is "infinitely $V$-completely connected" (IVCC) iff for each time instant $t \in [0, t_1)$ there exists a finite span of time $T_t \in (0, t_1 - t)$ such that $C_{[t, t+T_t]}$ satisfies the SVCC condition (36).

It follows that a sequence $C_{[0,t_1]}$ is IVCC iff we can define the infinite set of ordered pairs $\{(t^\ell_0, t^\ell_1) : \ell \in \mathbb N\}$,
$$t^0_0 = 0, \quad t^\ell_1 = \arg\min_t \{C_{[t^\ell_0, t]} \in \text{SVCC}\}, \quad t^\ell_0 = t^{\ell-1}_1(+), \quad \ell = 1, 2, \ldots$$
A sequence $C_{[0,t_1]}$ being IVCC is then equivalent to the condition,
$$C_{[0,t_1]} = \bigcup_{\ell \in \mathbb N} C_{[t^\ell_0, t^\ell_1]}, \quad C_{[t^\ell_0, t^\ell_1]} \in \text{SVCC},\ \forall \ell \in \mathbb N. \quad (37)$$
We note that the specific node $\hat i$ can vary between each $C_{[t^\ell_0, t^\ell_1]}$ sequence, and furthermore we do not assume that any node knows the identity of $\hat i$. The following theorem deals with the convergence of Algorithm 4.

Theorem 4.9: (DDA Sufficient Conditions) Consider Algorithm 4, namely the DDA algorithm (27)-(32). Then average-consensus (6) is achieved at some time $t \in (0, t_1)$ for any communication sequence $C_{[0,t_1]}$ satisfying the IVCC condition (37).
Proof: See Appendix VII.

The above result is not quite as interesting as Theorem 4.5, since even though the DDA algorithm is a discretized version of the DA algorithm, the IVCC condition (37) is far stronger than the IVSC condition (35). Also observe that the OH algorithm obtains average-consensus for any $C_{[0,t_1]} \in$ SVCC, whereas the DDA algorithm will obtain average-consensus only under the much stronger condition that $C_{[0,t_1]}$ satisfies (37). However, as mentioned above, due to example (170) of Appendix VII the DDA and DA algorithms can both obtain average-consensus under suitable SVSC sequences, and thus they possess much weaker necessary conditions than the OH algorithm. In fact, any communication sequence $C_{[0,t_1]}$ that strictly satisfies (36) implies that Algorithms 1-4 all obtain average-consensus at the exact same time instant. This is noteworthy because many past algorithms can only achieve average-consensus asymptotically (e.g. most iterative averaging schemes); in contrast, all four of the algorithms considered here can achieve the (finite) bench-mark time for average-consensus under appropriate communication sequences (e.g. any sequence $C_{[0,t_1]}$ that strictly satisfies (36)).

Summary. Theorems 4.2, 4.3, 4.5, 4.7, 4.9 above state necessary and sufficient communication conditions for average-consensus under Algorithms 1-4 given the assumptions (A1)-(A9). Each theorem is associated with one of four connectivity assumptions on the communication sequence $C_{[0,t_1]}$, denoted by {SVSC, IVSC, SVCC, IVCC} and defined in (34), (35), (36), and (37). Observe that the assumptions IVSC and SVCC are each sufficient conditions for SVSC. Furthermore, IVCC implies that both IVSC and SVCC are satisfied; see Fig. 1.
Notice that each connectivity condition assumes a set of directed signals with an arbitrary delay in the transmission time of each signal. This is significant because, apart from the flooding technique, no other consensus protocol in the current literature can ensure average-consensus in the presence of arbitrary delays in the transmission time of each signal for a priori unknown communication sequences. Of course, if the communication sequence is known a priori, then specific update protocols can always be constructed that guarantee average-consensus at the same instant as the BM algorithm. Current results on average-consensus that do allow communication delays assume the delays have pre-determined upper-bounds, and also require the use of averaging weights that are globally balanced and pre-determined off-line (see for example [21], [15], [28]). On a related note, besides flooding there appears to be no consensus protocol in the literature that has been proven to guarantee average-consensus for every communication sequence $C_{[0,t_1]}$ satisfying any one of the conditions (34), (35), (36), or (37). The majority of past results on average-consensus either require special cases of the IVSC condition, or else can only guarantee approximate average-consensus under the SVSC condition [19], [6], [1]. On the other hand, the two non-trivial algorithms DA and DDA require a set with cardinality upper-bounded by $O(n + d)$ to be communicated and stored at each node, and all previous algorithms besides the flooding and randomized protocols do not possess this drawback. The Venn diagrams in Fig. 1 summarize Theorems 4.2, 4.3, 4.5, 4.7, 4.9, as well as their relation to the Gossip and ARIS algorithms that will be used as comparisons to the four proposed algorithms.
It remains an open problem whether any protocol exists that guarantees average-consensus for all IVSC sequences without requiring a set with cardinality upper-bounded by $O(n + d)$ to be stored and communicated at each node.

Figure 1. Venn Diagram of Sufficient and Necessary Conditions for Algorithms 1-4 as well as the Comparison Algorithms Gossip and ARIS to Achieve Average-Consensus. The condition (C) in Theorem 4.7 is omitted for simplicity of presentation.

V. NUMERICAL IMPLEMENTATION AND EXAMPLES

This section presents the communication and storage costs of the various algorithms proposed above. In Sec. V-A below we define the resource costs for the six algorithms compared in this paper (four proposed in Sec. III and two others in Sec. VII-B). In Sec. V-B we present numerical simulations of the six algorithms under various randomized full network graphs when assuming instantaneous communication.

A. Algorithm Resource Costs

Each of the four average-consensus algorithms, as well as the two comparison algorithms presented in Sec. VII-B, requires that the knowledge set $K_i(t)$ and signal set $S_{ij}(t^{ij}_0, t^{ij}_1)$ be defined by a set of scalars, where each scalar has a particular meaning.
We are thus motivated to quantify the "resource cost" of each algorithm in terms of the total number of scalar values that are required to define the two sets $K_i(t)$ and $S_{ij}(t^{ij}_0, t^{ij}_1)$.

Table I. Minimum and maximum resource costs obtained by applying each algorithm in Sec. III to (38), (39), and (40). Here $d$ denotes the dimension of the initial consensus vectors, $n$ denotes the number of nodes, $\rho_{ij}(t)$ denotes the signal set dimension, $\phi_i(t)$ the knowledge set dimension, and $\omega_i(t)$ is the sum of $\rho_{ij}(t)$ and $\phi_i(t)$. We let $\lfloor\cdot\rfloor$ denote the "floor" operation.

|                     | BM                                        | DA           | OH                              | DDA                              | Gossip | ARIS                                        |
| min(φ_i(t))         | 4d+5                                      | 4d+6         | 4d+5                            | 4d+5                             | 2d     | 7+2(r+2)d                                   |
| max(φ_i(t))         | 2nd+4+⌊n/2⌋                               | 4d+2n+4      | 4d+4+⌊n/2⌋                      | 4d+4+⌊n/2⌋                       | 2d     | ⌊n/2⌋+6+2(r+2)d                             |
| min(ρ_ij(t))        | 2d+1                                      | 2d+1         | 2d+2                            | 2d+1                             | 2d     | 3+2(r+1)d                                   |
| max(ρ_ij(t))        | 2(n−1)d+⌊n/2⌋                             | 2d+2n        | 2d+2                            | 2d+⌊n/2⌋                         | 2d     | ⌊n/2⌋+2+2(r+1)d                             |
| min(ω_i(t))         | 6d+6                                      | 6d+8         | 6d+7                            | 6d+6                             | 4d     | 10+4(r+3/2)d                                |
| max(ω_i(t))         | 2(2n−1)d+4+2⌊n/2⌋                         | 6d+4n+4      | 6d+6+⌊n/2⌋                      | 6d+4+2⌊n/2⌋                      | 4d     | 2⌊n/2⌋+8+4(r+3/2)d                          |
| O(φ)                | nd                                        | n+d          | n+d                             | n+d                              | d      | rd+n                                        |
| O(ρ)                | nd                                        | n+d          | d                               | n+d                              | d      | rd+n                                        |

In particular, for any $t \ge 0$ we define the "storage cost" of an arbitrary node $i \in V$ as $\phi_i(t)$,
$$\phi_i(t) = \text{the minimum number of scalars required to define the knowledge set } K_i(t). \quad (38)$$
Likewise we define the "communication cost" $\rho_{ij}(t^{ij}_0)$ of an arbitrary signal $S_{ij}(t^{ij}_0, t^{ij}_1)$ as follows,
$$\rho_{ij}(t^{ij}_0) = \text{the minimum number of scalars required to define the set } S_{ij}(t^{ij}_0, t^{ij}_1).$$
(39)

Next define the total resource cost of node $i$ at time $t \ge 0$ as $\omega_i(t)$,
$$\omega_i(t) = \phi_i(t) + \rho_{ij}(t). \quad (40)$$
Based on the knowledge set and signal specifications defined in Sec. III, Table I presents the resource cost computations of each algorithm when using the definitions (38), (39), and (40). The entries of Table I are derived in Sec. VII-C of Appendix VII. Note that the storage cost $\phi_i(t)$ is defined per node, and the communication cost $\rho_{ij}(t)$ is defined per signal.

The total maximum resource costs of the BM algorithm increase on the order $O(nd)$, whereas the total maximum resource costs of the DA and DDA algorithms increase on the order $O(n + d)$. Although the total maximum resource costs of the OH algorithm increase on the order $O(n + d)$, the maximum resource cost of each signal under the OH algorithm increases only on the order $O(d)$. However, the communication conditions necessary for average-consensus under the OH algorithm are much stronger than the conditions necessary under the BM, DA, and DDA algorithms. This disparity makes it difficult to state definitive results regarding the least costly algorithm under general communication sequences. If condition (C) in Theorem 4.7 is known to hold a priori, then the OH algorithm may be preferable to the DA and DDA algorithms due to its lower communication cost, $O(d) \le O(d + n)$. On the other hand, under example (170) in Appendix VII the DDA algorithm is preferable to both the DA and OH algorithms, since the former implies larger resource costs and the latter will not obtain average-consensus. The Gossip algorithm in [3] has total resource costs that increase on the order $O(d)$; however, this algorithm requires strictly bi-directional and instantaneous communication, see Sec. VII-B as well as Fig. 2 in Sec. V-B.
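The leading-order rows of Table I make this trade-off easy to tabulate. A minimal helper of our own (using only the $O(\phi)$ and $O(\rho)$ rows of Table I; constant factors and additive constants are dropped):

```python
def cost_orders(n, d, r):
    """Leading-order storage and per-signal costs from the O(phi) and
    O(rho) rows of Table I, for n nodes, d-dimensional initial vectors,
    and ARIS parameter r."""
    storage = {"BM": n * d, "DA": n + d, "OH": n + d,
               "DDA": n + d, "Gossip": d, "ARIS": r * d + n}
    signal = {"BM": n * d, "DA": n + d, "OH": d,
              "DDA": n + d, "Gossip": d, "ARIS": r * d + n}
    return storage, signal

# The simulation parameters of Sec. V-B: n = 80, d = 1, r = n.
storage, signal = cost_orders(80, 1, 80)
```

With these parameters the ARIS storage order ($rd + n = 160$) exceeds the BM order ($nd = 80$), matching the remark below that ARIS is more costly than BM when $r \ge n$.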
The total resource costs of the ARIS algorithm increase on the order $O(rd + n)$, where $r$ is an ARIS algorithm parameter explained in Sec. VII-B. If $r \ge n$ then ARIS is more costly than the BM algorithm. For this reason, the simulations presented in Sec. V-B assume $r = n$. The RIS algorithm proposed in [19] requires $r$ random variables to be initially generated at each node for each element of the respective local initial consensus vector $s_i(0) \in \mathbb R^d$; the RIS algorithm also requires these random variables to be communicated between nodes, thus both the storage and communication costs of the RIS algorithm increase on the order $O(rd)$.

B. Numerical Results

We present here numerical simulations of the four proposed algorithms together with the two comparison algorithms. The algorithm parameters were chosen as $n = 80$ (number of nodes), $d = 1$ (dimension of the initial consensus vectors), $r = n$ (number of ARIS random variables generated per initial consensus element), and $s_i(0) = i,\ \forall i \in V$ (initial consensus vector values). The node communication is assumed to be instantaneous and in discrete-time, using the following randomized protocol:

• at each $k \in \mathbb N$, two nodes $(i, j) \in V^2$ are uniformly chosen at random such that $i \ne j$,
• with probability one, the node $i$ sends a signal to the node $j$ at time $k$,
• with probability $p$, the node $j$ sends a signal to the node $i$ at time $k$.

We compare four choices of $p$, namely $p \in \{1, \frac{1}{2}, \frac{1}{4}, 0\}$. Note that $p = 1$ implies instantaneous bi-directional communication. As $p$ decreases there will be fewer expected signals per time instant, thus we expect each algorithm to converge more slowly for lower values of $p$. Fig. 2 shows the network consensus error under each algorithm for each value of $p \in \{1, \frac{1}{2}, \frac{1}{4}, 0\}$. It is clear that the Gossip algorithm converges to average-consensus only for $p = 1$.
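The randomized protocol above is straightforward to reproduce. The sketch below is our own illustration: it draws node pairs exactly as described, but uses a simple flooding-style update (a stand-in for the six algorithms compared) so that completion can be detected exactly:

```python
import random

def simulate(n, p, seed=0, max_steps=100000):
    """Simulate the randomized protocol of Sec. V-B: at each step pick a
    distinct pair (i, j) uniformly; i signals j always, and j signals i
    with probability p.  Each node floods its known set of initial-value
    indices; return the step at which every node knows all n values."""
    rng = random.Random(seed)
    known = [{i} for i in range(n)]      # node i initially knows only s_i(0)
    for k in range(1, max_steps + 1):
        i, j = rng.sample(range(n), 2)
        known[j] |= known[i]             # signal i -> j, sent with probability 1
        if rng.random() < p:
            known[i] |= known[j]         # signal j -> i, sent with probability p
        if all(len(K) == n for K in known):
            return k                     # every node can compute the average
    return None
```

Note that even with $p = 0$ information still spreads, since the mandatory $i \to j$ signal is drawn over all ordered pairs as time proceeds; this is consistent with the uni-directional connectivity conditions studied above.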
For $p \in \{\frac{1}{2}, \frac{1}{4}, 0\}$ the Gossip algorithm converges to a consensus that is increasingly distant from the average-consensus. The BM algorithm converges fastest in all simulations, as expected from Theorems 4.2 and 4.3. Initially the DA algorithm can be observed to almost match the BM algorithm in all simulations; however, as time proceeds, for $p = 1$ the Gossip and ARIS algorithms eventually overtake DA, and for $p \in \{\frac{1}{2}, \frac{1}{4}, 0\}$ the ARIS algorithm eventually overtakes DA. However, the resource costs of ARIS in our simulations are greater than even those of the BM algorithm, thus further research is needed to objectively evaluate the trade-off between resource cost and network consensus error for each algorithm under various communication conditions.

Figure 2. Convergence of Consensus Algorithms for Communication Probabilities $p \in \{1, \frac{1}{2}, \frac{1}{4}, 0\}$ (top left to bottom right). Each panel plots the consensus error $E$ against log(time); the consensus error $E$ is defined above (6). Observe that the Gossip consensus error vanishes only for $p = 1$, because this is the only model that implies strictly bi-directional communication. All algorithms converge moderately slower as $p$ decreases.

VI. CONCLUSION

This paper has described and analyzed four distributed algorithms designed to solve the average-consensus problem under general uni-directional connectivity conditions. We derived necessary and sufficient conditions for the convergence to average-consensus under each respective algorithm.
The conditions for convergence were based on two types of connectivity conditions, namely the singly $V$-strongly connected sequence (defined in Sec. IV-A1) and the singly $V$-completely connected sequence (defined in Sec. IV-A3). Both connectivity conditions allow arbitrary delays in the transmission time of each signal, and we did not assume that the sending node knows the identity of the receiving node. The resource costs for each of the algorithms were derived and shown to differ in their order of magnitude with respect to the parameters $n$ (number of nodes) and $d$ (dimension of the initial consensus vectors). Comparisons were made with two known consensus algorithms, referred to as the Gossip algorithm and the adapted randomized information spreading (ARIS) algorithm. Simulations were provided, as well as Venn diagrams of the connectivity conditions required for average-consensus under each algorithm. The non-trivial algorithms considered here are relatively advantageous under weak communication conditions if the dimension $d$ of the initial consensus vectors exceeds the network size $n$. The works [4] and [20] provide two practical examples of when $d \ge n$ might typically be the case, e.g. distributed inference regarding a $\sqrt n$-dimensional process or parameter in noise.

The four communication conditions we proposed were deterministic; no stochastic properties were assumed in regard to the signal process between nodes. However, our framework allowed directed signals as well as arbitrary delays in transmission time, hence every causal signal sequence is a special case of our framework. This suggests that the four proposed algorithms can be applied to stochastic communication models for which there is a non-zero probability of consensus under any distributed algorithm; however, future work is needed in this direction.
Future work is also needed to obtain a lower bound on the convergence rate to average-consensus for given characterizations of the communication sequence under each proposed algorithm, as well as improved algorithms designed specifically for particular communication sequences. For an objective evaluation of the various average-consensus algorithms, additional research is needed to compare the evolution of a cost function of the resource cost and network consensus error under a variety of communication sequences. Lastly, an interesting generalization of the average-consensus problem is to allow the initial consensus vectors $s_i(0)$ to vary with time, as discussed for instance by [20], [8], [22], [13], [29], [16]. Applying the algorithms and communication conditions proposed in this work could yield further results with regard to ensuring that the average $\bar s(t)$ is contained in the knowledge set $K_i(t)$ of each node $i \in V$ at some time instants $t$ for given dynamic models of the set $\{s_i(t) : i \in V\}$.

As a final note, it is worth mentioning that each of the four proposed algorithms can obtain a consensus on any linear combination of the initial consensus vectors $\{s_i(0) \in \mathbb R^d : i \in V\}$ under the exact same communication conditions as stated in the main results. In other words, suppose a vector $w \in \mathbb R^n$ was initially known at each node; then, if each algorithm updated the normal consensus estimate based on (9) with $\frac{1}{n}\mathbf 1_n$ replaced by the vector $w$, the same conditions stated in Sec. IV imply that the respective algorithms ensure $\hat s_i(t_1(+)) = \sum_{j \in V} s_j(0) w_j$ for all $i \in V$. The proofs of these results follow by identical arguments to those presented in this work, simply by replacing $\frac{1}{n}\mathbf 1_n$ by the vector $w$.

VII.
APPENDIX

In Sec. VII-A of this appendix we derive the consensus estimate initialization for each of the four proposed algorithms, as well as the proofs of Theorems 4.2, 4.3, 4.5, 4.7, and 4.9. In Sec. VII-B we define the Gossip algorithm proposed in [3] in terms of the class of distributed algorithms (3), (7), and then we define the ARIS algorithm in these terms as well. In Sec. VII-C we derive the resource costs presented in Table I of Sec. V-A, and in Sec. VII-D we define the "unit-delay double cycle sequence" as an example of an SVSC sequence that implies the DA, DDA, and BM algorithms all obtain average-consensus at the same instant. We present two conjectures in Sec. VII-A, and two conjectures in Sec. VII-B.

Throughout the appendix we denote the "error" of the normal consensus estimate $\hat v_i(t)$ as,
$$E_{\frac{1}{n}\mathbf 1_n}(\hat v_i(t)) = \sqrt{\left\|\hat v_i(t) - \tfrac{1}{n}\mathbf 1_n\right\|^2}. \quad (41)$$
The total reduction in normal consensus squared error resulting from the sequence $C_{[t_0,t_1]}$ is then,
$$E^2(C_{[t_0,t_1]}) \equiv \sum_{S_{ij}(t^{ij}_0, t^{ij}_1) \in C_{[t_0,t_1]}} E^2\big(S_{ij}(t^{ij}_0, t^{ij}_1)\big), \quad (42)$$
where $E^2\big(S_{ij}(t^{ij}_0, t^{ij}_1)\big)$ is defined using the normal consensus error $E_{\frac{1}{n}\mathbf 1_n}(\hat v(t))$ in (41),
$$E^2\big(S_{ij}(t^{ij}_0, t^{ij}_1)\big) \equiv E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t^{ij}_1)\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t^{ij}_1(+))\big). \quad (43)$$
It is convenient to use the following definition.

Definition 7.1: Under (8), (4), and (5), the average-consensus problem is solved at time $t$ iff the "network normal consensus error" $\sum_{i=1}^n E_{\frac{1}{n}\mathbf 1_n}(\hat v_i(t))$ is zero, that is,
$$\sum_{i=1}^n E_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t)\big) = 0, \quad (44)$$
where $E_{\frac{1}{n}\mathbf 1_n}(\hat v_i(t))$ is defined in (41).

A. Algorithm Convergence Proofs

Lemma 7.2: (Consensus Estimate Initialization) Given the initial knowledge state (A4), the unique solution to both (9) and (26) is,
$$\hat v_i(0) = \tfrac{1}{n} e_i, \quad \hat s_i(0) = \tfrac{1}{n} s_i(0).$$
(45)

Proof: Under (A4) the update (9) becomes,

$\hat v_i(0) = \arg\min_{\tilde v} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad \text{s.t. (8) holds, given } K_i(0) \supseteq \{i, n, s_i(0)\}. \quad (46)$

Observe that the vector $e_i$ can be locally constructed at each node $i \in \mathcal V$ based only on the data $\{i, n\}$. Next observe that $s_i(0) = S e_i$ for all $i \in \mathcal V$. Given that $s_i(0) = S e_i$ is known by node $i$ at $t = 0$, the set of vectors $\hat v_i(0)$ for which (8) holds is $\operatorname{span}\{e_i\}$, thus (46) becomes,

$\hat v_i(0) = \arg\min_{\tilde v \in \operatorname{span}\{e_i\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = V V^+ \tfrac{1}{n}\mathbf 1_n, \quad V = [e_i],$
$\phantom{\hat v_i(0)} = V (V'V)^{-1} V' \tfrac{1}{n}\mathbf 1_n = \tfrac{1}{n} e_i. \quad (47)$

If $\hat v_i(0) = \tfrac{1}{n} e_i$ then (8) implies $\hat s_i(0) = \tfrac{1}{n} s_i(0)$. Note that $\tfrac{1}{n} e_i$ and $\tfrac{1}{n} s_i(0)$ can both be initially computed at node $i$ given that $K_i(0) \supseteq \{i, n, s_i(0)\}$. The solution (45) implies each entry of $\hat v_i(0)$ lies in $\{0, \tfrac{1}{n}\}$, and thus (45) is a feasible solution to (26). It then follows that (45) is also the unique solution to (26), since the feasible space of (26) is contained in the feasible space of (9), and both (9) and (26) minimize the same objective function.

Lemma 7.3: (BM Normal Consensus Estimate Update) Applying (10) and (11) to (9) yields the BM normal consensus estimate update (12).

Proof: If $\hat v_i(t_1^{ij}) = \tfrac{1}{n}\mathbf 1_n$ then (8) implies $\hat s_i(t_1^{ij}) = \bar s(0)$ and thus node $i$ need not update its knowledge state regardless of the signal $S_{ij}(t_0^{ij}, t_1^{ij})$. If $\hat v_j(t_0^{ij}) = \tfrac{1}{n}\mathbf 1_n$ then (10) together with (8) implies,

$S_{ij}(t_0^{ij}, t_1^{ij}) \supseteq \hat s_j(t_0^{ij}) \supseteq \bar s(0). \quad (48)$

In this case (9) can be re-written,

$\hat v_i(t_1^{ij}(+)) = \arg\min_{\tilde v} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad \text{s.t. (8) holds, given } \hat s_j(t_0^{ij}) = \bar s(0).$
(49)

Since $\hat s_j(t_0^{ij}) = \bar s(0)$ is known we can let $\hat s_i(t_1^{ij}(+)) = \bar s(0)$ and thus obtain $\hat v_i(t_1^{ij}(+)) = \tfrac{1}{n}\mathbf 1_n$ as the unique global solution to (49) (note that $\tfrac{1}{n}\mathbf 1_n$ can be computed since (A4) and (A7) imply $K_i(t_1^{ij}) \supseteq \{n\}$). Notice that this solution coincides with (12). Next assume that $\hat v_i(t_1^{ij}) \ne \tfrac{1}{n}\mathbf 1_n$ and $\hat v_j(t_0^{ij}) \ne \tfrac{1}{n}\mathbf 1_n$. Under (10) and (11) we can then re-write (9) as,

$\hat v_i(t_1^{ij}(+)) = \arg\min_{\tilde v} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad \text{s.t. (8) holds, given } s_\ell(0) = S e_\ell \text{ for all } \{\ell : v^{ij}_\ell(t_1^{ij}, t_0^{ij}) = 1\}. \quad (50)$

Given that $s_\ell(0) = S e_\ell$ for all $\{\ell : v^{ij}_\ell(t_1^{ij}, t_0^{ij}) = 1\}$ is known, the set of vectors $\hat v_i(t_1^{ij}(+))$ for which (8) holds is $\operatorname{span}\{e_\ell : v^{ij}_\ell(t_1^{ij}, t_0^{ij}) = 1\}$, thus (50) becomes,

$\hat v_i(t_1^{ij}(+)) = \arg\min_{\tilde v \in \operatorname{span}\{e_\ell : v^{ij}_\ell(t_1^{ij}, t_0^{ij}) = 1\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = V_{(BM)} V^+_{(BM)} \tfrac{1}{n}\mathbf 1_n, \quad V_{(BM)} = [e_\ell : v^{ij}_\ell(t_1^{ij}, t_0^{ij}) = 1]. \quad (51)$

Let $L = v^{ij}(t_1^{ij}, t_0^{ij})' \mathbf 1_n$ denote the cardinality of the set $\{\ell : v^{ij}_\ell(t_1^{ij}, t_0^{ij}) = 1\}$. Since the columns of $V_{(BM)}$ are linearly independent, the right-hand side (RHS) of (51) can be computed as,

$V_{(BM)} V^+_{(BM)} \tfrac{1}{n}\mathbf 1_n = \tfrac{1}{n} V_{(BM)} (V'_{(BM)} V_{(BM)})^{-1} V'_{(BM)} \mathbf 1_n = \tfrac{1}{n} V_{(BM)} (I_{L \times L})^{-1} \mathbf 1_L = \tfrac{1}{n} v^{ij}(t_1^{ij}, t_0^{ij}),$

where $I_{L \times L}$ denotes the identity matrix of dimension $L$.

Theorem 4.2 (BM Network Convergence to Average-Consensus)

Proof: By the BM update (12), when any node receives a signal $S_{ij}(t_0^{ij}, t_1^{ij})$ the receiving node $i$ either receives the desired average-consensus value $\bar s(0)$, or receives the initial consensus vector $s_\ell(0)$ of every node $\ell$ that has a communication path to node $j$ within the time span $[0, t_0^{ij})$.
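As a numerical sanity check on the least-squares computation in (51), the following sketch (using numpy; the variable names are ours, not the paper's) projects $\tfrac{1}{n}\mathbf 1_n$ onto the span of the known indicator columns and recovers $\tfrac{1}{n} v^{ij}$:

```python
import numpy as np

n = 6
known = np.array([1, 0, 1, 1, 0, 0])       # v^{ij}: indicator of known initial vectors
V = np.eye(n)[:, known == 1]               # columns e_l with v^{ij}_l = 1

# Least-squares projection of (1/n)1_n onto span{e_l}: V (V'V)^{-1} V' (1/n)1_n
target = np.full(n, 1.0 / n)
proj = V @ np.linalg.solve(V.T @ V, V.T @ target)

# Since the columns are orthonormal indicator vectors, the projection is (1/n) v^{ij}
assert np.allclose(proj, known / n)
```

The assertion reflects exactly the simplification in the proof: $V'_{(BM)} V_{(BM)} = I_{L \times L}$ because the indicator columns are orthonormal.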
If $\mathcal C_{[0,t_1]}$ satisfies (34) then every node has a communication path to every other node within the time span $[0, t_1]$, thus (A7) and (12) together imply that at time $t_1(+)$ every node $i$ will compute $\hat v_i(t_1(+)) = \tfrac{1}{n}\mathbf 1_n$. This implies (44) holds at $t_1(+)$ and hence by Definition 7.1 a network average-consensus is obtained at time $t = t_1(+)$.

Theorem 4.3 (BM, DA, OH and DDA Necessary Conditions)

Proof: If $\mathcal C_{[0,t_1]} \notin$ SVSC then there exists a node $i \in \mathcal V$ that does not have a communication path to some node $j \in \mathcal V_{-i}$ within the time span $[0, t_1]$. At time $t_1$ the node $j$ thus cannot have any knowledge that is contained in $K_i(t)$ for any $t \le t_1$, regardless of the knowledge set update rule (2) and signal specification (3). Hence $\hat s_j(t_1(+))$ cannot be a function of $s_i(0)$ for an arbitrary initial consensus vector $s_i(0)$. It then follows that no distributed algorithm (2), (3) can imply (6) is satisfied at time $t \le t_1(+)$ for an arbitrary set of initial consensus vectors $\{s_i(0) : i \in \mathcal V\}$.

We now present a conjecture regarding the BM algorithm. From Theorem 4.2 and (10)−(14) it follows that the BM algorithm implies the following property (P) of each knowledge set $K_i(t)$:

• (P): $C^{ij}_{[t_0(ij), t_1(ij)]} \subset \mathcal C_{[0,t_1]}$ implies
  $K_i(t_1(+)) \supseteq \{s_j(0)\}$, if there exists $\ell$ s.t. $C^{i\ell}_{[t_0(i\ell), t_1(i\ell)]} \not\subset \mathcal C_{[0,t_1]}$;
  $K_i(t_1(+)) \supseteq \{\bar s(0)\}$, if $C^{i\ell}_{[t_0(i\ell), t_1(i\ell)]} \subset \mathcal C_{[0,t_1]}$ for all $\ell \in \mathcal V_{-i}$.

The condition (P) forms an equivalence class among all algorithms $\mathcal A$ defined by (2) and (3). From the second line in (P) it follows that any algorithm $\mathcal A$ that satisfies (P) will imply (6) holds at time $t = t_1(+)$ for any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (34).
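The necessity argument above rests on the existence of time-respecting communication paths. The following sketch (our own encoding of signals as (sender, receiver, send time, receive time) tuples; not notation from the paper) checks whether every ordered pair of nodes is joined by such a path, i.e. whether a finite sequence is strongly connected in the sense of (34). A path may only use a signal whose send time is no earlier than the time the information arrived at the sender:

```python
import math

# Hypothetical encoding: each signal is (j, i, t_send, t_receive), node j -> node i.
signals = [(0, 1, 0.0, 0.5), (1, 2, 1.0, 1.2), (2, 0, 1.5, 2.0),
           (0, 2, 2.1, 2.2), (2, 1, 2.3, 2.4), (1, 0, 2.5, 2.6)]
n = 3

# earliest[j][i] = earliest time node j's initial knowledge can arrive at node i
# via a time-respecting path (send after previous reception), computed by relaxation.
earliest = [[math.inf] * n for _ in range(n)]
for j in range(n):
    earliest[j][j] = 0.0
changed = True
while changed:
    changed = False
    for (l, i, ts, tr) in signals:
        for j in range(n):
            if earliest[j][l] <= ts and tr < earliest[j][i]:
                earliest[j][i] = tr
                changed = True

# SVSC in the sense of (34): every node has a communication path to every other node.
svsc = all(earliest[j][i] < math.inf for j in range(n) for i in range(n))
assert svsc
```

Dropping any one of the first three signals breaks strong connectivity, which is exactly the situation exploited in the proof of Theorem 4.3.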
We now conjecture that if an algorithm $\mathcal A$ implies (6) holds at time $t = t_1(+)$ for every communication sequence $\mathcal C_{[0,t_1]}$ satisfying (34), then algorithm $\mathcal A$ must have resource costs at least as great as any algorithm that satisfies (P).

Conjecture 7.4: If an algorithm $\mathcal A$ guarantees (6) holds at time $t = t_1(+)$ for every communication sequence $\mathcal C_{[0,t_1]}$ satisfying (34), then the algorithm $\mathcal A$ will require that a set with cardinality of order at least $O(nd)$ can be communicated and stored at each node.

The above conjecture implies that any algorithm $\mathcal A$ that satisfies (P) will require that a set with cardinality upper bounded by $O(nd)$ can be communicated and stored at each node; this is why searching for less costly algorithms is of importance. The problem is that less costly algorithms tend to require stronger communication conditions than the BM algorithm, and they also do not guarantee average-consensus is obtained as quickly as the BM algorithm. We note that due to the resource costs associated with the RIS and ARIS algorithms, Conjecture 7.38 in Sec. VII-B2 does not contradict Conjecture 7.4.

Proof. (Theorem 4.5) Lemmas 7.5 − 7.19.

Overview of Proof. To prove Theorem 4.5 we initially show in Lemma 7.6 that the update (17) implies each normal consensus estimate $\hat v_i(t)$ satisfies the normalization property (54). Lemma 7.9 proves that each normal consensus estimate $\hat v_i(t)$ also satisfies the "zero local error" property $\hat v_{ii}(t) = 1/n$. Without the latter, the following lemmas would still imply convergence of all consensus estimates, but the final consensus value would not necessarily equal the average $\bar s(0)$ defined in (1). The essence of the convergence proof is that the reduction in error that results from any signal will eventually vanish if $\mathcal C_{[0,t_1]}$ satisfies (35); see Lemma 7.13.
Applying this result to the DA lower bounds on the reduction in error derived in Lemmas 7.10 and 7.11, we can show that each normal consensus vector will necessarily converge to a common vector; see Lemma 7.15. Together with the two technical results derived in Lemmas 7.17 and 7.18, we then combine the triangle inequality and the "zero local error" property to prove that the common vector will approach $\tfrac{1}{n}\mathbf 1_n$ in the $L_2$ norm as time approaches $t_1$. This implies (44) holds at $t_1(+)$ and hence by Definition 7.1 a network average-consensus is obtained at time $t = t_1(+)$.

Lemma 7.5: (DA Normal Consensus Estimate Update) Applying (15) and (16) to (9) yields the DA normal consensus estimate update $\hat v_i(t_1^{ij}(+))$ defined in (17).

Proof: Under (15) and (16) we can re-write (9) as,

$\hat v_i(t_1^{ij}(+)) = \arg\min_{\tilde v} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad \text{s.t. (8) holds, given } \hat s_i(t_1^{ij}) = S \hat v_i(t_1^{ij}),\ \hat s_j(t_0^{ij}) = S \hat v_j(t_0^{ij}),\ \text{and } s_i(0) = S e_i. \quad (52)$

Given that $\hat s_i(t_1^{ij}) = S \hat v_i(t_1^{ij})$, $\hat s_j(t_0^{ij}) = S \hat v_j(t_0^{ij})$, and $s_i(0) = S e_i$ are known, the set of vectors $\hat v_i(t_1^{ij}(+))$ for which (8) holds is $\operatorname{span}\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}$, thus (52) can be re-written as,

$\hat v_i(t_1^{ij}(+)) = \arg\min_{\tilde v \in \operatorname{span}\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2. \quad (53)$

The update (17) follows immediately from (53).

Lemma 7.6: (DA Consensus Estimate Normalization) Every normal consensus estimate $\hat v_i(t)$ satisfies

$\|\hat v_i(t)\|^2 = \tfrac{1}{n} \hat v_i(t)' \mathbf 1_n, \quad \forall i \in \mathcal V,\ \forall t \ge 0. \quad (54)$

Proof: Note that $\hat v_i(0) = \tfrac{1}{n} e_i$ satisfies (54) for each $i \in \mathcal V$. Next observe that under (A7) the estimate $\hat v_i(t)$ will not change unless a signal $S_{ij}(t_0^{ij}, t_1^{ij})$ is received at node $i$.
If a signal is received then by Lemma 7.5 the estimate $\hat v_i(t)$ is updated to the unique solution of (53). Thus to finish the proof it suffices to show that if a vector $v \in \mathbb R^n$ does not satisfy (54) then $v$ is not the solution to (53). To prove this we show that if (54) does not hold then the vector $w$ defined,

$w = v \big(v' \mathbf 1_n\big) / \big(n \|v\|^2\big),$

will satisfy the inequality

$\big\|w - \tfrac{1}{n}\mathbf 1_n\big\|^2 < \big\|v - \tfrac{1}{n}\mathbf 1_n\big\|^2. \quad (55)$

Notice that since $w$ is contained in $\operatorname{span}\{v\}$, the inequality (55) implies that $v$ is not the solution to (53). Next observe that if a vector $v$ does not satisfy (54) then,

$\big(\|v\|^2 - \tfrac{1}{n} v' \mathbf 1_n\big)^2 > 0. \quad (56)$

Expanding (56) yields,

$(\|v\|^2)^2 - \tfrac{2}{n} v' \mathbf 1_n \|v\|^2 + \big(\tfrac{1}{n} v' \mathbf 1_n\big)^2 > 0. \quad (57)$

Re-arranging (57) then implies (55),

$(\|v\|^2)^2 - \tfrac{2}{n} v' \mathbf 1_n \|v\|^2 > -\big(\tfrac{1}{n} v' \mathbf 1_n\big)^2,$
$\|v\|^2 - \tfrac{2}{n} v' \mathbf 1_n + \tfrac{1}{n} > \Big(\tfrac{v' \mathbf 1_n}{n \|v\|^2}\Big)^2 \|v\|^2 - \tfrac{2 (v' \mathbf 1_n)^2}{n^2 \|v\|^2} + \tfrac{1}{n},$
$\big\|v - \tfrac{1}{n}\mathbf 1_n\big\|^2 > \Big\|v \tfrac{v' \mathbf 1_n}{n \|v\|^2} - \tfrac{1}{n}\mathbf 1_n\Big\|^2.$

Lemma 7.7: (DA Non-Decreasing Normal Consensus Estimate Magnitude) Each magnitude $\|\hat v_i(t)\|^2$ is a non-decreasing function of $t \ge 0$ for all $i \in \mathcal V$.

Proof: Note that under (A7) the estimate $\hat v_i(t)$ will not change unless a signal is received at node $i$. If a signal $S_{ij}(t_0^{ij}, t_1^{ij})$ is received then the DA update problem (53) implies the update $\hat v_i(t_1^{ij}(+))$ must satisfy,

$\big\|\hat v_i(t_1^{ij}(+)) - \tfrac{1}{n}\mathbf 1_n\big\|^2 \le \big\|w - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad \forall w \in \operatorname{span}\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}. \quad (58)$

Since $\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}) \in \operatorname{span}\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}$, the inequality (58) implies,

$\big\|\hat v_i(t_1^{ij}(+)) - \tfrac{1}{n}\mathbf 1_n\big\|^2 \le \min\Big\{\big\|\hat v_i(t_1^{ij}) - \tfrac{1}{n}\mathbf 1_n\big\|^2,\ \big\|\hat v_j(t_0^{ij}) - \tfrac{1}{n}\mathbf 1_n\big\|^2\Big\}. \quad (59)$

Next observe that if a vector $v \in \mathbb R^n$ satisfies (54) then,

$\big\|v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = \|v\|^2 - \tfrac{2}{n} v' \mathbf 1_n + \tfrac{1}{n} = \tfrac{1}{n} - \|v\|^2.$
(60)

Due to Lemma 7.6, all normal consensus estimates satisfy (54), thus we can apply (60) to (59) and obtain,

$\tfrac{1}{n} - \big\|\hat v_i(t_1^{ij}(+))\big\|^2 \le \min\Big\{\tfrac{1}{n} - \big\|\hat v_i(t_1^{ij})\big\|^2,\ \tfrac{1}{n} - \big\|\hat v_j(t_0^{ij})\big\|^2\Big\}. \quad (61)$

Subtracting both sides of (61) from $\tfrac{1}{n}$ then yields,

$\big\|\hat v_i(t_1^{ij}(+))\big\|^2 \ge \max\Big\{\big\|\hat v_i(t_1^{ij})\big\|^2,\ \big\|\hat v_j(t_0^{ij})\big\|^2\Big\},$
$\phantom{\big\|\hat v_i(t_1^{ij}(+))\big\|^2} \ge \big\|\hat v_i(t_1^{ij})\big\|^2, \quad (62)$

thus each magnitude $\|\hat v_i(t)\|^2$ is a non-decreasing function of $t \ge 0$ for all $i \in \mathcal V$.

Lemma 7.8: (Equality of Normalized Linearly Dependent Vectors) If two linearly dependent vectors $\hat v_i, \hat v_j \in \mathbb R^n$ both satisfy (54), then $\|\hat v_j\|^2 = \hat v_i' \hat v_j = \|\hat v_i\|^2$.

Proof: If $\hat v_i$ and $\hat v_j$ are linearly dependent then there exists some $k \ne 0$ such that $\hat v_i = k \hat v_j$, thus if both vectors also satisfy (54) then,

$\tfrac{1}{n} \hat v_i' \mathbf 1_n = k \tfrac{1}{n} \hat v_j' \mathbf 1_n \ \Rightarrow\ k = \big(\hat v_j' \mathbf 1_n\big)^{-1} \big(\hat v_i' \mathbf 1_n\big) = \|\hat v_i\|^2 / \|\hat v_j\|^2,$
$\|\hat v_i\|^2 = k^2 \|\hat v_j\|^2 \ \Rightarrow\ k^2 = \|\hat v_i\|^2 / \|\hat v_j\|^2. \quad (63)$

Combining the RHS of the first and second lines in (63) implies $k^2 = k$, and thus $k = 1$ since $k \ne 0$. Hence $\hat v_i = \hat v_j$ and the result follows.

Lemma 7.9: (DA Local Zero Error Property) Every normal consensus estimate $\hat v_i(t)$ satisfies

$\hat v_{ii}(t) = \tfrac{1}{n}, \quad \forall i \in \mathcal V,\ \forall t \ge 0. \quad (64)$

Proof: By Lemma 7.2, $\hat v_{ii}(0) = \tfrac{1}{n}$ for each $i \in \mathcal V$. Next observe that under (A7) the estimate $\hat v_i(t)$ will not change unless a signal $S_{ij}(t_0^{ij}, t_1^{ij})$ is received at node $i$. If a signal is received then $\hat v_i(t)$ is updated to the solution of (53). We now show that under the assumption (64) the solution $\hat v_i(t_1^{ij}(+))$ to (53) will imply (64) for every set of vectors $\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}$. First assume $\hat v_i(t_1^{ij})$, $\hat v_j(t_0^{ij})$, and $e_i$ are pairwise linearly dependent. In this case the update problem (53) reduces to the RHS of (47) and thus implies (64).
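The normalization property (54), the error identity (60), and the strict improvement (55) are all easy to check numerically. The sketch below uses numpy with example vectors of our own choosing (they are illustrations, not constructions from the paper):

```python
import numpy as np

n = 5
one = np.ones(n)

# A vector satisfying (54): e.g. the initialization e_i / n.
v_ok = np.eye(n)[0] / n
assert np.isclose(v_ok @ v_ok, (v_ok @ one) / n)                       # property (54)
assert np.isclose(np.sum((v_ok - one / n) ** 2), 1 / n - v_ok @ v_ok)  # identity (60)

# A vector violating (54): rescaling as in Lemma 7.6 strictly reduces the error (55).
v_bad = np.array([0.3, 0.1, 0.0, 0.2, 0.0])
assert not np.isclose(v_bad @ v_bad, (v_bad @ one) / n)
w = v_bad * (v_bad @ one) / (n * (v_bad @ v_bad))
assert np.sum((w - one / n) ** 2) < np.sum((v_bad - one / n) ** 2)
```

This mirrors the proof of Lemma 7.6: any candidate violating (54) can be improved within its own span, so it cannot solve (53).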
Next assume that exactly two vectors in the set $\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}$ are linearly dependent. In this case (53) reduces to

$\hat v_i(t_1^{ij}(+)) = \arg\min_{\tilde v \in \operatorname{span}\{e_i, v\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = \arg\min_{(a,b):\ \tilde v = a \frac{1}{n} e_i + b v} \big\|a \tfrac{1}{n} e_i + b v - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad (65)$

where $v \in \{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij})\}$ is linearly independent of $e_i$. The objective function in (65) is

$f(a, b) = \big\|a \tfrac{1}{n} e_i + b v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = a^2 \big\|\tfrac{1}{n} e_i\big\|^2 + b^2 \|v\|^2 + 2ab \tfrac{1}{n} e_i' v - 2a \tfrac{1}{n} e_i' \tfrac{1}{n}\mathbf 1_n - 2b v' \tfrac{1}{n}\mathbf 1_n + \tfrac{1}{n}. \quad (66)$

Lemma 7.6 implies that $v$ satisfies (54). Note also that $\tfrac{1}{n} e_i$ satisfies (54), thus the objective function (66) can be simplified,

$f(a, b) = (a^2 - 2a) \big\|\tfrac{1}{n} e_i\big\|^2 + (b^2 - 2b) \|v\|^2 + 2ab \tfrac{1}{n} e_i' v + \tfrac{1}{n}.$

The first-order partial derivatives of $f(a, b)$ are,

$\tfrac{\partial f(a,b)}{\partial a} = 2(a - 1) \big\|\tfrac{1}{n} e_i\big\|^2 + 2b \tfrac{1}{n} e_i' v, \qquad \tfrac{\partial f(a,b)}{\partial b} = 2(b - 1) \|v\|^2 + 2a \tfrac{1}{n} e_i' v. \quad (67)$

The second-order partial derivatives of $f(a, b)$ are,

$\tfrac{\partial^2 f(a,b)}{\partial a^2} = 2 \big\|\tfrac{1}{n} e_i\big\|^2, \qquad \tfrac{\partial^2 f(a,b)}{\partial b^2} = 2 \|v\|^2, \qquad \tfrac{\partial^2 f(a,b)}{\partial a \partial b} = \tfrac{\partial^2 f(a,b)}{\partial b \partial a} = 2 \tfrac{1}{n} e_i' v.$

The determinant of the Hessian matrix of $f(a, b)$ is thus,

$\big|H\big(f(a, b)\big)\big| = 4 \Big(\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2\Big). \quad (68)$

Since $\tfrac{1}{n} e_i$ and $v$ are linearly independent, the determinant (68) is strictly positive by the Cauchy-Schwarz inequality. This implies the Hessian matrix $H(f(a, b))$ is positive-definite, thus setting the RHS of (67) to zero and solving for $(a, b)$ yields the unique optimal values $(\hat a, \hat b)$,

$\hat a = \dfrac{\|v\|^2 \, \tfrac{1}{n} e_i' \big(\tfrac{1}{n} e_i - v\big)}{\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2}, \qquad \hat b = \dfrac{\big\|\tfrac{1}{n} e_i\big\|^2 \, v' \big(v - \tfrac{1}{n} e_i\big)}{\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2}.$
(69)

From (69) the unique solution to (65) is thus obtained,

$\hat v_i(t_1^{ij}(+)) = \dfrac{\|v\|^2 \, \tfrac{1}{n} e_i' \big(\tfrac{1}{n} e_i - v\big)}{\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2} \, \tfrac{1}{n} e_i + \dfrac{\big\|\tfrac{1}{n} e_i\big\|^2 \, v' \big(v - \tfrac{1}{n} e_i\big)}{\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2} \, v. \quad (70)$

Based on (70) the element $\hat v_{ii}(t_1^{ij}(+))$ can be expressed,

$\hat v_{ii}(t_1^{ij}(+)) = \dfrac{\|v\|^2 \, \tfrac{1}{n} e_i' \big(\tfrac{1}{n} e_i - v\big)}{\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2} \, \tfrac{1}{n} + \dfrac{\big\|\tfrac{1}{n} e_i\big\|^2 \, v' \big(v - \tfrac{1}{n} e_i\big)}{\big\|\tfrac{1}{n} e_i\big\|^2 \|v\|^2 - \big(\tfrac{1}{n} e_i' v\big)^2} \, v_i$
$= \dfrac{\|v\|^2 \big(\tfrac{1}{n^2} - \tfrac{1}{n} v_i\big) \tfrac{1}{n} + \tfrac{1}{n^2} \big(\|v\|^2 - \tfrac{1}{n} v_i\big) v_i}{\tfrac{1}{n^2} \|v\|^2 - \tfrac{1}{n^2} v_i^2} = \dfrac{\tfrac{1}{n} \|v\|^2 - v_i \|v\|^2 + v_i \|v\|^2 - \tfrac{1}{n} v_i^2}{\|v\|^2 - v_i^2} = \tfrac{1}{n}. \quad (71)$

Note that the last equality in (71) follows since Lemma 7.8 implies $\|v\|^2 \ne v_i^2$ under the assumption that $v$ satisfies (54) and is linearly independent of $\tfrac{1}{n} e_i$.

Next assume that $\hat v_i(t_1^{ij})$, $\hat v_j(t_0^{ij})$ and $e_i$ are linearly independent. In this case the solution (17) can be expressed,

$\hat v_i(t_1^{ij}(+)) = V_{(DA)} \big(V'_{(DA)} V_{(DA)}\big)^{-1} V'_{(DA)} \tfrac{1}{n}\mathbf 1_n, \qquad V_{(DA)} = \big[\hat v_i(t_1^{ij}),\ \hat v_j(t_0^{ij}),\ \tfrac{1}{n} e_i\big].$

For notational convenience we denote $\hat v_i = \hat v_i(t_1^{ij})$ and $\hat v_j = \hat v_j(t_0^{ij})$.
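To illustrate (69)−(71), the following sketch compares the closed-form coefficients with a direct least-squares projection; the vector choices are ours ($v$ is built to satisfy (54) and be linearly independent of $e_i$), not constructions from the paper:

```python
import numpy as np

n, i = 5, 0
one = np.ones(n)
u = np.eye(n)[i] / n                       # (1/n) e_i
v = (np.eye(n)[2] + np.eye(n)[3]) / n      # satisfies (54), independent of e_i

# Closed-form optimum (69)-(70)
D = (u @ u) * (v @ v) - (u @ v) ** 2
a = (v @ v) * (u @ (u - v)) / D
b = (u @ u) * (v @ (v - u)) / D
update = a * u + b * v

# Direct least-squares projection of (1/n)1_n onto span{u, v}
A = np.column_stack([u, v])
coef, *_ = np.linalg.lstsq(A, one / n, rcond=None)
direct = A @ coef

assert np.allclose(update, direct)
assert np.isclose(update[i], 1 / n)        # the "zero local error" property (71)
```

The second assertion is exactly the conclusion of (71): the $i$-th entry of the updated estimate stays pinned at $1/n$.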
Note that $V'_{(DA)} V_{(DA)}$ has the inverse (72) below,

$\big(V'_{(DA)} V_{(DA)}\big)^{-1} = \begin{bmatrix} \|\hat v_i\|^2 & \hat v_i' \hat v_j & \tfrac{1}{n} \hat v_{ii} \\ \hat v_i' \hat v_j & \|\hat v_j\|^2 & \tfrac{1}{n} \hat v_{ji} \\ \tfrac{1}{n} \hat v_{ii} & \tfrac{1}{n} \hat v_{ji} & \tfrac{1}{n^2} \end{bmatrix}^{-1}$
$= \dfrac{1}{\big|V'_{(DA)} V_{(DA)}\big|} \begin{bmatrix} \tfrac{1}{n^2}\big(\|\hat v_j\|^2 - \hat v_{ji}^2\big) & \tfrac{1}{n^2}\big(\hat v_{ii} \hat v_{ji} - \hat v_i' \hat v_j\big) & \tfrac{1}{n}\big(\hat v_{ji} \hat v_i' \hat v_j - \|\hat v_j\|^2 \hat v_{ii}\big) \\ \tfrac{1}{n^2}\big(\hat v_{ii} \hat v_{ji} - \hat v_i' \hat v_j\big) & \tfrac{1}{n^2}\big(\|\hat v_i\|^2 - \hat v_{ii}^2\big) & \tfrac{1}{n}\big(\hat v_{ii} \hat v_i' \hat v_j - \|\hat v_i\|^2 \hat v_{ji}\big) \\ \tfrac{1}{n}\big(\hat v_{ji} \hat v_i' \hat v_j - \|\hat v_j\|^2 \hat v_{ii}\big) & \tfrac{1}{n}\big(\hat v_{ii} \hat v_i' \hat v_j - \|\hat v_i\|^2 \hat v_{ji}\big) & \|\hat v_i\|^2 \|\hat v_j\|^2 - \big(\hat v_i' \hat v_j\big)^2 \end{bmatrix} \quad (72)$

where the determinant $|V'_{(DA)} V_{(DA)}|$ can be computed as,

$\big|V'_{(DA)} V_{(DA)}\big| = \tfrac{1}{n^2} \Big(\|\hat v_i\|^2 \|\hat v_j\|^2 - \big(\hat v_i' \hat v_j\big)^2 + 2 \hat v_{ii} \hat v_{ji} \hat v_i' \hat v_j - \hat v_{ii}^2 \|\hat v_j\|^2 - \|\hat v_i\|^2 \hat v_{ji}^2\Big).$

Right-multiplying (72) by $V'_{(DA)} \tfrac{1}{n}\mathbf 1_n$ and then left-multiplying by the first row of $V_{(DA)}$ yields $\hat v_{ii}(t_1^{ij}(+))$,

$\hat v_{ii}(t_1^{ij}(+)) = \dfrac{1}{\big|V'_{(DA)} V_{(DA)}\big|} \tfrac{1}{n^2} \Big( \|\hat v_i\|^2 \|\hat v_j\|^2 \hat v_{ii} - \|\hat v_i\|^2 \hat v_{ji}^2 \hat v_{ii} + \hat v_{ii}^2 \hat v_{ji} \|\hat v_j\|^2 - \hat v_{ii} \hat v_i' \hat v_j \|\hat v_j\|^2 + \tfrac{1}{n} \hat v_{ji} \hat v_i' \hat v_j \hat v_{ii} - \tfrac{1}{n} \hat v_{ii}^2 \|\hat v_j\|^2 + \|\hat v_i\|^2 \hat v_{ii} \hat v_{ji}^2 - \hat v_i' \hat v_j \|\hat v_i\|^2 \hat v_{ji} + \|\hat v_i\|^2 \|\hat v_j\|^2 \hat v_{ji} - \hat v_{ii}^2 \|\hat v_j\|^2 \hat v_{ji} + \tfrac{1}{n} \hat v_{ii} \hat v_{ji} \hat v_i' \hat v_j - \tfrac{1}{n} \|\hat v_i\|^2 \hat v_{ji}^2 + \|\hat v_i\|^2 \hat v_i' \hat v_j \hat v_{ji} - \|\hat v_i\|^2 \|\hat v_j\|^2 \hat v_{ii} + \hat v_{ii} \hat v_i' \hat v_j \|\hat v_j\|^2 - \|\hat v_i\|^2 \|\hat v_j\|^2 \hat v_{ji} + \tfrac{1}{n} \|\hat v_i\|^2 \|\hat v_j\|^2 - \tfrac{1}{n} \big(\hat v_i' \hat v_j\big)^2 \Big)$
$= \dfrac{1}{\big|V'_{(DA)} V_{(DA)}\big|} \tfrac{1}{n^3} \Big(\|\hat v_i\|^2 \|\hat v_j\|^2 - \big(\hat v_i' \hat v_j\big)^2 + 2 \hat v_{ii} \hat v_{ji} \hat v_i' \hat v_j - \hat v_{ii}^2 \|\hat v_j\|^2 - \|\hat v_i\|^2 \hat v_{ji}^2\Big) = \tfrac{1}{n}. \quad (73)$

The last line in (73) follows since $|V'_{(DA)} V_{(DA)}|$ is non-zero if $e_i$, $\hat v_i$ and $\hat v_j$ are linearly independent.
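The DA update (17)/(53) can also be simulated directly. The sketch below is our own construction (random instantaneous directed signals on a fully connected graph, in the spirit of the numerical examples described in the abstract); it applies the least-squares update repeatedly and checks the zero local error property (64), the monotone decrease of the network error (Lemma 7.7), and that the error approaches zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
V_hat = np.eye(n) / n                        # initialization (45): v_i(0) = e_i / n

def da_update(vi, vj, i):
    """Least-squares projection of (1/n)1_n onto span{vi, vj, e_i} -- update (53)."""
    A = np.column_stack([vi, vj, np.eye(n)[i]])
    coef, *_ = np.linalg.lstsq(A, np.ones(n) / n, rcond=None)
    return A @ coef

def err(M):
    """Network normal consensus squared error, cf. (44)."""
    return np.sum((M - 1.0 / n) ** 2)

prev = err(V_hat)
for _ in range(300):
    i, j = rng.choice(n, size=2, replace=False)   # random signal j -> i, no delay
    V_hat[i] = da_update(V_hat[i], V_hat[j], i)
    assert np.allclose(np.diag(V_hat), 1 / n)     # zero local error (Lemma 7.9)
    assert err(V_hat) <= prev + 1e-12             # monotone decrease (Lemma 7.7)
    prev = err(V_hat)

assert err(V_hat) < 1e-3                          # approach average-consensus
```

The final tolerance is an empirical observation for this random sequence, not a rate claim; the lemmas above only guarantee asymptotic convergence under recurring strong connectivity.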
Lemma 7.10: (DA Lower Bound on Signal Reduction in Error) Upon reception of any signal $S_{ij}(t_0^{ij}, t_1^{ij})$ the decrease in the updated normal consensus squared error has the following lower bound,

$E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij})\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij}(+))\big) \ge \max\Big\{\big\|\hat v_j(t_0^{ij})\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2,\ n \big(\hat v_j(t_0^{ij})' \big(\hat v_j(t_0^{ij}) - \hat v_i(t_1^{ij})\big)\big)^2\Big\}. \quad (74)$

Proof: By Lemma 7.6 we can apply (60) to the left-hand side (LHS) of (74),

$E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij})\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij}(+))\big) = \big\|\hat v_i(t_1^{ij}(+))\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2. \quad (75)$

Applying the first line of (62) to the RHS of (75) then yields,

$E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij})\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij}(+))\big) \ge \max\Big\{\big\|\hat v_i(t_1^{ij})\big\|^2,\ \big\|\hat v_j(t_0^{ij})\big\|^2\Big\} - \big\|\hat v_i(t_1^{ij})\big\|^2$
$\ge \max\Big\{0,\ \big\|\hat v_j(t_0^{ij})\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2\Big\} \ge \big\|\hat v_j(t_0^{ij})\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2. \quad (76)$

Next observe that for any two vectors $\hat v_i, \hat v_j$, $\operatorname{span}\{\hat v_i, \hat v_j\} \subseteq \operatorname{span}\{\hat v_i, \hat v_j, e_i\}$. It thus follows that,

$E_{\frac{1}{n}\mathbf 1_n}\Big(\arg\min_{\tilde v \in \operatorname{span}\{\hat v_i, \hat v_j, e_i\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2\Big) \le E_{\frac{1}{n}\mathbf 1_n}\Big(\arg\min_{\tilde v \in \operatorname{span}\{\hat v_i, \hat v_j\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2\Big). \quad (77)$

Let $\hat v_i = \hat v_i(t_1^{ij})$, $\hat v_j = \hat v_j(t_0^{ij})$ and $\hat v_i(+) = \hat v_i(t_1^{ij}(+))$ for notational convenience. From (77) we have,

$E^2_{\frac{1}{n}\mathbf 1_n}(\hat v_i) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(+)\big) \ge \Big\|\arg\min_{\tilde v \in \operatorname{span}\{\hat v_i, \hat v_j\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2\Big\|^2 - \|\hat v_i\|^2$
$= \|\hat w\|^2 - \|\hat v_i\|^2$
$= \tfrac{1}{n} \big(\hat w' \mathbf 1_n - \hat v_i' \mathbf 1_n\big), \quad (78)$

where the last line holds due to Lemma 7.6, and $\hat w$ is defined,

$\hat w = \arg\min_{\tilde v \in \operatorname{span}\{\hat v_i, \hat v_j\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = \arg\min_{(a,b):\ \tilde v = a \hat v_i + b \hat v_j} \big\|a \hat v_i + b \hat v_j - \tfrac{1}{n}\mathbf 1_n\big\|^2. \quad (79)$

Note that Lemma 7.7 together with the initialization (45) implies $\hat v_i$ and $\hat v_j$ are non-zero. Next assume that $\hat v_i$ and $\hat v_j$ are linearly dependent.
In this case (79) reduces to,

$\hat w = \arg\min_{\tilde v \in \operatorname{span}\{\hat v_i\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2 = V (V'V)^{-1} V' \tfrac{1}{n}\mathbf 1_n, \quad V = [\hat v_i],$
$\phantom{\hat w} = \hat v_i \dfrac{\hat v_i' \mathbf 1_n}{n \|\hat v_i\|^2} = \hat v_i, \quad (80)$

where the last equality follows due to Lemma 7.6. Applying (80) to (78) implies,

$E^2_{\frac{1}{n}\mathbf 1_n}(\hat v_i) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(+)\big) \ge \|\hat w\|^2 - \|\hat v_i\|^2 = \|\hat v_i\|^2 - \|\hat v_i\|^2 = 0 = n \big(\|\hat v_j\|^2 - \hat v_i' \hat v_j\big)^2, \quad (81)$

where the last equality follows by Lemma 7.8, since Lemma 7.6 implies both $\hat v_i$ and $\hat v_j$ satisfy (54) and we are assuming $\hat v_i$ and $\hat v_j$ are linearly dependent.

Next assume $\hat v_i$ and $\hat v_j$ are linearly independent. In this case (79) can be solved analogously to (70), based on the optimization problem (65),

$\hat w = \arg\min_{(a,b):\ \tilde v = a \hat v_i + b \hat v_j} \big\|a \hat v_i + b \hat v_j - \tfrac{1}{n}\mathbf 1_n\big\|^2$
$= \dfrac{\|\hat v_j\|^2 \hat v_i' (\hat v_i - \hat v_j)}{\|\hat v_i\|^2 \|\hat v_j\|^2 - (\hat v_i' \hat v_j)^2} \hat v_i + \dfrac{\|\hat v_i\|^2 \hat v_j' (\hat v_j - \hat v_i)}{\|\hat v_i\|^2 \|\hat v_j\|^2 - (\hat v_i' \hat v_j)^2} \hat v_j. \quad (82)$

Substituting the second line of (82) for $\hat w$ in the third line of (78) then yields,

$\tfrac{1}{n} \big(\hat w' \mathbf 1_n - \hat v_i' \mathbf 1_n\big) = \tfrac{1}{n} \Big( \dfrac{\|\hat v_j\|^2 \hat v_i' (\hat v_i - \hat v_j)\, \hat v_i' \mathbf 1_n + \|\hat v_i\|^2 \hat v_j' (\hat v_j - \hat v_i)\, \hat v_j' \mathbf 1_n}{\|\hat v_i\|^2 \|\hat v_j\|^2 - (\hat v_i' \hat v_j)^2} - \hat v_i' \mathbf 1_n \Big)$
$= \dfrac{\|\hat v_j\|^2 \|\hat v_i\|^2 \hat v_i' (\hat v_i - \hat v_j) + \|\hat v_i\|^2 \|\hat v_j\|^2 \hat v_j' (\hat v_j - \hat v_i) - \big(\|\hat v_i\|^2 \|\hat v_j\|^2 - (\hat v_i' \hat v_j)^2\big) \|\hat v_i\|^2}{\|\hat v_i\|^2 \|\hat v_j\|^2 - (\hat v_i' \hat v_j)^2}$
$= \dfrac{\|\hat v_i\|^2}{\|\hat v_i\|^2 \|\hat v_j\|^2 - (\hat v_i' \hat v_j)^2} \Big(\|\hat v_j\|^2 \big(\|\hat v_i\|^2 - \hat v_i' \hat v_j\big) + \|\hat v_j\|^2 \big(\|\hat v_j\|^2 - \hat v_i' \hat v_j\big) - \|\hat v_i\|^2 \|\hat v_j\|^2 + (\hat v_i' \hat v_j)^2\Big)$
$\ge \Big(\|\hat v_j\|^4 - 2 \hat v_i' \hat v_j \|\hat v_j\|^2 + (\hat v_i' \hat v_j)^2\Big) \dfrac{\|\hat v_i\|^2}{\|\hat v_i\|^2 \|\hat v_j\|^2} = \big(\|\hat v_j\|^2 - \hat v_i' \hat v_j\big)^2 \dfrac{1}{\|\hat v_j\|^2} \ge n \big(\|\hat v_j\|^2 - \hat v_i' \hat v_j\big)^2, \quad (83)$

where the last inequality follows since $E^2_{\frac{1}{n}\mathbf 1_n}(\hat v_j) \ge 0$ implies $\|\hat v_j\|^2 \le \tfrac{1}{n}$. Combining (78) and (83) implies,

$E^2_{\frac{1}{n}\mathbf 1_n}(\hat v_i) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(+)\big) \ge n \big(\|\hat v_j\|^2 - \hat v_i' \hat v_j\big)^2.$
(84)

Together (76), (81) and (84) imply (74).

Lemma 7.11: (DA Non-Decreasing Magnitude for any Communication Path) Any communication path $C^{ij}_{[t_0(ij), t_1(ij)]} \in \mathcal C_{[0,t_1]}$ implies,

$\big\|\hat v_i(t_1(ij)(+))\big\|^2 \ge \big\|\hat v_j(t_0(ij))\big\|^2. \quad (85)$

Proof: The first line in (62) implies that for any signal $S_{ij}(t_0^{ij}, t_1^{ij})$, $\|\hat v_i(t_1^{ij}(+))\|^2 \ge \|\hat v_j(t_0^{ij})\|^2$. Thus for any communication path $C^{ij}_{[t_0(ij), t_1(ij)]}$ defined as in (33),

$\big\|\hat v_i(t_1^{i \ell_{k(ij)}}(+))\big\|^2 \ge \big\|\hat v_{\ell_{k(ij)}}(t_0^{i \ell_{k(ij)}})\big\|^2 \ge \big\|\hat v_{\ell_{k(ij)}}(t_1^{\ell_{k(ij)} \ell_{k(ij)-1}}(+))\big\|^2 \ge \big\|\hat v_{\ell_{k(ij)-1}}(t_0^{\ell_{k(ij)} \ell_{k(ij)-1}})\big\|^2 \ge \big\|\hat v_{\ell_{k(ij)-1}}(t_1^{\ell_{k(ij)-1} \ell_{k(ij)-2}}(+))\big\|^2 \ge \cdots \ge \big\|\hat v_{\ell_1}(t_0^{\ell_2 \ell_1})\big\|^2 \ge \big\|\hat v_{\ell_1}(t_1^{\ell_1 j}(+))\big\|^2 \ge \big\|\hat v_j(t_0^{\ell_1 j})\big\|^2,$

where every second inequality holds due to Lemma 7.7 since $t_0^{\ell_{q+1} \ell_q} > t_1^{\ell_q \ell_{q-1}}$ for each $q = 1, 2, \ldots, k(ij)$, where $\ell_0 = j$ and $\ell_{k(ij)+1} = i$. We then obtain (85), again by Lemma 7.7, since $\|\hat v_i(t_1(ij)(+))\|^2 \ge \|\hat v_i(t_1^{i \ell_{k(ij)}}(+))\|^2$ and $\|\hat v_j(t_0^{\ell_1 j})\|^2 \ge \|\hat v_j(t_0(ij))\|^2$ because $t_1(ij) \ge t_1^{i \ell_{k(ij)}}$ and $t_0(ij) \le t_0^{\ell_1 j}$ respectively.

Lemma 7.12: (Error Expression for $\mathcal C_{[0,t_1]}$ satisfying (35)) For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35), the total reduction in normal consensus squared error $E^2(\mathcal C_{[0,t_1]})$ defined in (42) is,

$E^2(\mathcal C_{[0,t_1]}) = \sum_{i=1}^n \Big(E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(0)\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1(+))\big)\Big)$
$= \dfrac{n-1}{n} - \sum_{i=1}^n E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1(+))\big)$
$= \sum_{\ell \in \mathbb N} E^2\big(\mathcal C_{[t_0^\ell, t_1^\ell]}\big), \quad \mathcal C_{[t_0^\ell, t_1^\ell]} \in \text{SVSC},$
$\le \dfrac{n-1}{n}. \quad (86)$

Proof: The first line follows from (A7), (A8) and the definitions (41)−(43). The second line in (86) is due to the initialization (45).
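The second line of (86) follows because under the initialization (45) each node starts with squared error $\|\tfrac{1}{n} e_i - \tfrac{1}{n}\mathbf 1_n\|^2 = (n-1)/n^2$, which sums over the $n$ nodes to $(n-1)/n$. A minimal numerical check (numpy; the value of $n$ is chosen arbitrarily):

```python
import numpy as np

n = 7
V0 = np.eye(n) / n                    # initialization (45): rows v_i(0) = e_i / n
total = np.sum((V0 - 1.0 / n) ** 2)   # sum_i ||v_i(0) - (1/n)1_n||^2
assert np.isclose(total, (n - 1) / n)
```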
The third line in (86) follows since any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) can be partitioned into an infinite number of disjoint sequences $\mathcal C_{[t_0^\ell, t_1^\ell]} \in$ SVSC. The last line in (86) follows since the minimum error of any normal consensus estimate is $0$.

Lemma 7.13: (Vanishing Reduction in Error for $\mathcal C_{[0,t_1]}$ satisfying (35)) For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) there exists an integer $\ell_\varepsilon$ such that,

$E^2\big(\mathcal C_{[t_0^\ell, t_1^\ell]}\big) \le \varepsilon, \quad \forall \ell \ge \ell_\varepsilon, \quad (87)$

for any $\varepsilon > 0$, where $E^2(\mathcal C_{[t_0^\ell, t_1^\ell]})$ is defined by (42).

Proof: The third line of (86) implies that for any $\mathcal C_{[0,t_1]}$ satisfying (35) the quantity $E^2(\mathcal C_{[0,t_1]})$ is the sum of an infinite number of non-negative terms $E^2(\mathcal C_{[t_0^\ell, t_1^\ell]})$; the fourth line in (86) implies $E^2(\mathcal C_{[0,t_1]})$ is bounded above; thus (87) follows by the monotonic sequence theorem.

Lemma 7.14: (DA Lower Bound on Reduction in Error for any $\mathcal C_{[t_0,t_1]}$ satisfying (34)) Any communication sequence $\mathcal C_{[t_0,t_1]}$ satisfying (34) implies,

$E^2(\mathcal C_{[t_0,t_1]}) \ge n \Big(\min_{i \in \mathcal V} \big\{\|\hat v_i(t_1(+))\|^2\big\} - \max_{i \in \mathcal V} \big\{\|\hat v_i(t_0)\|^2\big\}\Big),$
$\ge 0. \quad (88)$

Proof: The first line in (88) holds for any communication sequence $\mathcal C_{[t_0,t_1]}$,

$E^2(\mathcal C_{[t_0,t_1]}) = \sum_{i=1}^n \Big(E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_0)\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1(+))\big)\Big) = \sum_{i=1}^n \Big(\|\hat v_i(t_1(+))\|^2 - \|\hat v_i(t_0)\|^2\Big) \ge n \Big(\min_{i \in \mathcal V} \big\{\|\hat v_i(t_1(+))\|^2\big\} - \max_{i \in \mathcal V} \big\{\|\hat v_i(t_0)\|^2\big\}\Big).$

To prove the second line in (88) it is required that $\mathcal C_{[t_0,t_1]}$ satisfies (34). We define,

$\underline\ell = \arg\min_{i \in \mathcal V} \big\{\|\hat v_i(t_1(+))\|^2\big\}, \qquad \overline\ell = \arg\max_{i \in \mathcal V} \big\{\|\hat v_i(t_0)\|^2\big\}.$

Since $\mathcal C_{[t_0,t_1]}$ satisfies (34) there exists a communication path $C^{\underline\ell\, \overline\ell}_{[t_0(\underline\ell\, \overline\ell),\ t_1(\underline\ell\, \overline\ell)]} \in \mathcal C_{[t_0,t_1]}$. Lemma 7.11 then implies,

$\big\|\hat v_{\underline\ell}(t_1(\underline\ell\, \overline\ell)(+))\big\|^2 \ge \big\|\hat v_{\overline\ell}(t_0(\underline\ell\, \overline\ell))\big\|^2.$
The second line in (88) then follows because,

$\min_{i \in \mathcal V} \big\{\|\hat v_i(t_1(+))\|^2\big\} = \big\|\hat v_{\underline\ell}(t_1(+))\big\|^2 \ge \big\|\hat v_{\underline\ell}(t_1(\underline\ell\, \overline\ell)(+))\big\|^2 \ge \big\|\hat v_{\overline\ell}(t_0(\underline\ell\, \overline\ell))\big\|^2 \ge \big\|\hat v_{\overline\ell}(t_0)\big\|^2 = \max_{i \in \mathcal V} \big\{\|\hat v_i(t_0)\|^2\big\},$

where the first and third inequalities hold due to Lemma 7.7 because $t_1 \ge t_1(\underline\ell\, \overline\ell)$ and $t_0(\underline\ell\, \overline\ell) \ge t_0$ respectively.

Lemma 7.15: (DA Local Convergence of Normal Consensus Estimates) For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) there exists an integer $\ell_\chi$ such that for all $\ell \ge \ell_\chi$,

$\big\|\hat v_i(t_1^{ij}) - \hat v_j(t_0^{ij})\big\|^2 \le \chi, \quad \forall S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}, \quad (89)$

for any $\chi > 0$.

Proof: For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35), Lemma 7.13 implies there exists an integer $\ell_\varepsilon$ such that (87) holds for any $\varepsilon > 0$. For all $\ell \ge \ell_\varepsilon$ we thus have for any signal $S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}$,

$\big\|\hat v_i(t_1^{ij})\big\|^2 - \big\|\hat v_j(t_0^{ij})\big\|^2 \le \big\|\hat v_i(t_1^\ell(+))\big\|^2 - \big\|\hat v_j(t_0^\ell)\big\|^2 \le \sum_{r=1}^n \Big(\big\|\hat v_r(t_1^\ell(+))\big\|^2 - \big\|\hat v_r(t_0^\ell)\big\|^2\Big) = E^2\big(\mathcal C_{[t_0^\ell, t_1^\ell]}\big) \le \varepsilon. \quad (90)$

Note that the first inequality in (90) holds by Lemma 7.7 since $t_0^{ij} \ge t_0^\ell$ and $t_1^{ij} \le t_1^\ell$ for any $S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}$. The second inequality in (90) holds since,

$\sum_{r=1}^n \Big(\big\|\hat v_r(t_1^\ell(+))\big\|^2 - \big\|\hat v_r(t_0^\ell)\big\|^2\Big) = \Big(\big\|\hat v_i(t_1^\ell(+))\big\|^2 - \big\|\hat v_j(t_0^\ell)\big\|^2\Big) + \Big(\big\|\hat v_j(t_1^\ell(+))\big\|^2 - \big\|\hat v_i(t_0^\ell)\big\|^2\Big) + \sum_{r \in \mathcal V \setminus \{i,j\}} \Big(\big\|\hat v_r(t_1^\ell(+))\big\|^2 - \big\|\hat v_r(t_0^\ell)\big\|^2\Big), \quad (91)$

where

$\sum_{r \in \mathcal V \setminus \{i,j\}} \Big(\big\|\hat v_r(t_1^\ell(+))\big\|^2 - \big\|\hat v_r(t_0^\ell)\big\|^2\Big) \ge 0 \quad (92)$

holds due to Lemma 7.7, and,

$\big\|\hat v_j(t_1^\ell(+))\big\|^2 - \big\|\hat v_i(t_0^\ell)\big\|^2 \ge \min_{r \in \mathcal V} \big\{\|\hat v_r(t_1^\ell(+))\|^2\big\} - \max_{r \in \mathcal V} \big\{\|\hat v_r(t_0^\ell)\|^2\big\} \ge 0, \quad (93)$

where the second inequality in (93) holds due to Lemma 7.14 since $\mathcal C_{[t_0^\ell, t_1^\ell]} \in$ SVSC and thus satisfies (34). Together (91), (92) and (93) imply the second inequality in (90).

Applying Lemma 7.10 to Lemma 7.13 implies that for all $S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}$ and $\ell \ge \ell_\varepsilon$,

$\varepsilon \ge E^2\big(\mathcal C_{[t_0^\ell, t_1^\ell]}\big) \ge E^2\big(S_{ij}(t_0^{ij}, t_1^{ij})\big) = E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij})\big) - E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij}(+))\big) \ge \max\Big\{\big\|\hat v_j(t_0^{ij})\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2,\ n \big(\hat v_j(t_0^{ij})' (\hat v_j(t_0^{ij}) - \hat v_i(t_1^{ij}))\big)^2\Big\}, \quad (94)$

for any $\varepsilon > 0$. For notational convenience denote $\hat v_i = \hat v_i(t_1^{ij})$ and $\hat v_j = \hat v_j(t_0^{ij})$. Combining (90) and (94) implies that for any $\varepsilon > 0$ there exists an integer $\ell_\varepsilon$ such that,

$\|\hat v_i\|^2 - \|\hat v_j\|^2 \le \varepsilon, \qquad \hat v_j'(\hat v_j - \hat v_i) \le \sqrt{\varepsilon / n}, \quad (95)$

for any $S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}$ and $\ell \ge \ell_\varepsilon$. To obtain (89) note that (95) implies,

$\|\hat v_i - \hat v_j\|^2 = \|\hat v_i\|^2 - 2 \hat v_i' \hat v_j + \|\hat v_j\|^2 = \|\hat v_i\|^2 - \|\hat v_j\|^2 + 2 \hat v_j'(\hat v_j - \hat v_i) \le \varepsilon + 2\sqrt{\varepsilon / n} \le \sqrt{\varepsilon} \big(1 + 2/\sqrt{n}\big), \quad \forall \varepsilon \in (0, 1]. \quad (96)$

We thus define $\varepsilon(\chi)$,

$\varepsilon(\chi) = \Big(\dfrac{\chi}{1 + 2/\sqrt{n}}\Big)^2. \quad (97)$

For any $\chi \in (0, 1)$ and $\varepsilon \in (0, \varepsilon(\chi)]$ the result (89) then follows from (96).

Lemma 7.16: (DA Properties of the Normal Consensus Update) Upon reception of any signal $S_{ij}(t_0^{ij}, t_1^{ij})$ the normal consensus estimate $\hat v_i(t_1^{ij}(+))$ that results from the update problem (53) will satisfy,

$\hat v_i(t_1^{ij})' \big(\hat v_i(t_1^{ij}) - \hat v_i(t_1^{ij}(+))\big) \le 0. \quad (98)$

Proof: Let us define $\tilde w$,

$\tilde w = \arg\min_{\tilde v \in \operatorname{span}\{\hat v_i(t_1^{ij}(+)),\ \hat v_i(t_1^{ij}),\ e_i\}} \big\|\tilde v - \tfrac{1}{n}\mathbf 1_n\big\|^2, \quad (99)$

where $\hat v_i(t_1^{ij}(+))$ is given by (53). Note that $\hat v_i(t_1^{ij}(+)) \in \operatorname{span}\{\hat v_i(t_1^{ij}), \hat v_j(t_0^{ij}), e_i\}$ implies,

$\operatorname{span}\big\{\hat v_i(t_1^{ij}(+)),\ \hat v_i(t_1^{ij}),\ e_i\big\} \subseteq \operatorname{span}\big\{\hat v_i(t_1^{ij}),\ \hat v_j(t_0^{ij}),\ e_i\big\}. \quad (100)$

From (100) we have,

$E_{\frac{1}{n}\mathbf 1_n}(\tilde w) \ge E_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij}(+))\big), \quad (101)$

and thus combining (101) and (60) implies,

$\|\tilde w\|^2 \le \big\|\hat v_i(t_1^{ij}(+))\big\|^2.$
(102)

Next observe that since $\tilde w$ is defined by (99), if (102) is applied to the signal $S_{ii}(t_1^{ij}, t_1^{ij}(+))$ then,

$E^2\big(S_{ii}(t_1^{ij}, t_1^{ij}(+))\big) \equiv E^2_{\frac{1}{n}\mathbf 1_n}\big(\hat v_i(t_1^{ij}(+))\big) - E^2_{\frac{1}{n}\mathbf 1_n}(\tilde w) = \|\tilde w\|^2 - \big\|\hat v_i(t_1^{ij}(+))\big\|^2 \le 0. \quad (103)$

Applying Lemma 7.10 to the signal $S_{ii}(t_1^{ij}, t_1^{ij}(+))$ then implies,

$0 \ge E^2\big(S_{ii}(t_1^{ij}, t_1^{ij}(+))\big)$
$\ge \max\Big\{\big\|\hat v_i(t_1^{ij})\big\|^2 - \big\|\hat v_i(t_1^{ij}(+))\big\|^2,\ n \big(\hat v_i(t_1^{ij})' (\hat v_i(t_1^{ij}) - \hat v_i(t_1^{ij}(+)))\big)^2\Big\}, \quad (104)$

where the first line follows from (103) and the last line implies (98).

Lemma 7.17: (DA Vanishing Change in Normal Consensus Update) For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) there exists an integer $\ell_\varepsilon$ such that for all $\ell \ge \ell_\varepsilon$,

$\big\|\hat v_i(t_1^{ij}(+)) - \hat v_i(t_1^{ij})\big\|^2 \le \varepsilon, \quad \forall S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}, \quad (105)$

for any $\varepsilon > 0$.

Proof: Recall that Lemma 7.13 implies that for any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) there exists an integer $\ell_\varepsilon$ such that (87) holds for any $\varepsilon > 0$; we thus observe for $\ell \ge \ell_\varepsilon$,

$\varepsilon \ge E^2\big(\mathcal C_{[t_0^\ell, t_1^\ell]}\big) \ge E^2\big(S_{ij}(t_0^{ij}, t_1^{ij})\big) = \big\|\hat v_i(t_1^{ij}(+))\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2, \quad \forall S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}. \quad (106)$

For all $\ell \ge \ell_\varepsilon$ and signals $S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}$ we then have,

$\big\|\hat v_i(t_1^{ij}(+)) - \hat v_i(t_1^{ij})\big\|^2 = \big\|\hat v_i(t_1^{ij}(+))\big\|^2 - \big\|\hat v_i(t_1^{ij})\big\|^2 + 2 \hat v_i(t_1^{ij})' \big(\hat v_i(t_1^{ij}) - \hat v_i(t_1^{ij}(+))\big) \le E^2\big(S_{ij}(t_0^{ij}, t_1^{ij})\big) \le \varepsilon,$

where the first inequality follows from Lemma 7.16 and the second inequality from (106).

Lemma 7.18: (DA Vanishing Change between Normal Consensus Update and Signal) For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) there exists an integer $\ell_\gamma$ such that,

$\big\|\hat v_i(t_1^{ij}(+)) - \hat v_j(t_0^{ij})\big\|^2 \le \gamma, \quad \forall S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}, \quad (107)$

for all $\ell \ge \ell_\gamma$ and any $\gamma > 0$.
Proof: For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35), Lemma 7.15 implies that (89) holds for any $\chi \in (0, 1)$ and $\varepsilon \in (0, \varepsilon(\chi)]$, where $\varepsilon(\chi)$ is given by (97). Lemma 7.17 implies that (105) holds for any $\varepsilon > 0$, thus for any $\ell \ge \ell_{\varepsilon(\chi)}$ and $S_{ij}(t_0^{ij}, t_1^{ij}) \in \mathcal C_{[t_0^\ell, t_1^\ell]}$ the triangle inequality then implies,

$\sqrt{\big\|\hat v_i(t_1^{ij}(+)) - \hat v_j(t_0^{ij})\big\|^2} \le \sqrt{\big\|\hat v_i(t_1^{ij}(+)) - \hat v_i(t_1^{ij})\big\|^2} + \sqrt{\big\|\hat v_i(t_1^{ij}) - \hat v_j(t_0^{ij})\big\|^2} \le \sqrt{\varepsilon(\chi)} + \sqrt{\chi} \le 2\sqrt{\chi}, \quad \forall \chi \in (0, 1).$

Any $\chi \in (0, \gamma/4]$ and $\varepsilon \in (0, \varepsilon(\chi)]$ thus yields (107) for any $\gamma \in (0, 4]$.

Lemma 7.19: (DA Network Convergence to Average-Consensus) For any communication sequence $\mathcal C_{[0,t_1]}$ satisfying (35) there exists an integer $\ell_\xi$ such that for all $\ell \ge \ell_\xi$,

$\sum_{i=1}^n \big\|\hat v_i(t_1^\ell(+)) - \tfrac{1}{n}\mathbf 1_n\big\|^2 \le \xi, \quad \forall \mathcal C_{[t_0^\ell, t_1^\ell]} \in \mathcal C_{[0,t_1]}, \quad (108)$

for any $\xi > 0$.

Proof: Since $\mathcal C_{[0,t_1]}$ satisfies (35) we have $\mathcal C_{[t_0^\ell, t_1^\ell]} \in$ SVSC for each $\ell \in \mathbb N$, thus there exists a communication path $C^{ij}_{[t_0(ij), t_1(ij)]} \in \mathcal C_{[t_0^\ell, t_1^\ell]}$ for any $i \in \mathcal V$, $j \in \mathcal V_{-i}$, and $\ell \in \mathbb N$. For any $i \in \mathcal V$, $j \in \mathcal V_{-i}$ and $\ell \in \mathbb N$ the triangle inequality then implies,

$\sqrt{\big(\hat v_{ij}(t_1^\ell(+)) - \hat v_{jj}(t_0^{\ell_1 j})\big)^2} \le \sum_{S_{rp}(t_0^{rp}, t_1^{rp}) \in C^{ij}_{[t_0(ij), t_1(ij)]}} \sqrt{\big(\hat v_{rj}(t_1^{rp}(+)) - \hat v_{pj}(t_0^{rp})\big)^2} + \sum_{q=1}^{k(ij)+1} \sum_{S_{rp}(t_0^{rp}, t_1^{rp}) \in Q_{\ell_q}(ij)} \sqrt{\big(\hat v_{rj}(t_1^{rp}(+)) - \hat v_{rj}(t_1^{rp})\big)^2} \quad (109)$

where we define $\bar{\mathcal C}_{[t_0^\ell, t_1^\ell]} = \mathcal C_{[t_0^\ell, t_1^\ell]} \setminus C^{ij}_{[t_0(ij), t_1(ij)]}$ and,

$Q_{\ell_q}(ij) = \big\{S_{\ell_q m}(t_0^{\ell_q m}, t_1^{\ell_q m}) \in \bar{\mathcal C}_{[t_0^\ell, t_1^\ell]} : t_1^{\ell_q m} \in (t_1^{\ell_q \ell_{q-1}},\ t_0^{\ell_{q+1} \ell_q})\big\}, \quad q = 1, \ldots, k(ij),$

where $\ell_0 = j$, $\ell_{k(ij)+1} = i$, and

$Q_{\ell_{k(ij)+1}}(ij) = \big\{S_{im}(t_0^{im}, t_1^{im}) \in \mathcal C_{[t_0^\ell, t_1^\ell]} : t_1^{im} > t_1(ij)\big\}.$
(110)
We clarify that the RHS of (109) includes the differences between the received normal consensus vector $\hat v_{\ell_{q-1}}(t^{\ell_q \ell_{q-1}}_0)$ and the updated normal consensus vector $\hat v_{\ell_q}(t^{\ell_q \ell_{q-1}}_1(+))$ that result from each signal contained in the communication path $C^{ij}_{[t_0(ij), t_1(ij)]} \in C_{[t^\ell_0,t^\ell_1]}$. Each set $Q_{\ell_q}(ij)$ defined in (110) contains the signals received at each node after the respective signal in the communication path was received, but before the next signal in the communication path was sent, as is required for an application of the triangle inequality. The set $Q_{\ell_{k(ij)+1}}(ij)$ contains the signals received at node $i$ after the last signal in the communication path $C^{ij}_{[t_0(ij), t_1(ij)]}$ was received, but before or at the end of the communication sequence $C_{[t^\ell_0,t^\ell_1]}$; again this is required for application of the triangle inequality.

For any communication sequence $C_{[0,t_1]}$ satisfying (35), Lemma 7.17 implies there exists an integer $\ell_\varepsilon$ such that (105) holds for any $\varepsilon > 0$. Likewise, Lemma 7.18 implies there exists an integer $\ell_\gamma$ such that (107) holds for any $\gamma > 0$. Thus for any $\gamma \in (0,4]$, if we let $\chi \in (0, \gamma/4]$ and $\varepsilon \in (0, \varepsilon(\chi)]$ then for any $\ell \ge \ell_{\varepsilon(\chi)}$,

$$\sqrt{\|\hat v_{ij}(t^\ell_1(+)) - \hat v_{jj}(t^{\ell_1 j}_0)\|^2} \le \big|C^{ij}_{[t_0(ij),t_1(ij)]}\big|\sqrt{\gamma} + \sum_{q=1}^{k(ij)+1} |Q_{\ell_q}(ij)| \sqrt{\varepsilon(\chi)} \le \Big(\big|C^{ij}_{[t_0(ij),t_1(ij)]}\big| + \sum_{q=1}^{k(ij)+1} |Q_{\ell_q}(ij)|\Big)\sqrt{\gamma} \le \big|C_{[t^\ell_0,t^\ell_1]}\big|\sqrt{\gamma}, \tag{111}$$

where the second inequality holds since $\varepsilon(\chi) < \gamma$, and the last inequality holds since every signal contained in $C_{[t^\ell_0,t^\ell_1]}$ is represented by at most one term on the RHS of (109), that is,

$$\big|C_{[t^\ell_0,t^\ell_1]}\big| \ge \big|C^{ij}_{[t_0(ij),t_1(ij)]}\big| + \sum_{q=1}^{k(ij)+1} |Q_{\ell_q}(ij)|.$$
Due to (111), any $\xi > 0$ and $\gamma \in \big(0,\, \xi/(n|C_{[t^\ell_0,t^\ell_1]}|)^2\big]$ then implies,

$$\big(\hat v_{ij}(t^\ell_1(+)) - \hat v_{jj}(t^{\ell_1 j}_0)\big)^2 = \Big(\hat v_{ij}(t^\ell_1(+)) - \frac{1}{n}\Big)^2 \le \xi/n^2, \tag{112}$$

for any $(i,j) \in V^2$, where the first equality holds due to Lemma 7.9. The result (108) then follows from (112) since

$$\sum_{i=1}^n \Big\|\hat v_i(t^\ell_1(+)) - \frac{1}{n}1_n\Big\|^2 = \sum_{i=1}^n \sum_{j=1}^n \big(\hat v_{ij}(t^\ell_1(+)) - \hat v_{jj}(t^{\ell_1 j}_0)\big)^2 \le n^2\big(\xi/n^2\big),$$

where the first equality again holds due to Lemma 7.9.

Proof. (Theorem 4.7) Lemmas 7.20-7.22.

Overview of Proof. Lemma 7.20 derives the OH normal consensus update (22). As stated in Lemma 7.21, the update (22) implies the error of each element of every normal consensus vector is a non-increasing function of time, and thus (22) implies the conditions stated in Lemma 7.22 are sufficient and necessary for any node to obtain average-consensus. Theorem 4.7 then follows immediately from Lemma 7.22.

Lemma 7.20: (OH Normal Consensus Estimate Update) Applying (20) and (21) to the optimization problem (9) yields the OH normal consensus estimate update $\hat v_i(t^{ij}_1(+))$ defined in (22).

Proof: If $\hat v_j(t^{ij}_0) = \frac{1}{n}1_n$ then $0 = S^1_{ij}(t^{ij}_0, t^{ij}_1)$; in this case (20) together with (8) implies (48). The update (9) can thus be re-written as (49). Since $\hat s_j(t^{ij}_0) = \bar s(0)$ is known we can let $\hat s_i(t^{ij}_1(+)) = \bar s(0)$ and thus obtain $\hat v_i(t^{ij}_1(+)) = \frac{1}{n}1_n$ as the unique solution to (49). Note that this coincides with (22). Next assume that $\hat v_j(t^{ij}_0) \neq \frac{1}{n}1_n$ and thus $0 \neq S^1_{ij}(t^{ij}_0, t^{ij}_1)$. Under (20) and (21) we can re-write (9) as,

$$\hat v_i(t^{ij}_1(+)) = \arg\min_{\tilde v} \Big\|\tilde v - \frac{1}{n}1_n\Big\|^2, \quad \text{s.t. (8) holds, given } \hat s_i(t^{ij}_1) = S\hat v_i(t^{ij}_1),\ s_j(0) = Se_j,\ s_i(0) = Se_i.$$
(113)
Given that $\hat s_i(t^{ij}_1) = S\hat v_i(t^{ij}_1)$, $s_j(0) = Se_j$, and $s_i(0) = Se_i$ are known, the set of vectors $\hat v_i(t^{ij}_1(+))$ for which (8) holds is $\mathrm{span}\{\hat v_i(t^{ij}_1), e_j, e_i\}$, thus (113) can be re-written as,

$$\hat v_i(t^{ij}_1(+)) = \arg\min_{\tilde v \in \mathrm{span}\{\hat v_i(t^{ij}_1),\, e_j,\, e_i\}} \Big\|\tilde v - \frac{1}{n}1_n\Big\|^2. \tag{114}$$

If $\hat v_i(t^{ij}_1) \in \mathrm{span}\{e_j, e_i\}$ then (114) becomes,

$$\hat v_i(t^{ij}_1(+)) = \arg\min_{\tilde v \in \mathrm{span}\{e_j,\, e_i\}} \Big\|\tilde v - \frac{1}{n}1_n\Big\|^2 = V_{(OH1)} V^{+}_{(OH1)} \frac{1}{n}1_n, \qquad V_{(OH1)} = [e_j, e_i].$$

Since $e_i$ is linearly independent of $e_j$ we then have,

$$\hat v_i(t^{ij}_1(+)) = V_{(OH1)}\big(V_{(OH1)}'V_{(OH1)}\big)^{-1}V_{(OH1)}'\frac{1}{n}1_n = \frac{1}{n} V_{(OH1)}(I_2)^{-1}1_2 = \frac{1}{n}(e_i + e_j). \tag{115}$$

Note that if $\hat v_i(t^{ij}_1) \in \mathrm{span}\{e_j, e_i\}$ then the last line in (115) coincides with (22). Next assume $\hat v_i(t^{ij}_1) \notin \mathrm{span}\{e_j, e_i\}$. In this case (114) can be expressed,

$$\hat v_i(t^{ij}_1(+)) = V_{(OH)} V^{+}_{(OH)} \frac{1}{n}1_n = V_{(OH)}\big(V_{(OH)}'V_{(OH)}\big)^{-1}V_{(OH)}'\frac{1}{n}1_n, \qquad V_{(OH)} = \Big[\hat v_i(t^{ij}_1),\ \frac{1}{n}e_j,\ \frac{1}{n}e_i\Big].$$

Recall the discrete set of vectors $R^n_{0,\frac{1}{n}}$ is defined in (25). Note that the initialization (45) implies $\hat v_{ii}(0) = \frac{1}{n}$ and thus $\hat v_i(0) \in R^n_{0,\frac{1}{n}}$. Also note that (115) implies $\hat v_{ii}(t^{ij}_1(+)) = \frac{1}{n}$ and $\hat v_i(t^{ij}_1(+)) \in R^n_{0,\frac{1}{n}}$. We thus assume,

$$\hat v_{ii}(t^{ij}_1) = \frac{1}{n}, \qquad \hat v_i(t^{ij}_1) \in R^n_{0,\frac{1}{n}}. \tag{116}$$

Observe that under (116), if the result (22) is proven then the assumptions (116) are valid. For notational convenience denote $\hat v_i = \hat v_i(t^{ij}_1)$.
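The projection step behind (114)-(115) is easy to check numerically. The short sketch below is our own illustration (the helper name `oh_projection` and the use of a pseudoinverse are assumptions, not the paper's notation): it projects the target $\frac{1}{n}1_n$ onto $\mathrm{span}\{\hat v_i, e_j, e_i\}$ and, when $\hat v_i \in \mathrm{span}\{e_i, e_j\}$, recovers $\frac{1}{n}(e_i + e_j)$ as in (115).

```python
import numpy as np

def oh_projection(v_i, i, j, n):
    """Least-squares projection of the target (1/n)1_n onto
    span{v_i, e_j, e_i}, as in (114). The Moore-Penrose pseudoinverse
    handles the rank-deficient case v_i in span{e_i, e_j} as well as
    the generic full-rank case."""
    e_i, e_j = np.eye(n)[i], np.eye(n)[j]
    V = np.column_stack([v_i, e_j, e_i])
    target = np.full(n, 1.0 / n)
    # V @ pinv(V) is the orthogonal projector onto the column space of V.
    return V @ np.linalg.pinv(V) @ target

# With v_i = (1/n)e_i (so v_i lies in span{e_i, e_j}), the projection
# reduces to (1/n)(e_i + e_j), matching (115).
n = 4
v_plus = oh_projection(np.eye(n)[0] / n, i=0, j=1, n=n)
```

The pseudoinverse makes the rank-deficient and full-rank branches of the proof a single computation, which is convenient for checking special cases like (115).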
Given (116), the matrix $(V_{(OH)}'V_{(OH)})$ has the inverse (117) below,

$$\big(V_{(OH)}'V_{(OH)}\big)^{-1} = \left(n^{-4}\begin{bmatrix} \hat v_i'1_n & 1 & v_{ij} \\ 1 & 1 & 0 \\ v_{ij} & 0 & 1 \end{bmatrix}\right)^{-1} = \frac{n^{-4}}{\big|V_{(OH)}'V_{(OH)}\big|}\begin{bmatrix} 1 & -1 & -v_{ij} \\ -1 & \hat v_i'1_n - v_{ij} & v_{ij} \\ -v_{ij} & v_{ij} & \hat v_i'1_n - 1 \end{bmatrix} \tag{117}$$

where the determinant $\big|V_{(OH)}'V_{(OH)}\big|$ can be computed as,

$$\big|V_{(OH)}'V_{(OH)}\big| = n^{-6}\big(\hat v_i'1_n - 1 - v_{ij}\big).$$

Next observe that,

$$V_{(OH)}'\frac{1}{n}1_n = \frac{1}{n^2}\begin{bmatrix} \hat v_i'1_n \\ 1 \\ 1 \end{bmatrix}. \tag{118}$$

From (118) we observe that right-multiplying (117) by $V_{(OH)}'\frac{1}{n}1_n$ and left-multiplying by $V_{(OH)}$ then yields (22),

$$V_{(OH)}\big(V_{(OH)}'V_{(OH)}\big)^{-1}V_{(OH)}'\frac{1}{n}1_n = \frac{1}{\hat v_i'1_n - 1 - \hat v_{ij}}\, V_{(OH)}\begin{bmatrix} \hat v_i'1_n - 1 - \hat v_{ij} \\ 0 \\ \hat v_i'1_n(1 - \hat v_{ij}) + \hat v_{ij} - 1 \end{bmatrix} = \frac{1}{n} v_{ij}(t^{ij}_1, t^{ij}_0).$$

Lemma 7.21: (OH Element-Wise Non-Increasing Error of the Normal Consensus Estimate) The error $\big(\hat v_{i\ell}(t) - 1/n\big)^2$ of each element $\hat v_{i\ell}(t)$ is a non-increasing function of $t \ge 0$ for all $(i,\ell) \in V^2$.

Proof: The result follows immediately from the normal consensus update (22).

Lemma 7.22: (OH Normal Consensus Estimate Convergence to Average-Consensus) Under the OH algorithm, any node $i \in V$ obtains average-consensus by time $t_1$ for some communication sequence $C_{[0,t_1)}$ iff at least one of the following two conditions holds:

• (C1i): there is a signal $S_{ij}(t^{ij}_0, t^{ij}_1) \in C_{[0,t_1)}$ for each $j \in V_{-i}$.
• (C2i): there is a communication path $C^{i\ell}_{[t_0(i\ell), t_1(i\ell)]} \in C_{[0,t_1)}$ from at least one node $\ell$ such that $S_{\ell j}(t^{\ell j}_0, t^{\ell j}_1) \in C_{[0, t_0(i\ell))}$ for all $j \in V_{-\ell}$.

Proof: (Sufficiency.)
If a communication sequence $C_{[0,t_1)}$ implies (C1i) for some node $i$, then under the update (22) there will exist a time $t^{ij}_1 \in [0, t_1)$ such that $\hat v_{ij}(t^{ij}_1(+)) = \frac{1}{n}$ for each $j \in V_{-i}$; thus Lemma 7.21 together with (45) imply $\hat v_i(t_1) = \frac{1}{n}1_n$, and hence by (8) node $i$ reaches average-consensus by time $t_1$. If a communication sequence $C_{[0,t_1)}$ implies (C2i), then by the previous reasoning node $\ell$ will obtain average-consensus by time $t_0(i\ell)$, and thus by the update (22) any node $j \in V$ to which $\ell$ sends a signal $S_{j\ell}(t^{j\ell}_0, t^{j\ell}_1) \in C_{[t_0(i\ell), t_1)}$ will obtain average-consensus at $t^{j\ell}_1(+)$. If node $\ell$ has a communication path $C^{i\ell}_{[t_0(i\ell), t_1(i\ell)]} \in C_{[0,t_1)}$ to node $i$, it then follows that node $i$ will have obtained average-consensus by time $t_1(i\ell)(+)$, and thus by Lemma 7.21 node $i$ obtains average-consensus at time $t_1$.

(Necessity.) Under the OH update (22), if there is no signal $S_{ij}(t^{ij}_0, t^{ij}_1) \in C_{[0,t_1)}$ then there will not exist a time $t \in [0, t_1)$ such that $\hat v_{ij}(t) = \frac{1}{n}$, unless there exists a communication path $C^{i\ell}_{[t_0(i\ell), t_1(i\ell)]} \in C_{[0,t_1)}$ from some node $\ell$ such that $S_{\ell j}(t^{\ell j}_0, t^{\ell j}_1) \in C_{[0, t_0(i\ell))}$ for all $j \in V_{-\ell}$. It thus follows that node $i$ cannot obtain average-consensus by time $t_1$ for any communication sequence $C_{[0,t_1)}$ that implies neither (C1i) nor (C2i).

Theorem 4.7 (OH Network Convergence to Average-Consensus)

Proof: The condition (C) stated in Theorem 4.7 is equivalent to either (C1i) or (C2i) holding for each node $i \in V$; the result thus follows immediately from Lemma 7.22.

Proof. (Theorem 4.9) Lemmas 7.23-7.35.

Overview of Proof.
Similar to Theorem 4.5, we show that every normal consensus estimate $\hat v_i(t)$ satisfies the normalization property (54) and the "zero local error" property (64), and furthermore the discretization $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$. Lemma 7.28 and Lemma 7.29 show that the update (123) respectively implies that the error of each normal consensus estimate is non-increasing with time, and that the normal consensus estimate will not change unless there is a reduction in error. We then show, analogous to Lemma 7.13, that the reduction in error resulting from any signal will eventually vanish if $C_{[0,t_1]}$ satisfies (37). Due to the discretization $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$, this implies the reduction in error resulting from any signal will eventually strictly equal zero if $C_{[0,t_1]}$ satisfies (37); see Lemma 7.35. When this occurs we can show that $\hat v_i(t) = \frac{1}{n}1_n$ by utilizing Lemma 7.30 and Lemma 7.31 together with the "zero local error" property (64); hence (44) holds, and so by Definition 7.1 a network average-consensus is obtained.

Lemma 7.23: (DDA Normal Consensus Estimate Discretization) Every normal consensus estimate $\hat v_i(t)$ satisfies $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$ for all $i \in V$ and $t \ge 0$.

Proof: Note that the initialization (45) implies $\hat v_i(0) \in R^n_{0,\frac{1}{n}}$. The optimization problem (26) requires that any solution $\hat v_i(t(+))$ satisfies $\hat v_i(t(+)) \in R^n_{0,\frac{1}{n}}$. Under the DDA algorithm, the assumption (A7) implies every normal consensus estimate remains fixed unless updated via (26); it thus follows that $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$ for all $i \in V$ and $t \ge 0$.

Lemma 7.24: (DDA Consensus Estimate Normalization) Every normal consensus estimate $\hat v_i(t)$ satisfies (54), and furthermore,

$$\|\hat v^{-i}_i(t)\|^2 = \frac{1}{n}\hat v^{-i}_i(t)'1_{n-1}, \quad \forall i \in V,\ \forall t \ge 0.$$
(119)
Proof: Lemma 7.23 implies,

$$\frac{1}{n}\hat v_i(t)'1_n = \frac{1}{n}\sum_{\ell=1}^{n}\hat v_{i\ell}(t) = \frac{1}{n}\sum_{\ell \in V:\ \hat v_{i\ell}(t) = \frac{1}{n}}\Big(\frac{1}{n}\Big) = \sum_{\ell \in V:\ \hat v_{i\ell}(t) = \frac{1}{n}}\Big(\frac{1}{n^2}\Big) = \sum_{\ell=1}^{n}\hat v_{i\ell}(t)^2 = \|\hat v_i(t)\|^2. \tag{120}$$

If $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$ then $\hat v^{-i}_i(t) \in R^{n-1}_{0,\frac{1}{n}}$, thus a similar argument to (120) implies (119).

Lemma 7.25: (DDA Local Zero Error Property) Every normal consensus estimate $\hat v_i(t)$ satisfies (64).

Proof: Lemma 7.2 implies $\hat v_{ii}(0) = \frac{1}{n}$ for each $i \in V$. Next observe that under (A7) the estimate $\hat v_i(t)$ will not change unless a signal $S_{ij}(t^{ij}_0, t^{ij}_1)$ is received at node $i$. If a signal is received then $\hat v_i(t)$ is updated by (123) given below. We now show that under the assumption (64), the solution $\hat v_i(t^{ij}_1(+))$ specified by (123) will imply (64) for every set of vectors $\{e_i, \hat v_i(t^{ij}_1), \hat v_j(t^{ij}_0)\}$. If $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) = 0$ then $\hat v_i(t^{ij}_1(+)) = \hat v_i(t^{ij}_1) + \hat v_j(t^{ij}_0) - \hat v_{ji}(t^{ij}_0)e_i$ and thus,

$$\hat v_{ii}(t^{ij}_1(+)) = \hat v_{ii}(t^{ij}_1) + \hat v_{ji}(t^{ij}_0) - \hat v_{ji}(t^{ij}_0) = \frac{1}{n}.$$

If $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0$ and $\|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2$ then $\hat v_i(t^{ij}_1(+)) = \hat v_j(t^{ij}_0) + e_i\big(\frac{1}{n} - \hat v_{ji}(t^{ij}_0)\big)$ and thus,

$$\hat v_{ii}(t^{ij}_1(+)) = \hat v_{ji}(t^{ij}_0) + \frac{1}{n} - \hat v_{ji}(t^{ij}_0) = \frac{1}{n}.$$

Finally, if $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0$ and $\|\hat v^{-i}_i(t^{ij}_1)\|^2 \ge \|\hat v^{-i}_j(t^{ij}_0)\|^2$ then $\hat v_i(t^{ij}_1(+)) = \hat v_i(t^{ij}_1)$ and thus (64) follows by assumption.

Lemma 7.26: (DDA Normal Consensus Estimate Magnitude Equivalence to Error) For any two normal consensus estimates $\hat v_i(t)$ and $\hat v_j(t)$,

$$\|\hat v_i(t)\|^2 \ge \|\hat v_j(t)\|^2 \iff E^2_{\frac{1}{n}1_n}\big(\hat v_i(t)\big) \le E^2_{\frac{1}{n}1_n}\big(\hat v_j(t)\big).$$
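The normalization identity (120) can be checked directly for vectors in $R^n_{0,\frac{1}{n}}$. The sketch below is our own illustrative check (not from the paper): each entry of value $1/n$ contributes $1/n^2$ to both the squared norm and the scaled sum.

```python
import numpy as np

# A vector in the discrete set R^n_{0,1/n}: each entry is 0 or 1/n
# (Lemma 7.23). We verify the normalization property (120):
#   ||v||^2 = (1/n) v' 1_n.
n = 6
rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=n) / n   # random {0, 1/n}-valued vector

lhs = float(np.dot(v, v))            # ||v||^2
rhs = float(v.sum() / n)             # (1/n) v' 1_n
```

Because both sides count the support of $v$ times $1/n^2$, the identity holds for every such vector, which is the content of Lemma 7.24.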
(121)
Proof: For any normal consensus estimate $\hat v_i(t)$,

$$E^2_{\frac{1}{n}1_n}\big(\hat v_i(t)\big) = \Big\|\hat v_i(t) - \frac{1}{n}1_n\Big\|^2 = \|\hat v_i(t)\|^2 - \frac{2}{n}\hat v_i(t)'1_n + \frac{1}{n} = \frac{1}{n} - \|\hat v_i(t)\|^2, \tag{122}$$

where the third equality holds due to Lemma 7.24. The equivalence (121) then follows directly from (122).

Lemma 7.27: (DDA Normal Consensus Estimate Update) Applying (27) and (28) to the optimization problem (26) implies the DDA normal consensus estimate update $\hat v_i(t^{ij}_1(+))$ can be defined as in (29)-(30).

Proof: Note that (29)-(30) implies,

$$\hat v_i(t^{ij}_1(+)) = \begin{cases} \hat v_i(t^{ij}_1) + \hat v_j(t^{ij}_0) - \hat v_{ji}(t^{ij}_0)e_i, & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) = 0, \\[2pt] \hat v_j(t^{ij}_0) + e_i\big(\frac{1}{n} - \hat v_{ji}(t^{ij}_0)\big), & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2, \\[2pt] \hat v_i(t^{ij}_1), & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 \ge \|\hat v^{-i}_j(t^{ij}_0)\|^2. \end{cases} \tag{123}$$

We thus will prove that (27) and (28) imply the optimization problem (26) yields the DDA normal consensus estimate update $\hat v_i(t^{ij}_1(+))$ defined by (123). Under (27) and (28) we can re-write (26) as,

$$\hat v_i(t^{ij}_1(+)) = \arg\min_{\tilde v}\Big\|\tilde v - \frac{1}{n}1_n\Big\|^2, \quad \text{s.t. (8) holds},\ \tilde v \in R^n_{0,\frac{1}{n}},\ \text{given } \hat s_i(t^{ij}_1) = S\hat v_i(t^{ij}_1),\ \hat s_j(t^{ij}_0) = S\hat v_j(t^{ij}_0),\ s_i(0) = Se_i. \tag{124}$$

Given that $\hat s_i(t^{ij}_1) = S\hat v_i(t^{ij}_1)$, $\hat s_j(t^{ij}_0) = S\hat v_j(t^{ij}_0)$, and $s_i(0) = Se_i$ are known, the set of vectors $\hat v_i(t^{ij}_1(+))$ for which (8) holds is $\mathrm{span}\{\hat v_i(t^{ij}_1), \hat v_j(t^{ij}_0), e_i\}$, thus (124) can be re-written as

$$\hat v_i(t^{ij}_1(+)) = \arg\min_{\tilde v \in \mathrm{span}\{\hat v_i(t^{ij}_1),\, \hat v_j(t^{ij}_0),\, e_i\} \cap R^n_{0,\frac{1}{n}}}\Big\|\tilde v - \frac{1}{n}1_n\Big\|^2 = \arg\min_{(a,b,c):\ \tilde v = a\hat v_i(t^{ij}_1) + b\hat v_j(t^{ij}_0) + c\frac{1}{n}e_i \in R^n_{0,\frac{1}{n}}}\Big\|\tilde v - \frac{1}{n}1_n\Big\|^2.$$
(125)
For notational convenience denote $\hat v_i = \hat v_i(t^{ij}_1)$, $\hat v_j = \hat v_j(t^{ij}_0)$, and $\hat v_i(t^{ij}_1(+)) = \hat v_i(+)$. Note that the constraint in (125) can be expressed as follows,

$$a\hat v_{i\ell} + b\hat v_{j\ell} + c\frac{1}{n}e_{i\ell} \in \Big\{0, \frac{1}{n}\Big\}, \quad \forall \ell \in V. \tag{126}$$

Due to Lemma 7.25 the $i$-th constraint in (126) is,

$$a\frac{1}{n} + b\hat v_{ji} + c\frac{1}{n} \in \Big\{0, \frac{1}{n}\Big\} \;\Rightarrow\; c \in \big\{-(a + bn\hat v_{ji}),\ 1 - (a + bn\hat v_{ji})\big\}.$$

If $c = -(a + bn\hat v_{ji})$ we then have the candidate update $\hat v^{(1)}_i(+)$,

$$\hat v^{(1)}_i(+) = a\hat v_i + b\hat v_j - \frac{1}{n}e_i(a + bn\hat v_{ji}) \;\Rightarrow\; \|\hat v^{(1)}_i(+)\|^2 = \frac{1}{n}\hat v^{(1)}_i(+)'1_n = a\|\hat v_i\|^2 + b\|\hat v_j\|^2 - \frac{a}{n^2} - \frac{b\hat v_{ji}}{n},$$

where the magnitude $\|\hat v^{(1)}_i(+)\|^2$ is computed using Lemma 7.24. If $c = 1 - (a + bn\hat v_{ji})$ we then have the candidate update $\hat v^{(2)}_i(+)$,

$$\hat v^{(2)}_i(+) = a\hat v_i + b\hat v_j + \frac{1}{n}e_i\big(1 - a - bn\hat v_{ji}\big) \;\Rightarrow\; \|\hat v^{(2)}_i(+)\|^2 = \frac{1}{n}\hat v^{(2)}_i(+)'1_n = a\|\hat v_i\|^2 + b\|\hat v_j\|^2 + \frac{1}{n^2} - \frac{a}{n^2} - \frac{b\hat v_{ji}}{n} = \|\hat v^{(1)}_i(+)\|^2 + \frac{1}{n^2}. \tag{127}$$

Since $e_{i\ell} = 0$ for all $\ell \neq i$, the $i$-th constraint of (126) is the only constraint involving the optimization variable $c$; thus by applying Lemma 7.26 it follows from (127) that if $c \neq 1 - (a + bn\hat v_{ji})$ then the resulting solution $\hat v_i(+)$ cannot be a solution to (125), hence $\hat c = 1 - (a + bn\hat v_{ji})$.
Next observe that if $\hat v_{i\ell} = \hat v_{j\ell} = 0$ then the $\ell$-th constraint in (126) places no restriction on $a$ or $b$; thus due to Lemma 7.23 and Lemma 7.25 we can consider the three possible scenarios posed by (126), given the $i$-th constraint is satisfied by (127): one constraint,

$$(M1A)\quad a(0) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{0, 1\}, \qquad (M1B)\quad a\big(\tfrac{1}{n}\big) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{1-a,\ -a\},$$

two constraints,

$$(M2A)\quad a(0) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{0,1\}, \quad a\big(\tfrac{1}{n}\big) + b(0) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow a \in \{0,1\},$$
$$(M2B)\quad a(0) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{0,1\}, \quad a\big(\tfrac{1}{n}\big) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow a \in \{1-b,\ -b\},$$
$$(M2C)\quad a\big(\tfrac{1}{n}\big) + b(0) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow a \in \{0,1\}, \quad a\big(\tfrac{1}{n}\big) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{1-a,\ -a\},$$

or three constraints,

$$(M3)\quad a(0) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{0,1\}, \quad a\big(\tfrac{1}{n}\big) + b(0) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow a \in \{0,1\}, \quad a\big(\tfrac{1}{n}\big) + b\big(\tfrac{1}{n}\big) \in \big\{0, \tfrac{1}{n}\big\} \Rightarrow b \in \{1-a,\ -a\}.$$

If $\hat v^{-i\,\prime}_i\hat v^{-i}_j = 0$ and there is one constraint, then Lemma 7.25 implies $\hat v^{-i}_i = 0$ and hence (M1A). In this case we have $\hat v_i = \frac{1}{n}e_i$ and the following candidate solutions,

$$\hat v_i(+) = \begin{cases} a\frac{1}{n}e_i + \frac{1}{n}e_i(1-a) = \frac{1}{n}e_i, & (b=0) \\[2pt] a\frac{1}{n}e_i + \hat v_j + \frac{1}{n}e_i(1 - a - n\hat v_{ji}), & (b=1) \end{cases} \tag{128}$$

where the second line in (128) simplifies to $\hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji})$. The solutions in (128) possess the following magnitudes,

$$\|\hat v_i(+)\|^2 = \begin{cases} \frac{1}{n^2}, & (b=0) \\[2pt] \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (b=1) \end{cases}$$

and thus, since Lemma 7.25 implies $\|\hat v^{-i}_j\|^2 > 0$, the optimal solution is,

$$\hat v_i(+) = \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}) = \hat v_i + \hat v_j - e_i\hat v_{ji}, \tag{129}$$

where the last equality follows under the given assumption $\hat v_i = \frac{1}{n}e_i$. If $\hat v^{-i\,\prime}_i\hat v^{-i}_j = 0$ and there are two constraints then (M2A) necessarily follows.
In this case $\|\hat v^{-i}_i\|^2 > 0$ and we have the candidate solutions,

$$\hat v_i(+) = \begin{cases} \frac{1}{n}e_i, & (a=0, b=0) \\[2pt] \hat v_i, & (a=1, b=0) \\[2pt] \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}), & (a=0, b=1) \\[2pt] \hat v_i + \hat v_j - e_i\hat v_{ji}, & (a=1, b=1). \end{cases} \tag{130}$$

The solutions in (130) possess the following magnitudes,

$$\|\hat v_i(+)\|^2 = \begin{cases} \frac{1}{n^2}, & (a=0,b=0) \\[2pt] \|\hat v^{-i}_i\|^2 + \frac{1}{n^2}, & (a=1,b=0) \\[2pt] \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=0,b=1) \\[2pt] \|\hat v^{-i}_i\|^2 + \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=1,b=1) \end{cases}$$

and thus, since Lemma 7.25 implies $\|\hat v^{-i}_j\|^2 > 0$, the optimal solution is,

$$\hat v_i(+) = \hat v_i + \hat v_j - e_i\hat v_{ji}. \tag{131}$$

If $\hat v^{-i\,\prime}_i\hat v^{-i}_j > 0$ and there is one constraint then we necessarily have $\hat v^{-i}_i = \hat v^{-i}_j$ and thus (M1B). In this case the candidate solutions are,

$$\hat v_i(+) = \begin{cases} a\hat v_i + (1-a)\hat v_j + \frac{1}{n}e_i\big(1 - a - (1-a)n\hat v_{ji}\big) \\[2pt] a\hat v_i - a\hat v_j + \frac{1}{n}e_i\big(1 - a + a\,n\hat v_{ji}\big). \end{cases} \tag{132}$$

Note that the first and second lines in (132) correspond respectively to $b = 1-a$ and $b = -a$, and that each simplifies respectively to $\hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji})$ and $\frac{1}{n}e_i$. The solutions in (132) thus possess the following magnitudes,

$$\|\hat v_i(+)\|^2 = \begin{cases} \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (b = 1-a) \\[2pt] \frac{1}{n^2}, & (b = -a), \end{cases}$$

and since Lemma 7.25 implies $\|\hat v^{-i}_j\|^2 > 0$, the optimal solution is,

$$\hat v_i(+) = \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}) = \hat v_i, \tag{133}$$

where the last equality follows under the assumption $\hat v^{-i}_i = \hat v^{-i}_j$. If $\hat v^{-i\,\prime}_i\hat v^{-i}_j > 0$ and there are two constraints, then if $\|\hat v^{-i}_i\|^2 < \|\hat v^{-i}_j\|^2$ it follows that (M2B) holds. The candidate solutions in this case are,

$$\hat v_i(+) = \begin{cases} \frac{1}{n}e_i, & (a=0, b=0) \\[2pt] \hat v_i, & (a=1, b=0) \\[2pt] -\hat v_i + \hat v_j + \frac{1}{n}e_i\big(2 - n\hat v_{ji}\big), & (a=-1, b=1) \\[2pt] \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}), & (a=0, b=1). \end{cases}$$
(134)
The solutions in (134) possess the following magnitudes,

$$\|\hat v_i(+)\|^2 = \begin{cases} \frac{1}{n^2}, & (a=0,b=0) \\[2pt] \|\hat v^{-i}_i\|^2 + \frac{1}{n^2}, & (a=1,b=0) \\[2pt] -\|\hat v^{-i}_i\|^2 + \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=-1,b=1) \\[2pt] \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=0,b=1). \end{cases}$$

The assumption $\|\hat v^{-i}_j\|^2 > \|\hat v^{-i}_i\|^2$ then implies the optimal solution,

$$\hat v_i(+) = \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}). \tag{135}$$

If $\hat v^{-i\,\prime}_i\hat v^{-i}_j > 0$ and there are two constraints, then if $\|\hat v^{-i}_i\|^2 \ge \|\hat v^{-i}_j\|^2$ it follows that (M2C) holds. The candidate solutions in this case are,

$$\hat v_i(+) = \begin{cases} \frac{1}{n}e_i, & (a=0, b=0) \\[2pt] \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}), & (a=0, b=1) \\[2pt] \hat v_i - \hat v_j + e_i\hat v_{ji}, & (a=1, b=-1) \\[2pt] \hat v_i, & (a=1, b=0). \end{cases} \tag{136}$$

The solutions in (136) possess the following magnitudes,

$$\|\hat v_i(+)\|^2 = \begin{cases} \frac{1}{n^2}, & (a=0,b=0) \\[2pt] \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=0,b=1) \\[2pt] \|\hat v^{-i}_i\|^2 - \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=1,b=-1) \\[2pt] \|\hat v^{-i}_i\|^2 + \frac{1}{n^2}, & (a=1,b=0). \end{cases}$$

The assumption $\|\hat v^{-i}_j\|^2 \le \|\hat v^{-i}_i\|^2$ then implies the global solution,

$$\hat v_i(+) = \hat v_i. \tag{137}$$

If $\hat v^{-i\,\prime}_i\hat v^{-i}_j > 0$ and there are three constraints then (M3) necessarily follows. The candidate solutions in this case are,

$$\hat v_i(+) = \begin{cases} \frac{1}{n}e_i, & (a=0, b=0) \\[2pt] \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}), & (a=0, b=1) \\[2pt] \hat v_i, & (a=1, b=0). \end{cases} \tag{138}$$

The solutions in (138) possess the following magnitudes,

$$\|\hat v_i(+)\|^2 = \begin{cases} \frac{1}{n^2}, & (a=0,b=0) \\[2pt] \|\hat v^{-i}_j\|^2 + \frac{1}{n^2}, & (a=0,b=1) \\[2pt] \|\hat v^{-i}_i\|^2 + \frac{1}{n^2}, & (a=1,b=0). \end{cases}$$

The assumption $\|\hat v^{-i}_j\|^2 > \|\hat v^{-i}_i\|^2$ then implies the optimal solution,

$$\hat v_i(+) = \hat v_j + \frac{1}{n}e_i(1 - n\hat v_{ji}). \tag{139}$$

In contrast, the assumption $\|\hat v^{-i}_j\|^2 \le \|\hat v^{-i}_i\|^2$ implies the global solution,

$$\hat v_i(+) = \hat v_i. \tag{140}$$

Combining (129), (131), (133), (135), (137), (139), and (140) together implies (123).
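The three cases of the update (123) translate directly into code. The sketch below is a minimal illustration under the paper's conventions (entries in $\{0, 1/n\}$ and $\hat v_{ii} = 1/n$ per Lemmas 7.23 and 7.25); the function name `dda_update` is our own, not the paper's.

```python
import numpy as np

def dda_update(v_i, v_j, i, n):
    """DDA normal-consensus update (123): node i combines its estimate
    v_i with a received estimate v_j. Assumes entries lie in {0, 1/n}
    with v_i[i] = 1/n (Lemmas 7.23 and 7.25)."""
    mask = np.arange(n) != i                 # the "-i" sub-vector indices
    e_i = np.eye(n)[i]
    if np.dot(v_i[mask], v_j[mask]) == 0:
        # Disjoint supports off node i: merge the two estimates.
        return v_i + v_j - v_j[i] * e_i
    if np.dot(v_i[mask], v_i[mask]) < np.dot(v_j[mask], v_j[mask]):
        # Received estimate has strictly larger magnitude, hence smaller
        # error by Lemma 7.26: adopt it, restoring the i-th entry to 1/n.
        return v_j + (1.0 / n - v_j[i]) * e_i
    return v_i                               # otherwise keep the current estimate

n = 4
v_i = np.array([1, 0, 0, 0]) / n             # node 0 initially knows only itself
v_j = np.array([0, 1, 1, 0]) / n             # node 1 has accumulated {1, 2}
v_new = dda_update(v_i, v_j, i=0, n=n)       # merged support {0, 1, 2}
```

Each branch mirrors one case of (123); the magnitude comparison implements the error ordering of Lemma 7.26, so the update never increases the error.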
Note that if $\|\hat v^{-i}_j\|^2 = \|\hat v^{-i}_i\|^2$ and $\hat v^{-i}_j \neq \hat v^{-i}_i$ then there are two global solutions to (125). By specifying (137) and (140) we have chosen the global solutions that are necessary for Lemma 7.29, which is in turn necessary for the DDA algorithm to obtain average-consensus under the sufficient communication condition stated in Theorem 4.9.

Lemma 7.28: (DDA Normal Consensus Estimate Non-Increasing Error) The error of every normal consensus estimate $\hat v_i(t)$ is a non-increasing function of $t \ge 0$ for all $i \in V$.

Proof: Lemma 7.27 implies that upon reception of any signal $S_{ij}(t^{ij}_0, t^{ij}_1)$, the normal consensus estimate $\hat v_i(t^{ij}_1)$ is updated to a solution of (125), thus

$$\hat v_i(t^{ij}_1(+)) \in \mathrm{span}\{\hat v_i(t^{ij}_1), \hat v_j(t^{ij}_0), e_i\} \cap R^n_{0,\frac{1}{n}}. \tag{141}$$

Lemma 7.23 implies $\hat v_i(t^{ij}_1) \in R^n_{0,\frac{1}{n}}$, thus (141) implies that any candidate solution $\hat v^{(1)}(t^{ij}_1(+))$ that does not satisfy $E_{\frac{1}{n}1_n}\big(\hat v^{(1)}(t^{ij}_1(+))\big) \le E_{\frac{1}{n}1_n}\big(\hat v(t^{ij}_1)\big)$ cannot be a solution to (125).

Lemma 7.29: (DDA Fixed Normal Consensus Estimate Given No Reduction in Error) Under the DDA algorithm, for any signal $S_{ij}(t^{ij}_0, t^{ij}_1)$ we have,

$$E^2_{\frac{1}{n}}\big(\hat v_i(t^{ij}_1(+))\big) = E^2_{\frac{1}{n}}\big(\hat v_i(t^{ij}_1)\big) \;\Rightarrow\; \hat v_i(t^{ij}_1(+)) = \hat v_i(t^{ij}_1). \tag{142}$$

Proof: Applying Lemma 7.26 to the LHS of (142) implies,

$$E^2_{\frac{1}{n}}\big(\hat v_i(t^{ij}_1(+))\big) = E^2_{\frac{1}{n}}\big(\hat v_i(t^{ij}_1)\big) \iff \|\hat v_i(t^{ij}_1(+))\|^2 = \|\hat v_i(t^{ij}_1)\|^2.$$

To prove (142) we thus have only to show,

$$\hat v_i(t^{ij}_1(+)) \neq \hat v_i(t^{ij}_1) \;\Rightarrow\; \|\hat v_i(t^{ij}_1(+))\|^2 \neq \|\hat v_i(t^{ij}_1)\|^2.$$
(143)
Under the update (123), to prove (143) we need only show that either of the two cases,

$$\begin{aligned} \hat v_i(t^{ij}_1(+)) &= \hat v_i(t^{ij}_1) + \hat v_j(t^{ij}_0) - \hat v_{ji}(t^{ij}_0)e_i, && \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) = 0, \\ \hat v_i(t^{ij}_1(+)) &= \hat v_j(t^{ij}_0) + e_i\big(\tfrac{1}{n} - \hat v_{ji}(t^{ij}_0)\big), && \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2, \end{aligned} \tag{144}$$

will imply $\|\hat v_i(t^{ij}_1(+))\|^2 > \|\hat v_i(t^{ij}_1)\|^2$. If $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) = 0$ then (144) together with (54) imply,

$$\|\hat v_i(t^{ij}_1(+))\|^2 = \frac{1}{n}\hat v_i(t^{ij}_1(+))'1_n = \frac{1}{n}\big(\hat v_i(t^{ij}_1) + \hat v_j(t^{ij}_0) - \hat v_{ji}(t^{ij}_0)e_i\big)'1_n = \frac{1}{n}\big(\hat v_i(t^{ij}_1)'1_n + \hat v_j(t^{ij}_0)'1_n - \hat v_{ji}(t^{ij}_0)\big) = \|\hat v_i(t^{ij}_1)\|^2 + \frac{1}{n}\hat v^{-i}_j(t^{ij}_0)'1_{n-1} > \|\hat v_i(t^{ij}_1)\|^2,$$

where the last inequality holds because (64) implies $\hat v^{-i}_j(t)'1_{n-1} \ge \frac{1}{n}$ for all $j \in V_{-i}$ and $t \ge 0$. If $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0$ and $\|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2$, then due to (144) it follows that,

$$\|\hat v_i(t^{ij}_1(+))\|^2 = \frac{1}{n}\hat v_i(t^{ij}_1(+))'1_n = \frac{1}{n}\Big(\hat v_j(t^{ij}_0) + e_i\big(\tfrac{1}{n} - \hat v_{ji}(t^{ij}_0)\big)\Big)'1_n = \frac{1}{n}\Big(\hat v_j(t^{ij}_0)'1_n + \frac{1}{n} - \hat v_{ji}(t^{ij}_0)\Big) = \frac{1}{n}\Big(\hat v^{-i}_j(t^{ij}_0)'1_{n-1} + \frac{1}{n}\Big) > \frac{1}{n}\Big(\hat v^{-i}_i(t^{ij}_1)'1_{n-1} + \frac{1}{n}\Big) = \frac{1}{n}\hat v_i(t^{ij}_1)'1_n = \|\hat v_i(t^{ij}_1)\|^2,$$

where the final inequality holds under the given assumption $\|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2$, and the second to last equality holds due to Lemma 7.25.

Lemma 7.30: (DDA Lower Bound on Increase in Normal Consensus Magnitude) Under the DDA algorithm, for any signal $S_{ij}(t^{ij}_0, t^{ij}_1)$ we have,

$$\|\hat v^{-i}_i(t^{ij}_1(+))\|^2 \ge \max\big\{\|\hat v^{-i}_i(t^{ij}_1)\|^2,\ \|\hat v^{-i}_j(t^{ij}_0)\|^2\big\}.$$
Proof: If $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) = 0$ then $\hat v_i(t^{ij}_1(+)) = \hat v_i(t^{ij}_1) + \hat v_j(t^{ij}_0) - \hat v_{ji}(t^{ij}_0)e_i$, and thus applying (119) implies,

$$\|\hat v^{-i}_i(t^{ij}_1(+))\|^2 = \frac{1}{n}\hat v^{-i}_i(t^{ij}_1(+))'1_{n-1} = \frac{1}{n}\big(\hat v^{-i}_i(t^{ij}_1)'1_{n-1} + \hat v^{-i}_j(t^{ij}_0)'1_{n-1}\big) \ge \max\big\{\|\hat v^{-i}_i(t^{ij}_1)\|^2,\ \|\hat v^{-i}_j(t^{ij}_0)\|^2\big\}.$$

If $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0$ and $\|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2$ then $\hat v_i(t^{ij}_1(+)) = \hat v_j(t^{ij}_0) + e_i\big(\frac{1}{n} - \hat v_{ji}(t^{ij}_0)\big)$ and thus,

$$\|\hat v^{-i}_i(t^{ij}_1(+))\|^2 = \frac{1}{n}\Big(\hat v^{-i}_j(t^{ij}_0) + e^{-i}_i\big(\tfrac{1}{n} - \hat v_{ji}(t^{ij}_0)\big)\Big)'1_{n-1} = \frac{1}{n}\hat v^{-i}_j(t^{ij}_0)'1_{n-1} \ge \max\big\{\|\hat v^{-i}_i(t^{ij}_1)\|^2,\ \|\hat v^{-i}_j(t^{ij}_0)\|^2\big\},$$

where the last inequality holds by the assumption $\|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2$. Finally, if $\hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0$ and $\|\hat v^{-i}_i(t^{ij}_1)\|^2 \ge \|\hat v^{-i}_j(t^{ij}_0)\|^2$ then $\hat v_i(t^{ij}_1(+)) = \hat v_i(t^{ij}_1)$ and thus,

$$\|\hat v^{-i}_i(t^{ij}_1(+))\|^2 = \frac{1}{n}\hat v^{-i}_i(t^{ij}_1(+))'1_{n-1} = \frac{1}{n}\hat v^{-i}_i(t^{ij}_1)'1_{n-1} \ge \max\big\{\|\hat v^{-i}_i(t^{ij}_1)\|^2,\ \|\hat v^{-i}_j(t^{ij}_0)\|^2\big\},$$

where the last inequality holds by the assumption $\|\hat v^{-i}_i(t^{ij}_1)\|^2 \ge \|\hat v^{-i}_j(t^{ij}_0)\|^2$.

Lemma 7.31: (DDA Upper Bound on the Magnitude $\|\hat v^{-i}_i(t)\|^2$) If $\|\hat v_i(t)\|^2 = \|\hat v_i(t')\|^2$ for all $(t, t') \in [t_0, t_1]$ then

$$\|\hat v^{-i}_i(t)\|^2 \le \|\hat v^{-j}_i(t')\|^2, \quad \forall j \in V_{-i},\ \forall (t, t') \in [t_0, t_1].$$

Proof: The result follows since $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$ implies $\hat v_{ij}(t) \in \{0, \frac{1}{n}\}$ for all $j \in V$ and $t \ge 0$, thus,

$$\|\hat v^{-i}_i(t)\|^2 = \|\hat v_i(t)\|^2 - \frac{1}{n^2} = \|\hat v_i(t')\|^2 - \frac{1}{n^2} \le \|\hat v_i(t')\|^2 - \hat v_{ij}(t')^2 = \|\hat v^{-j}_i(t')\|^2.$$
Lemma 7.32: (Error Expression for $C_{[0,t_1]}$ satisfying (37)) For any communication sequence $C_{[0,t_1]}$ satisfying (37), the total reduction in normal consensus error from $t = 0$ to $t = t_1(+)$ is,

$$E^2\big(C_{[0,t_1]}\big) = \sum_{i=1}^{n}\Big(E^2_{\frac{1}{n}1_n}\big(\hat v_i(0)\big) - E^2_{\frac{1}{n}1_n}\big(\hat v_i(t_1(+))\big)\Big) = \frac{n-1}{n} - \sum_{i=1}^{n} E^2_{\frac{1}{n}1_n}\big(\hat v_i(t_1(+))\big) = \sum_{\ell \in \mathbb N} E^2\big(C_{[t^\ell_0,t^\ell_1]}\big) \le \frac{n-1}{n}, \quad C_{[t^\ell_0,t^\ell_1]} \in \mathrm{SVCC}. \tag{145}$$

Proof: The proof is identical to Lemma 7.12 with (86), (35), and SVSC replaced by (145), (37), and SVCC respectively.

Lemma 7.33: (DDA Vanishing Reduction in Error for $C_{[0,t_1]}$ satisfying (37)) For any communication sequence $C_{[0,t_1]}$ satisfying (37) there exists an integer $\ell_\varepsilon$ such that,

$$E^2\big(C_{[t^\ell_0,t^\ell_1]}\big) \le \varepsilon, \quad \forall \ell \ge \ell_\varepsilon,$$

for any $\varepsilon > 0$.

Proof: The proof is identical to Lemma 7.13 with (86) and (35) replaced by (145) and (37) respectively.

Lemma 7.34: (DDA Lower Bound on Non-Zero Reduction in Error) For any communication sequence $C_{[t_0,t_1]}$ we have,

$$E^2\big(C_{[t_0,t_1]}\big) > 0 \;\Rightarrow\; E^2\big(C_{[t_0,t_1]}\big) \ge \frac{1}{n^2}. \tag{146}$$

Proof: Applying Lemma 7.26 to the LHS of (146) implies there exists some signal $S_{ij}(t^{ij}_0, t^{ij}_1) \in C_{[t_0,t_1]}$ such that,

$$\|\hat v_i(t^{ij}_1(+))\|^2 > \|\hat v_i(t^{ij}_1)\|^2. \tag{147}$$

We now show that (147) implies $\|\hat v_i(t^{ij}_1(+))\|^2 \ge \|\hat v_i(t^{ij}_1)\|^2 + \frac{1}{n^2}$; the result (146) then follows directly by Lemma 7.28 together with Lemma 7.26. Lemma 7.23 implies $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$ for all $i \in V$ and $t \ge 0$, thus under (147) it follows that,

$$\|\hat v_i(t^{ij}_1(+))\|^2 = \frac{1}{n}\hat v_i(t^{ij}_1(+))'1_n = \frac{1}{n^2}\big|\big\{\ell : \hat v_{i\ell}(t^{ij}_1(+)) = \tfrac{1}{n}\big\}\big| > \|\hat v_i(t^{ij}_1)\|^2 = \frac{1}{n}\hat v_i(t^{ij}_1)'1_n = \frac{1}{n^2}\big|\big\{\ell : \hat v_{i\ell}(t^{ij}_1) = \tfrac{1}{n}\big\}\big|. \tag{148}$$

From (148) it then follows,

$$\big|\big\{\ell : \hat v_{i\ell}(t^{ij}_1(+)) = \tfrac{1}{n}\big\}\big| > \big|\big\{\ell : \hat v_{i\ell}(t^{ij}_1) = \tfrac{1}{n}\big\}\big| \;\Rightarrow\; \big|\big\{\ell : \hat v_{i\ell}(t^{ij}_1(+)) = \tfrac{1}{n}\big\}\big| - 1 \ge \big|\big\{\ell : \hat v_{i\ell}(t^{ij}_1) = \tfrac{1}{n}\big\}\big|.$$
(149)
Under the constraint $\hat v_i(t) \in R^n_{0,\frac{1}{n}}$, the last inequality in (149) implies $\|\hat v_i(t^{ij}_1(+))\|^2 \ge \|\hat v_i(t^{ij}_1)\|^2 + \frac{1}{n^2}$.

Lemma 7.35: (DDA Existence of a Time for Zero Reduction in Error) For any communication sequence $C_{[0,t_1]}$ satisfying (37) there exists an integer $\ell_{\frac{1}{n^2}}$ such that,

$$E^2\big(C_{[t^\ell_0,t^\ell_1]}\big) = 0, \quad \forall \ell \ge \ell_{\frac{1}{n^2}}. \tag{150}$$

Proof: The result follows immediately by applying Lemma 7.28 and Lemma 7.34 to Lemma 7.33.

Theorem 4.9 (DDA Network Convergence to Average-Consensus)

Proof: For any communication sequence $C_{[0,t_1]}$ satisfying (37), Lemma 7.35 implies there exists an integer $\ell_{\frac{1}{n^2}}$ such that (150) holds. We now show that the condition,

$$E^2\big(C_{[t^\ell_0,t^\ell_1]}\big) = 0, \quad C_{[t^\ell_0,t^\ell_1]} \in \mathrm{SVCC} \tag{151}$$

implies (44) at $t = t^\ell_0$, and hence by Definition 7.1 a network average-consensus is obtained at $t^\ell_0$. Applying Lemma 7.30 to (151) implies,

$$\|\hat v^{-\hat i}_{\hat i}(t^{\hat i j}_1(+))\|^2 \ge \|\hat v^{-\hat i}_j(t^{\hat i j}_0)\|^2 \ge \|\hat v^{-j}_j(t^{\hat i j}_0)\|^2, \quad \forall j \in V_{-\hat i}, \tag{152}$$

where the last inequality holds due to Lemma 7.31. Likewise, applying Lemma 7.30 and Lemma 7.31 to (151) implies,

$$\|\hat v^{-j}_j(t^{j\hat i}_1(+))\|^2 \ge \|\hat v^{-j}_{\hat i}(t^{j\hat i}_0)\|^2 \ge \|\hat v^{-\hat i}_{\hat i}(t^{j\hat i}_0)\|^2, \quad \forall j \in V_{-\hat i}. \tag{153}$$

Note that applying Lemma 7.29 to (151) implies that (152) and (153) can be combined to obtain,

$$\|\hat v^{-\hat i}_{\hat i}(t^\ell_0)\|^2 = \|\hat v^{-j}_{\hat i}(t^\ell_0)\|^2,\ \forall j \in V_{-\hat i} \;\Rightarrow\; \|\hat v_{\hat i}(t^\ell_0)\|^2 - \frac{1}{n^2} = \|\hat v_{\hat i}(t^\ell_0)\|^2 - \hat v_{\hat i j}(t^\ell_0)^2,\ \forall j \in V_{-\hat i}. \tag{154}$$

The second line in (154) together with Lemma 7.25 implies,

$$\hat v_{\hat i j}(t^\ell_0) = \frac{1}{n},\ \forall j \in V, \quad \text{and thus } \hat v_{\hat i}(t^\ell_0) = \frac{1}{n}1_n.$$
Since $C_{[t^\ell_0,t^\ell_1]} \in$ SVCC, Lemma 7.29 together with the update (123) implies $\hat v_j(t^\ell_0) = \frac{1}{n}1_n$ for each $j \in V_{-\hat i}$; thus (44) holds, and so by Definition 7.1 average-consensus is obtained at time $t^\ell_0 < t_1$.

Remark 7.36: We observe that if the SVCC condition is defined by the condition (C) stated in Theorem 4.7, then using the definition (37) for the IVCC condition will not imply Theorem 4.9. In other words, using condition (C) to define an SVCC sequence implies there exist examples of IVCC sequences for which the DDA algorithm will not obtain average-consensus. This is why we have defined SVCC only by (36), which is actually a special case of the condition (C) stated in Theorem 4.7. Furthermore, the DDA normal consensus update (123) is only a global solution to (26); it is not a unique solution. Under (20) and (21), the alternative global solution to (26) is,

$$\hat v_i(t^{ij}_1(+)) = \begin{cases} \hat v_i(t^{ij}_1) + \hat v_j(t^{ij}_0) - \hat v_{ji}(t^{ij}_0)e_i, & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) = 0, \\[2pt] \hat v_j(t^{ij}_0) + e_i\big(\frac{1}{n} - \hat v_{ji}(t^{ij}_0)\big), & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 < \|\hat v^{-i}_j(t^{ij}_0)\|^2, \\[2pt] \hat v_i(t^{ij}_1), & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 > \|\hat v^{-i}_j(t^{ij}_0)\|^2, \\[2pt] \hat v_j(t^{ij}_0), & \text{if } \hat v^{-i}_i(t^{ij}_1)'\hat v^{-i}_j(t^{ij}_0) > 0,\ \|\hat v^{-i}_i(t^{ij}_1)\|^2 = \|\hat v^{-i}_j(t^{ij}_0)\|^2. \end{cases} \tag{155}$$

The above remark still holds even when the alternative global solution (155) is used to update the normal consensus estimate; however, randomly switching between the updates (123) and (155) leads to the following conjecture.

Conjecture 7.37: Let the SVCC condition be defined by the condition (C) stated in Theorem 4.7.
Suppose upon reception of each signal the normal consensus estimate update is defined by (123) with probability $p \in (0,1)$ and by (155) with probability $1-p$. Then (6) holds at time $t = t_1(+)$ almost surely for any communication sequence $\mathcal{C}_{[0,t_1]}$ satisfying (37).

The significance of the above result is due to the fact that condition (C) in Theorem 4.7 is considerably weaker than (36). If Conjecture 7.37 holds, then by defining the DDA algorithm using the above randomized protocol, and defining the SVCC condition by the condition (C) stated in Theorem 4.7, the Venn diagram in Fig. 1 will be completely accurate. We note there are alternative definitions of the SVCC condition and OH algorithm that will also lead to the same Venn diagram as Fig. 1. However, we know of no weaker sufficient condition than that stated in Theorem 4.9 for convergence of the DDA algorithm (27)−(32), and this sufficient condition is based entirely on the SVCC condition (36).

B. Comparison Algorithms: Gossip and ARIS

In this section we define the two comparison algorithms, Gossip and ARIS, in terms of the class of distributed algorithms (3), (7).

1) Comparison Algorithm 1 (Gossip): The Gossip algorithm proposed in [3] implies a signal specification and knowledge set update defined respectively as (156) and (157) below.

Gossip Algorithm:

Signal Specification:
$S^{ij}(t^{ij}_0, t^{ij}_1) = K_j(t^{ij}_0)$  (156)

Knowledge Set Update:
$K_i(t^{ij}_1(+)) = \{\hat{s}_i(t^{ij}_1(+))\},$
$\hat{s}_i(t^{ij}_1(+)) = \tfrac{1}{2}\big(\hat{s}_i(t^{ij}_1) + \hat{s}_j(t^{ij}_0)\big)$  (157)

Initialization:
$K_i(0) = \{\hat{s}_i(0)\}, \quad \hat{s}_i(0) = s_i(0), \quad \forall\, i \in V.$
(158)

We note that under the Gossip algorithm the only communication conditions proven to ensure average-consensus require instantaneous and bi-directional communication, thus implying,

$S^{ij}(t^{ij}_0, t^{ij}_1) \in \mathcal{C}_{[0,t_1]} \;\Leftrightarrow\; S^{ji}(t^{ji}_0, t^{ji}_1) \in \mathcal{C}_{[0,t_1]}, \quad t^{ij}_0 = t^{ij}_1 = t^{ji}_0 = t^{ji}_1,$  (159)

for any signal $S^{ij}(t^{ij}_0, t^{ij}_1) \in \mathcal{C}_{[0,t_1]}$. Under the assumption (159), any $\mathcal{C}_{[0,t_1]}$ satisfying the IVSC condition (35) will imply the Gossip algorithm obtains average-consensus at time $t = t_1(+)$, that is, (6) is satisfied at $t = t_1(+)$. We note that in some works (e.g. [17], [2]) the Gossip algorithm is referred to as "pairwise averaging".

2) Comparison Algorithm 2 (ARIS): The adaptation of the randomized information spreading algorithm proposed in [19] that we will consider can be defined by the signal specification (160) and knowledge set update (161) below.

ARIS Algorithm:

Signal Specification:
$S^{ij}(t^{ij}_0, t^{ij}_1) = K_j(t^{ij}_0) \setminus \{s_j(0), j, n\}$  (160)

Knowledge Set Update:
$K_i(t^{ij}_1(+)) = \{k_i(t^{ij}_1(+)),\, \hat{s}_i(t^{ij}_1(+)),\, W_i(t^{ij}_1(+)),\, \hat{w}_i(t^{ij}_1(+)),\, s_i(0),\, i,\, n\}$  (161)

Initialization:
$K_i(0) = \{k_i(0),\, \hat{s}_i(0),\, W_i(0),\, \hat{w}_i(0),\, s_i(0),\, i,\, n\},$
$k_i(0) = 0, \quad \hat{s}_i(0) = s_i(0)/n, \quad \hat{w}_i(0) = e_i,$
$W_i(0) \in \mathbb{R}^{d \times r}, \quad W_{i\ell q}(0) \sim \exp\{s_{i\ell}(0)\}, \quad \forall\, \ell = 1, 2, \ldots, d,\ q = 1, 2, \ldots, r.$  (162)

We clarify that the $(\ell q)^{th}$ element of the matrix $W_i(0)$ is an independent realization of a random variable from an exponential distribution with rate parameter $s_{i\ell}(0)$; this is why the elements of each initial consensus vector $s_i(0)$ are required to be positive valued under any version of the RIS algorithm. We next define the ARIS update for each term in the knowledge set $K_i(t^{ij}_1(+))$; we omit the time indices for convenience.
ARIS Knowledge Set Update Procedure:

$\hat{w}^1 = \begin{cases} \mathbf{1}_{n-\delta}[\hat{w}_i + \hat{w}_j], & \text{if } k_j = k_i, \\ \mathbf{1}_{n-\delta}[e_i + \hat{w}_j], & \text{if } k_j > k_i, \\ \mathbf{0}, & \text{if } k_j < k_i, \end{cases} \qquad k_i(+) = \begin{cases} k_j, & \text{if } k_j > k_i,\ \hat{w}^1 \neq \mathbf{1}_n, \\ k_j + 1, & \text{if } k_j > k_i,\ \hat{w}^1 = \mathbf{1}_n, \\ k_i, & \text{if } k_j = k_i,\ \hat{w}^1 \neq \mathbf{1}_n, \\ k_i + 1, & \text{if } k_j = k_i,\ \hat{w}^1 = \mathbf{1}_n, \\ k_i, & \text{if } k_j < k_i, \end{cases}$  (163)

$\hat{w}_i(+) = \begin{cases} \hat{w}^1, & \text{if } \hat{w}^1 \notin \{\mathbf{1}_n, \mathbf{0}\}, \\ e_i, & \text{if } \hat{w}^1 = \mathbf{1}_n, \\ \hat{w}_i, & \text{if } \hat{w}^1 = \mathbf{0}, \end{cases}$  (164)

$\hat{W}^1_{\ell q} = \begin{cases} \sim \exp\{s_{i\ell}(0)\}, & \text{if } k_i(+) > k_i, \\ \min\{W_{j\ell q}, W_{i\ell q}\}, & \text{if } k_i(+) = k_i = k_j, \\ W_{i\ell q}, & \text{if } k_j < k_i, \end{cases} \quad \ell = 1, \ldots, d,\ q = 1, \ldots, r,$

$W_{i\ell q}(+) = \begin{cases} \sim \exp\{s_{i\ell}(0)\}, & \text{if } k_j > k_i,\ \hat{w}^1 = \mathbf{1}_n, \\ \min\{W_{j\ell q}, \hat{W}^1_{\ell q}\}, & \text{if } k_j = k_i,\ \hat{w}^1 \neq \mathbf{1}_n, \\ \hat{W}^1_{\ell q}, & \text{if } k_j \leq k_i, \end{cases} \quad \ell = 1, \ldots, d,\ q = 1, \ldots, r,$  (165)

$\bar{w}_{i\ell} = \frac{1}{r}\sum_{q=1}^{r} \min\{W_{j\ell q}(t^{ij}_0), W_{i\ell q}(t^{ij}_1)\}, \quad \ell = 1, \ldots, d,$
$w_{i\ell} = \frac{1}{r}\sum_{q=1}^{r} \min\{W_{j\ell q}(t^{ij}_0), \hat{W}^1_{\ell q}\}, \quad \ell = 1, \ldots, d.$

$\hat{s}_i(+) = \begin{cases} \hat{s}_i, & \text{if } k_j < k_i, \\ \hat{s}_i, & \text{if } k_j = k_i,\ \hat{w}^1 \neq \mathbf{1}_n, \\ \dfrac{k_i \hat{s}_i + \bar{w}^{-1}_i/n}{k_i + 1}, & \text{if } k_j = k_i,\ \hat{w}^1 = \mathbf{1}_n, \\ \hat{s}_j, & \text{if } k_j > k_i,\ \hat{w}^1 \neq \mathbf{1}_n, \\ \dfrac{k_j \hat{s}_j + w^{-1}_i/n}{k_j + 1}, & \text{if } k_j > k_i,\ \hat{w}^1 = \mathbf{1}_n. \end{cases}$  (166)

The integer $r$ is an RIS algorithm parameter that affects the convergence error of the algorithm; as $r$ increases the algorithm is expected to converge closer to the true average-consensus vector $\bar{s}(0)$. Based on the strong law of large numbers we make the following conjecture.

Conjecture 7.38: For any $\mathcal{C}_{[0,t_1]}$ satisfying the SVSC condition (34), both the RIS algorithm proposed in [19] and the ARIS algorithm (160)−(166) imply (6) at time $t = t_1(+)$ almost surely in the limit as $r$ approaches infinity.
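The principle behind Conjecture 7.38 can be checked numerically: the minimum of independent exponential random variables with rates $s_{1\ell}(0), \ldots, s_{n\ell}(0)$ is itself exponential with rate $\sum_i s_{i\ell}(0)$, so by the strong law of large numbers the quantity $\bar{w}^{-1}_{i\ell}/n$ appearing in (166) approaches the average as $r \to \infty$. The following sketch is our own illustration, not part of the algorithms above; the values of $n$, $r$, and $s$ are arbitrary assumptions.

```python
import random

random.seed(1)
s = [1.0, 2.0, 3.0, 4.0, 5.0]   # positive initial values s_i(0); RIS requires positivity
n, r = len(s), 50_000           # r is the RIS accuracy parameter

# W[i][q] ~ exp{s_i(0)}: r independent draws per node (expovariate takes the rate)
W = [[random.expovariate(si) for _ in range(r)] for si in s]

# network-wide minimum per draw, as accumulated by the min-updates in (165)
mins = [min(W[i][q] for i in range(n)) for q in range(r)]

w_bar = sum(mins) / r           # empirical mean of the minimums
estimate = (1.0 / w_bar) / n    # the quantity w_bar^{-1}/n used in (166)

true_average = sum(s) / n
print(estimate, true_average)
```

Here inverting the empirical mean of the per-draw minimums recovers $\sum_i s_i(0)$, and dividing by $n$ gives the average; the ARIS updates (163)−(165) compute exactly these minimums in a distributed fashion.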
As explained in Sec. V-A, the total resource cost of the RIS algorithm increases on the order $O(rd)$, and likewise the total resource cost of the ARIS algorithm increases on the order $O(rd + n)$; it is therefore not practical to assume $r$ can be made arbitrarily large, as Conjecture 7.38 requires. Note that due to the resource costs of the RIS and ARIS algorithms, Conjecture 7.38 does not contradict Conjecture 7.4, even though both assume the same communication conditions.

An informal description of the ARIS algorithm is as follows: at $t = 0$ each node generates $r$ random variables independently from an exponential distribution with rate parameter $s_{i\ell}(0)$ for each $\ell = 1, 2, \ldots, d$. A local counter $k_i$ is set to zero and a local vector $\hat{w}_i$ is set to $e_i$. Upon reception of any signal, if the transmitted counter $k_j$ is less than the local counter $k_i$, then the signal is ignored; if $k_j = k_i$ then the receiving node records the minimum between the received $r$ random variables and the local $r$ random variables for each $\ell = 1, 2, \ldots, d$. The vector $\hat{w}_i$ maintains a record of the set of nodes that have a communication path to node $i$ for any given counter value. Whenever $\hat{w}_i(t)$ is updated to $\mathbf{1}_n$, the counter $k_i$ is increased by one, the consensus estimate $\hat{s}_i(t)$ is updated as a running average of the inverse of the currently recorded mean of the minimum random variables for each $\ell = 1, 2, \ldots, d$, $\hat{w}_i(t)$ is reset to $e_i$, and a new set of $rd$ random variables is locally generated. If $k_j > k_i$ then the local counter $k_i$ is set to $k_j$, the consensus estimate $\hat{s}_i(t)$ is set to the received estimate $\hat{s}_j$, $\hat{w}_i(t)$ is reset to $e_i$ and updated by $\hat{w}_j$, a new set of $rd$ random variables is locally generated, and node $i$ then records the minimum between the received $r$ random variables and the newly generated local $r$ random variables for each $\ell = 1, 2, \ldots, d$.

It is not difficult to see that the local counter of each node will approach infinity iff $\mathcal{C}_{[0,t_1]}$ satisfies the IVSC condition (35); thus each element in the consensus estimate $\hat{s}_i(t)$ of each node $i \in V$ will approach the inverse of the mean of the minimum of infinitely many random variables. By a well-known property of the minimum of a set of independently generated exponential random variables, together with the strong law of large numbers, we make the following conjecture.

Conjecture 7.39: For any positive integer $r$, the ARIS algorithm (160)−(166) implies (6) at time $t = t_1(+)$ iff the communication sequence $\mathcal{C}_{[0,t_1]}$ satisfies the IVSC condition (35).

The above conjecture is significant because, besides the flooding technique and the DA algorithm (15)−(19), there is no other consensus protocol in the literature that guarantees average-consensus for every communication sequence $\mathcal{C}_{[0,t_1]}$ that satisfies the IVSC condition (35).

C. Resource Cost Derivations

In this section we explain how the entries of Table I in Sec. V-A are obtained. We will assume an arbitrary vector $v \in \mathbb{R}^m$ requires $2m$ scalars to be defined, and similarly, an unordered set of scalars $S$ with cardinality $|S|$ requires $|S|$ scalars to be defined, that is,

Resource cost of $v \in \mathbb{R}^m = 2m$, $\quad$ Resource cost of $S = |S|$.  (167)

The rationale for (167) is that each element in $v$ requires one scalar to define the location of the element and one scalar to define the value of the element itself. The location of each scalar in $S$ is irrelevant because $S$ is an unordered set; thus $S$ can be defined using only $|S|$ scalars. An alternative to (167) is to assume that an arbitrary vector $v \in \mathbb{R}^m$ requires $m$ scalars to be defined.
Although this alternative will imply different entries for the exact values in Table I, the order of the storage and communication costs under each algorithm would remain the same. We adhere to (167) for our resource cost computations due to its relative precision.

Next observe that under the BM, OH, and DDA algorithms each normal consensus estimate $\hat{v}_i(t)$ will contain only elements belonging to the set $\{0, \frac{1}{n}\}$; similarly, each vector $\hat{w}_i(t)$ under the ARIS algorithm contains only elements belonging to the set $\{0, 1\}$. Under the BM, OH, and DDA algorithms we can thus define each $\hat{v}_i(t)$ based only on the set $\hat{V}_i(t)$,

$\hat{V}_i(t) = \begin{cases} \{-\ell \,:\, \hat{v}_{i\ell}(t) = 0\}, & \text{if } \hat{v}_i(t)'\mathbf{1}_n \geq 1/2, \\ \{\ell \,:\, \hat{v}_{i\ell}(t) = \frac{1}{n}\}, & \text{if } \hat{v}_i(t)'\mathbf{1}_n < 1/2. \end{cases}$  (168)

A set $\hat{W}_i(t)$ can be defined analogous to (168) to specify $\hat{w}_i(t)$. Note that both $\hat{V}_i(t)$ and $\hat{W}_i(t)$ are unordered sets of scalars, and thus, assuming average-consensus has not been obtained, we have from (168), (45), and (162),

$1 \leq |\hat{V}_i(t)|,\, |\hat{W}_i(t)| \leq \lfloor n/2 \rfloor.$  (169)

Under the condition (45), applying (167) and (169) to the algorithm update equations (10), (11), (20), (21), (27), (28) then yields the respective upper and lower resource costs for the BM, OH, and DDA algorithms stated in Table I in Sec. V-A. Likewise, applying (167) and (169) to (160)−(165) yields the respective upper and lower resource costs for the ARIS algorithm. As detailed in (156)−(158), the Gossip algorithm proposed in [3] requires only the local consensus estimate $\hat{s}_i(t)$ to be communicated and stored at each node; thus the communication and storage costs are both fixed at $2d$ under this algorithm.
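The encoding (168) can be made concrete with a short sketch (our illustration; the helper names `encode` and `decode` are hypothetical and not part of the paper). Whichever of the zero-valued or $\frac{1}{n}$-valued index sets is the minority is stored, with negated indices marking the zero positions, so at most $\lfloor n/2 \rfloor$ scalars are ever needed:

```python
from fractions import Fraction

def encode(v_hat):
    """Encode a vector with entries in {0, 1/n} as the index set of (168).

    Per (169) we assume average-consensus has not been obtained, so neither
    candidate index set is empty. Indices are 1-based, as in the paper;
    negative indices mark zero entries (used when 1/n entries are the majority).
    """
    ones = [l + 1 for l, x in enumerate(v_hat) if x != 0]
    zeros = [-(l + 1) for l, x in enumerate(v_hat) if x == 0]
    # v_hat' 1_n >= 1/2 iff at least half the entries equal 1/n
    return zeros if sum(v_hat) >= Fraction(1, 2) else ones

def decode(indices, n):
    """Recover v_hat from its index-set encoding (nonempty by (169))."""
    if indices[0] < 0:                    # zero positions were stored
        v = [Fraction(1, n)] * n
        for idx in indices:
            v[-idx - 1] = Fraction(0)
    else:                                 # 1/n positions were stored
        v = [Fraction(0)] * n
        for idx in indices:
            v[idx - 1] = Fraction(1, n)
    return v
```

Exact rationals preserve the $\{0, \frac{1}{n}\}$ structure, and counting the scalars in the returned set reproduces the bound (169).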
We next observe that under the DA algorithm (15)−(19), if a normal consensus estimate $\hat{v}_i(t)$ contains any elements equal to zero then these elements may be omitted from the signal and knowledge set. Given the condition (45), it follows from (167) that the minimum number of scalars required to define $\hat{v}_i(t)$ under the DA algorithm is 2, while the maximum number of scalars required to define $\hat{v}_i(t)$ is $2n$. Together with these upper and lower limits on the resource cost of $\hat{v}_i(t)$, applying (167) to (15) and (16) then yields the upper and lower resource costs for the DA algorithm stated in Table I in Sec. V-A.

D. Example SVSC Sequence

The sequence defined in (170) below is an SVSC sequence under which the DA, DDA, and BM algorithms all obtain average-consensus at the same instant.

$\mathcal{C}_{[0,\, 4n-5]} = \{S^1, S^2, \ldots, S^{2(n-1)}\},$
$S^i = S^{i+1,i}(2(i-1),\, 2i-1), \quad i = 1, 2, \ldots, n-1,$
$S^n = S^{1,n}(2(n-1),\, 2n-1),$
$S^i = S^{i-n+1,i-n}(2(i-1),\, 2i-1), \quad i = n+1, n+2, \ldots, 2n-2.$  (170)

It is a "unit-delay" sequence because each signal sent at time $t$ is received at time $t+1$, and if a signal is received at time $t+1$ then the next signal is sent at time $t+2$. Together with Theorem 4.3, the example (170) implies that the DA and DDA algorithms possess the weakest possible necessary conditions for average-consensus that any algorithm can have. In contrast, the OH algorithm does not achieve average-consensus under (170).

REFERENCES

[1] M. Bawa, H. Garcia-Molina, A. Gionis, and R. Motwani, Estimating Aggregates on a Peer-to-Peer Network, Technical Report, Stanford University, 2003. URL: http://dbpubs.stanford.edu/pub/2003-24.
[2] V. Blondel, J. Hendrickx, A. Olshevsky, and J. Tsitsiklis, Convergence in Multi-agent Coordination, Consensus, and Flocking, in Proceedings of IEEE Conf.
on Decision and Control, Seville, Spain, 2005.
[3] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, Randomized Gossip Algorithms, IEEE Trans. Info. Theory, vol.52, pp.2508-2530, 2006.
[4] S. Boyd, L. Xiao, and S. Lall, A Scheme for Robust Distributed Sensor Fusion based on Average Consensus, in Proceedings of the 4th International IEEE Symposium on Information Processing in Sensor Networks, Los Angeles, CA, 2005.
[5] S. Boyd, L. Xiao, and S. Lall, Distributed Average Consensus with Time-varying Metropolis Weights, in Proceedings of the 4th International Conference on Information Processing in Sensor Networks, Los Angeles, CA, 2005.
[6] J. Considine, F. Li, G. Kollios, and J. Byers, Approximate Aggregation Techniques for Sensor Databases, International Conference on Data Engineering, 2004.
[7] J. Cortes, Finite-time Convergent Gradient Flows with Applications to Network Consensus, Automatica, vol.42, no.11, pp.1993-2000, 2006.
[8] G. Cybenko, Dynamic Load Balancing for Distributed Memory Multiprocessors, Journal of Parallel and Distributed Computing, vol.7, no.2, pp.279-301, 1989.
[9] M. Franceschelli, A. Giua, and C. Seatzu, Consensus on the Average on Arbitrary Strongly Connected Digraphs Based on Broadcast Gossip Algorithms, 1st IFAC Workshop on Estimation and Control of Networked Systems, Venice, Italy, 2009.
[10] Y. Hatano and M. Mesbahi, Agreement over Random Networks, IEEE Trans. on Automat. Control, vol.50, no.11, pp.1867-1872, 2005.
[11] S. Kar and J. Moura, Distributed Consensus Algorithms in Sensor Networks with Imperfect Communication: Link Failures and Channel Noise, IEEE Transactions on Signal Processing, vol.57, no.1, pp.355-369, 2009.
[12] D. Kempe, A. Dobra, and J. Gehrke, Gossip-based Computation of Aggregate Information, in Proceedings of the 44th IEEE Symposium on Foundations of Computer Science, pp.482-491, 2003.
[13] V. Krishnamurthy, K. Topley, and G. Yin, Consensus Formation in a Two-Time-Scale Markovian System, SIAM MMS, vol.7, no.4, pp.1898-1927, 2009.
[14] T. Li and J. Zhang, Mean Square Average-Consensus under Measurement Noises and Fixed Topologies: Necessary and Sufficient Conditions, Automatica, vol.45, pp.1929-1936, 2009.
[15] P. Lin and Y. Jia, Average Consensus in Networks of Multi-agent with both Switching Topology and Coupling Time-Delay, Physica A, vol.387, pp.303-313, 2008.
[16] M. Mehyar, D. Spanos, J. Pongsajapan, S. Low, and R. Murray, Asynchronous Distributed Averaging on Communication Networks, IEEE Trans. on Networking, vol.15, no.3, pp.512-520, 2007.
[17] C. Moallemi and B. Van Roy, Consensus Propagation, IEEE Trans. on Info. Theory, vol.54, no.7, pp.2997-3007, 2008.
[18] L. Moreau, Stability of Multi-agent Systems with Time-dependent Communication Links, IEEE Trans. on Automat. Control, vol.50, no.2, pp.169-182, 2005.
[19] D. Mosk-Aoyama and D. Shah, Fast Gossip Algorithm for Computing Separable Functions, IEEE Trans. on Info. Theory, vol.54, no.7, pp.2997-3007, 2008.
[20] R. Olfati-Saber, Distributed Kalman Filter with Embedded Consensus Filters, in Proceedings of the 44th IEEE Conf. on Decision and Control, Seville, Spain, 2005.
[21] R. Olfati-Saber and R. Murray, Consensus Problems in Networks of Agents with Switching Topology and Time-Delays, IEEE Trans. on Automatic Control, vol.49, no.9, 2004.
[22] R. Olfati-Saber, R. Murray, and D. Spanos, Distributed Sensor Fusion using Dynamic Consensus, in Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, 2005.
[23] M. Porfiri and D. Stilwell, Consensus Seeking Over Random Weighted Directed Graphs, IEEE Trans. on Automatic Control, vol.52, no.9, pp.1767-1773, 2007.
[24] W. Ren and R. Beard, Consensus Seeking in Multi-Agent Systems under Dynamically Changing Interaction Topologies, IEEE Trans. on Automatic Control, vol.50, no.5, pp.655-661, 2005.
[25] A. Tahbaz-Salehi and A. Jadbabaie, A Necessary and Sufficient Condition for Consensus Over Random Networks, IEEE Trans. on Automatic Control, vol.53, no.3, 2008.
[26] B. Touri and A. Nedic, Distributed Consensus over Network with Noisy Links, 12th International Conference on Information Fusion, Seattle, WA, 2009.
[27] J. Tsitsiklis, D. Bertsekas, and M. Athans, Distributed Asynchronous Deterministic and Stochastic Gradient Optimization Algorithms, IEEE Trans. on Automatic Control, vol.31, no.9, pp.803-812, 1986.
[28] T. Zhang and H. Yu, Average Consensus in Networks of Multi-agent with Multiple Time-varying Delays, International Journal of Communications and System Sciences, vol.3, pp.196-203, 2010.
[29] M. Zhu and S. Martinez, Dynamic Average Consensus on Synchronous Communication Networks, in Proceedings of the American Control Conference, Seattle, 2008.
