Monitoring Link Faults in Nonlinear Diffusively-coupled Networks


Authors: Miel Sharf (Graduate Student Member, IEEE) and Daniel Zelazo (Senior Member, IEEE)

Abstract—Fault detection and isolation is an area of engineering dealing with designing on-line protocols for systems that allow one to identify the existence of faults, pinpoint their exact location, and overcome them. We consider the case of multi-agent systems, where faults correspond to the disappearance of links in the underlying graph, simulating a communication failure between the corresponding agents. We study the case in which the agents and controllers are maximal equilibrium-independent passive (MEIP), and use the known connection between steady-states of these multi-agent systems and network optimization theory. We first study asymptotic methods of differentiating the faultless system from its faulty versions by studying their steady-state outputs. We explain how to apply the asymptotic differentiation to detect and isolate communication faults, with graph-theoretic guarantees on the number of faults that can be isolated, assuming the existence of a "convergence assertion protocol", a data-driven method of asserting that a multi-agent system converges to a conjectured limit. We then construct two data-driven model-based convergence assertion protocols. We demonstrate our results by a case study.

I. INTRODUCTION

Multi-agent systems (MAS) have been widely studied in recent years, as they present both a variety of applications and a deep theoretical framework [1]–[3]. One of the deepest concerns when considering applications of MAS is communication failures, which can drive the agents to act poorly, or fail their task altogether. These communication failures, which we term network failures, can either be accidental or planned by an adversary. Detecting network faults and dealing with them in real time is essential for the network to be secure.
Fault detection and isolation (FDI) for multi-agent systems usually deals with faults in one of the agents, see e.g. [4] and references therein. The possibility of faults in the communication links was first studied in [5] using the notion of structural controllability, which was later used in [6] to solve the problem of leader localization. The problem of network FDI, i.e., detection and isolation of network faults, was studied primarily for linear and time-invariant (LTI) systems with a known model. In [7], [8], the authors use jump discontinuities in the derivative of the output to detect topological changes in the network. Tools from switching systems theory, namely mode-observability, were used in [9] for network FDI. Combinatorial tools were used in [10], [11] to solve the FDI problem for consensus-seeking networks. Recently, [12] proposed a network FDI method which allows an uncertainty in the model, but is restricted to networks with LTI systems. A related problem, in which one tries to distinguish between multi-agent systems with identical agents but different communication graphs, was studied in [13]–[15], of which only [14] also deals with nonlinear agents. We aim at a network FDI scheme applicable also to nonlinear systems by relying on another concept widespread in multi-agent systems, namely passivity. Passivity was first used to address faults by [16] for control-affine systems, although only fault-tolerance is addressed, and no synthesis procedures are suggested. Later works used FDI for a single nonlinear agent [16]–[19]. To the extent of our knowledge, passivity has not been previously used to give network FDI schemes, and no other works consider networks with nonlinear components.

M. Sharf and D. Zelazo are with the Faculty of Aerospace Engineering, Israel Institute of Technology, Haifa, Israel. mielsharf@gmail.com, dzelazo@technion.ac.il.
Passivity theory is a cornerstone of the theoretical framework of networks of dynamical systems [20]. It allows the analysis of multi-agent systems to be decoupled into two separate layers, the dynamic system layer and the information exchange layer. Passivity theory was first used to study the convergence properties of network systems in [21]. Many variations and extensions of passivity have been applied in different aspects of multi-agent systems. For example, the related concepts of incremental passivity and relaxed co-coercivity have been used to study various synchronization problems [2], [22], and more general frameworks include Port-Hamiltonian systems on graphs [23]. Passivity is also widely used in coordinated control of robotic systems [24]. One prominent variant is maximal equilibrium-independent passivity (MEIP), which was applied in [25] in order to reinterpret the analysis problem for a multi-agent system as a pair of network optimization problems. Network optimization is a branch of optimization theory dealing with optimization of functions defined over graphs [26]. The main result of [25] showed that the asymptotic behavior of these networked systems is (inverse) optimal with respect to a family of network optimization problems. In fact, the steady-state input-output signals of both the dynamical systems and the controllers comprising the networked system can be associated to the optimization variables of either an optimal flow or an optimal potential problem; these are the two canonical dual network optimization problems described in [26]. The results of [25] were used in [27], [28] in order to solve the synthesis problem for multi-agent systems, and were further used in [14], [29] to solve the network identification problem. We aim to use the network optimization framework of [25], [27], [28] for analysis and synthesis of multi-agent systems in order to provide a strategy for detecting and isolating network faults.
We also consider adversarial games regarding communication faults. We strive to give graph-theoretic results, showing that network fault detection and isolation can be done for any MEIP multi-agent system, as long as the graph G satisfies certain conditions. We show that if the graph G is "connected enough", we can solve the network FDI problem. Namely, if G is 2-connected, then detecting the existence of any number of faults is possible, and if G is k-connected with k > 2, we can isolate up to k − 2 faults.

The rest of the paper is as follows. Section II surveys the relevant parts of the network optimization framework. Section III presents the problem formulation of this work and states the assumptions used throughout the paper. Section IV presents the first technical tool used for building the network fault detection schemes, namely edge-indication vectors, and shows how to construct them. Section V uses edge-indication vectors to design network FDI schemes, as well as strategies for adversarial games, assuming the existence of a "convergence assertion protocol", a data-driven method of asserting that a given MAS converges to a conjectured limit. Section VI studies these convergence assertion protocols, prescribing two approaches for constructing them. Lastly, we present simulations demonstrating the constructed algorithms.

Notations: We use basic notions from algebraic graph theory [30]. An undirected graph G = (V, E) consists of a finite set of vertices V and edges E ⊂ V × V. We denote by e = {i, j} ∈ E the edge that has ends i and j in V. For each edge e, we pick an arbitrary orientation and denote e = (i, j). The incidence matrix of G, denoted E_G ∈ R^{|V|×|E|}, is defined such that for the edge e = (i, j) ∈ E, [E_G]_{ie} = +1, [E_G]_{je} = −1, and [E_G]_{ℓe} = 0 for ℓ ≠ i, j. We also use simple notions from graph theory [31].
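For concreteness, the sign convention of the incidence matrix can be sketched in code; the helper below is a minimal illustration of our own (the function name and edge-list representation are not from the paper):

```python
import numpy as np

def incidence_matrix(n_nodes, oriented_edges):
    """Build E_G in R^{|V| x |E|}: for the oriented edge e = (i, j),
    [E_G]_{ie} = +1, [E_G]_{je} = -1, and all other entries are 0."""
    E = np.zeros((n_nodes, len(oriented_edges)))
    for e, (i, j) in enumerate(oriented_edges):
        E[i, e] = +1.0
        E[j, e] = -1.0
    return E

# Path graph on 3 nodes with oriented edges (0, 1) and (1, 2).
E_G = incidence_matrix(3, [(0, 1), (1, 2)])
```

Each column of E_G sums to zero, so E_G^T applied to a constant vector vanishes; this is the algebraic fact underlying diffusive (difference-based) coupling below.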
A path is a sequence of distinct nodes v_1, v_2, ..., v_n such that {v_i, v_{i+1}} ∈ E for all i. A cycle is the union of a path v_1, ..., v_n with the edge {v_1, v_n}. A cycle v_1, v_2, ..., v_{n−1}, v_1 is called simple if v_i ≠ v_j for all i ≠ j. A collection of paths is called vertex-disjoint if no two share a node, except possibly for their first and last nodes. Furthermore, for a linear map T: U → V between vector spaces, we denote the kernel of T by ker T.

II. NETWORK OPTIMIZATION AND MEIP MULTI-AGENT SYSTEMS

The role of network optimization theory in cooperative control was introduced in [25], and was used in [27], [28] to solve the synthesis problem for MAS. In this section, we review the main results we need from [25], [27], [28].

A. Diffusively-Coupled Networks and Their Steady-States

Consider a collection of SISO agents interacting over a network G = (V, E). The agents {Σ_i}_{i∈V} and the controllers {Π_e}_{e∈E} have the following models:

Σ_i: ẋ_i = f_i(x_i, u_i), y_i = h_i(x_i);   Π_e: η̇_e = φ_e(η_e, ζ_e), μ_e = ψ_e(η_e, ζ_e).   (1)

We consider stacked vectors of the form u = [u_1^T, ..., u_{|V|}^T]^T, and similarly for x, y, ζ, η and μ. The agents and controllers are coupled by defining the controller input as ζ = E_G^T y and the control input as u = −E_G μ. This closed-loop system is called a diffusively-coupled network, and is denoted by (G, Σ, Π). Its structure is illustrated in Figure 1.

Fig. 1. Block-diagram of the closed loop.

We wish to study the steady-states of the closed loop. Suppose that (u, y, ζ, μ) is a steady-state of (G, Σ, Π). For every i ∈ V and e ∈ E, (u_i, y_i) is a steady-state input-output pair of the i-th agent, and (ζ_e, μ_e) is a steady-state pair of the e-th controller. This motivates the following definition, originally from [25]:

Definition 1.
The steady-state input-output relation k of a dynamical system is the collection of all steady-state input-output pairs of the system. Given a steady-state input u and a steady-state output y, we define k(u) = {y : (u, y) ∈ k} and k^{−1}(y) = {u : (u, y) ∈ k}.

Let k_i be the steady-state input-output relation of the i-th agent, γ_e be the steady-state input-output relation of the e-th controller, and k, γ be their stacked versions. Then the closed-loop steady-state (u, y, ζ, μ) has to satisfy y ∈ k(u), ζ = E_G^T y, μ ∈ γ(ζ), and u = −E_G μ. By a simple manipulation, one can show that y is a closed-loop steady-state for the agent output if and only if 0 ∈ k^{−1}(y) + E_G γ(E_G^T y) [28].

B. MEIP Systems and Closed-Loop Convergence

The convergence of the diffusively-coupled network (G, Σ, Π) can be assured using passivity. We first recall the classic definition of (shifted) passivity:

Definition 2 (Passivity [32]). Let Υ be a SISO system with input u(t), output y(t) and state x(t), and let (ū, ȳ) be a steady-state input-output pair of the system. For a differentiable function S = S(x) and a number ρ ≥ 0, we consider the inequality

d/dt S(x) ≤ −ρ ‖y(t) − ȳ‖² + (y(t) − ȳ)(u(t) − ū).

We say Υ is passive (w.r.t. (ū, ȳ)) if there exist a positive semi-definite storage function S(x) and ρ ≥ 0 such that the inequality holds for any trajectory. Also, we say the system is output-strictly passive (w.r.t. (ū, ȳ)) if the same condition holds for some ρ > 0. The largest number ρ for which the condition holds is called the (output) passivity index w.r.t. (ū, ȳ).

Passivity was first used for diffusively-coupled networks in [21]. It is known that if (u, y, ζ, μ) is an equilibrium of the network, and the agents and controllers are passive with respect to (u_i, y_i) and (ζ_e, μ_e), then the network converges to said equilibrium.
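As a minimal illustration of the steady-state condition 0 ∈ k^{−1}(y) + E_G γ(E_G^T y), consider a toy instance of our own choosing (not from the paper): single-integrator agents have k^{−1}(y) = {0} for every y, and static unit-gain controllers have γ(ζ) = ζ, so the condition reduces to E_G E_G^T y = 0, i.e., consensus.

```python
import numpy as np

# Path graph on 3 nodes, oriented edges (0, 1) and (1, 2).
E_G = np.array([[ 1.0,  0.0],
                [-1.0,  1.0],
                [ 0.0, -1.0]])

def residual(y):
    """k^{-1}(y) + E_G gamma(E_G^T y) for single-integrator agents
    (k^{-1} = 0) and static unit-gain controllers (gamma(zeta) = zeta)."""
    return E_G @ (E_G.T @ y)

print(residual(np.array([2.0, 2.0, 2.0])))  # consensus vector: zero residual
print(residual(np.array([1.0, 2.0, 3.0])))  # nonzero residual: not a steady state
```

Only consensus vectors make the residual vanish, matching the well-known steady states of a diffusively-coupled single-integrator network.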
The existence of an equilibrium for the closed-loop network can be proved using network optimization tools under certain monotonicity assumptions on the steady-state input-output relations of the agents and controllers [25], [28], namely under the following variant of passivity.

Definition 3 (Maximal Equilibrium Independent Passivity [25]). Consider the SISO dynamical system of the form

Υ: ẋ = f(x, u); y = h(x, u),   (2)

with input-output relation r. The system Υ is said to be (output-strictly) MEIP if the following conditions hold:
i) The system Υ is (output-strictly) passive with respect to any steady-state pair (ū, ȳ), i.e., with respect to any ū, ȳ such that ȳ ∈ r(ū).
ii) The steady-state input-output relation r is maximally monotone. That is, if (u_1, y_1), (u_2, y_2) ∈ r then (u_1 − u_2)(y_1 − y_2) ≥ 0, and r is not contained in any larger monotone relation [26].
The passivity index of the system Υ is defined as min_{ȳ∈r(ū)} ρ_{ū,ȳ}, where ρ_{ū,ȳ} is the passivity index with respect to (ū, ȳ).

Such systems include single integrators, gradient systems, port-Hamiltonian systems on graphs, and others (see [25], [28] for more examples). In this work we often consider networks of control-affine systems. The theorem below gives a sufficient condition for a control-affine system to be MEIP:

Theorem 1. Let Σ be a SISO system of the form ẋ = −f(x) + q(x)u, y = h(x). Suppose that q(x) is positive for all x, that h is C¹ and strictly monotone ascending, and that f/q is C¹ and monotone ascending. Then:
1) A pair (ū, ȳ) is a steady-state input-output pair of Σ if and only if there exists some x̄ ∈ R such that ū = f(x̄)/q(x̄) and ȳ = h(x̄);
2) For any x̄ ∈ R, the function S(x) = ∫_{x̄}^{x} [h(σ) − h(x̄)]/q(σ) dσ is a storage function for the steady-state input-output pair ū = f(x̄)/q(x̄), ȳ = h(x̄);
3) The function S(x) proves that Σ is passive w.r.t.
(ū, ȳ) with passivity index

ρ = inf_{x∈R} [f(x)/q(x) − f(x̄)/q(x̄)] / [h(x) − h(x̄)] ≥ 0;

4) If either lim_{|t|→∞} |f(t)/q(t)| = ∞ or lim_{|t|→∞} |h(t)| = ∞, then the system is MEIP.
5) If the derivative of h is always positive, then the inverse steady-state relation k^{−1} is differentiable.

Proof. The first, second and fourth parts are proved in [33, Proposition 1]. As for the third, we note that:

Ṡ = [(h(x) − h(x̄))/q(x)] ẋ = [(h(x) − h(x̄))/q(x)] (−f(x) + q(x)u)
  = (h(x) − ȳ)(u − ū) − (h(x) − h(x̄)) [f(x)/q(x) − f(x̄)/q(x̄)]
  ≤ (y − ȳ)(u − ū) − ρ (y − ȳ)².

We note that ρ ≥ 0, as (h(x) − h(x̄)) [f(x)/q(x) − f(x̄)/q(x̄)] ≥ 0 by monotonicity. Moreover, we note that ∫_{x̄}^{x} [h(σ) − h(x̄)]/q(σ) dσ ≥ 0, with strict inequality whenever x ≠ x̄, as h is strictly monotone and q(x) > 0. Thus S is a C¹ storage function, and we conclude the system is passive with passivity index ρ ≥ 0 with respect to the steady-state input-output pair (ū, ȳ). Lastly, we note that because h is strictly monotone, the inverse h^{−1} can be defined. Thus, the inverse steady-state relation k^{−1} is given by k^{−1}(ȳ) = f(h^{−1}(ȳ))/q(h^{−1}(ȳ)), which is differentiable by the inverse function theorem. ∎

As we previously remarked, MEIP can be used to prove existence of a closed-loop equilibrium for networks:

Theorem 2 ([25], [27]). Consider the network (G, Σ, Π). Assume the agents Σ_i are MEIP, and the controllers Π_e are output-strictly MEIP (or vice versa). Then the signals u(t), y(t), ζ(t), μ(t) of the closed-loop system converge to steady-state values u, y, ζ, μ, where 0 ∈ k^{−1}(y) + E_G γ(E_G^T y).

C. The Synthesis Problem for MEIP Multi-Agent Systems

The synthesis problem of MAS with MEIP agents has been studied in [27], [28].
The problem deals with synthesizing controllers {Π_e} forcing the closed-loop network to converge to some desired steady-state output y*, when the agents Σ and the graph G are known. We cite the following result from [28]:

Theorem 3 ([28]). Let Σ be any MEIP agents and let G be any graph. Let y* ∈ R^{|V|} be any desired steady-state output. Then there exists a solution to the synthesis problem (i.e., a realization of the controllers Π) with desired output y* for which the controllers are output-strictly MEIP.

Remark 1. The paper [28] depicts many possible solutions to the synthesis problem with output-strictly MEIP controllers. It is shown that one can always solve the problem using affine controllers. Another suggested solution is an augmentation of any preferred output-strictly MEIP controller with a constant exogenous input. In practice, we will usually opt for the augmentation procedure when using the theorem as a tool for synthesis, as many real-world networks are already equipped with some given edge controllers. If this is not the case, one can use the affine controllers instead.

III. PROBLEM FORMULATION

This section presents the problem we aim to solve, and states the assumptions we make to tackle it. We consider a diffusively-coupled network of the form N_G = (G, {Σ_i}_{i∈V}, {Π_e}_{e∈E_G}), where G = (V, E_G) is the interaction graph, Σ_i are the agents, and Π_e are the edge controllers. For any subgraph H = (V, E_H) of G, we can consider another diffusively-coupled network N_H = (H, {Σ_i}_{i∈V}, {Π_e}_{e∈E_H}). We can think of N_H as a faulty version of N_G, in which the edge controllers corresponding to the edges E_G \ E_H have malfunctioned and stopped working. Edges can fault mid-run, but we assume that once an edge has malfunctioned, it remains faulty for the remainder of the run.
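To see how a single failed link shifts the steady state, consider an illustrative linear instance of our own (not a construction from the paper): agents ẋ_i = −x_i + b_i + u_i, y_i = x_i, so k_i^{−1}(y_i) = y_i − b_i, with static unit-gain edge controllers. The steady-state output of N_H then solves (I + E_H E_H^T) y = b, which depends on the edge set E_H.

```python
import numpy as np

# Heterogeneous agent offsets b_i (illustrative values).
b = np.array([1.0, 0.0, -1.0])

E_G = np.array([[ 1.0,  1.0,  0.0],
                [-1.0,  0.0,  1.0],
                [ 0.0, -1.0, -1.0]])   # triangle: edges (0,1), (0,2), (1,2)
E_H = E_G[:, :2]                       # faulty version: edge (1,2) is lost

def steady_state(E):
    # Solve k^{-1}(y) + E gamma(E^T y) = 0, i.e. (y - b) + E E^T y = 0.
    return np.linalg.solve(np.eye(3) + E @ E.T, b)

y_nominal = steady_state(E_G)
y_faulty = steady_state(E_H)
print(y_nominal, y_faulty)   # the two limits differ, so the fault is visible in y
```

The faulty network still converges, but to a different output; this gap between the two limits is exactly what the fault monitoring systems below exploit.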
If we let 𝒢 denote the collection of all nonempty subgraphs of G, then one can think of the closed-loop diffusively-coupled network as a switched system, where the switching signal ς: [0, ∞) → 𝒢 designates the functioning edges at each time instant. The assumption that faulty edges remain faulty throughout the run can be described using the switching signal ς. Namely, we require that the switching signal ς is non-increasing, in the sense that for all times t_1 < t_2, ς(t_2) is a subgraph of ς(t_1). We denote the number of faulty edges at time t by ς̂(t).

Now, consider a collection of agents {Σ_i} and a graph G. Fix some constant vector y* ∈ R^{|V|}. Our goal is to design a control scheme for which the closed-loop network will converge to the steady-state output y*. In the absence of faults, we can solve the synthesis problem as in Theorem 3. However, designing controllers while ignoring faults might prevent the system from achieving the control goal. For that reason, we also seek a fault monitoring system, consisting of the agents and networked controllers, that attempts to identify aberrant behavior. When it does, it declares a fault [34]. We now formulate the problems of network fault detection and isolation:

Problem 1 (Network Fault Detection). Let {Σ_i}_{i∈V} be a set of agents, G be a graph, y* be any desired steady-state output, and let ς(t) be any non-increasing switching signal. Find edge controllers {Π_e}_{e∈E_G} and a fault monitoring system such that:
i) if no faults occur, i.e. ς(t) = G for all t, then the closed-loop diffusively-coupled network converges to the steady-state output y*, and the fault monitoring system never declares a fault;
ii) if faults do occur, i.e. ς(t) ≠ G for some t, then the fault monitoring system declares a fault.

Problem 2 (Network Fault Detection and Isolation). Let {Σ_i}_{i∈V} be a set of agents, G be a graph, and y* be any desired steady-state output.
Given some r < |E_G|, find a synthesis for the edge controllers such that for any monotone non-increasing switching signal ς with ς̂(t) ≤ r for all t, the closed-loop diffusively-coupled network converges to the steady-state output y*, i.e., the effect of up to r faults can be isolated from the network.¹

A. Assumptions

We now state the assumptions used throughout the work. For the remainder of this work, we fix the agents {Σ_i}, and make the following assumption.

Assumption 1. The agent dynamics {Σ_i} are MEIP, and the chosen controller dynamics {Π_e} are output-strictly MEIP (or vice versa). Moreover, the relations k_i^{−1}, γ_e are C¹ functions. Furthermore, the derivative dk_i^{−1}/dy_i is positive at any y_i ∈ R.

The passivity assumption assures that all the systems N_H will globally asymptotically converge to some limit. The added smoothness assumptions, together with the positive derivative assumption, are technical assumptions needed to apply tools from manifold theory that will be used later. The passivity assumption allows the consideration of, e.g., port-Hamiltonian systems and gradient-descent systems [25]. Moreover, if a system satisfies any dissipation inequality with respect to all equilibria, one can use output feedback and input feedthrough to force MEIP [35]. Theorem 1 shows that the smoothness assumption holds for many control-affine systems. Moreover, it can be easily shown using the definition of passivity that if Σ_i is output-strictly MEIP with passivity index ρ, then dk_i^{−1}/dy_i ≥ ρ > 0 whenever k_i^{−1} is differentiable. Furthermore, the smoothness assumption can be relaxed by allowing k_i^{−1}, γ_e to not be differentiable at finitely many points. The arguments presented below still hold, but require heavier tools from measure theory, so we avoid them for clarity of presentation.

In some cases, we need to sense the state of the system, including the state of the controllers.
¹ Some authors refer to fault isolation in this case as identifying the faulty links, which is achieved by the algorithms in Subsection V-C as a side effect.

Sometimes, the control model is such that the controller state has a physical meaning that can be measured even for non-connected agents. For example, in the traffic control model in [36], the state η_e is the relative position between two vehicles. However, the controller state of some systems might not have a physical meaning. For example, consider a collection of robots trying to synchronize their positions, where the output y(t) is the position of each robot and the edge controllers are PI controllers. In that case, the controller state η(t) has no physical meaning, and thus cannot be defined for non-connected agents. Some of the techniques developed later require us to be able to sense the state of the system, including the controllers' states. Thus, we will sometimes make the following assumption:

Assumption 2. The controllers Π_e are static nonlinearities given by the functions g_e, i.e., μ_e(t) = g_e(ζ_e(t)) for all t. In this case, the steady-state relation γ_e is equal to the function g_e, and the closed-loop system is ẋ = f(x, −E_G g(E_G^T h(x))), or equivalently, ẋ_i = f_i(x_i, Σ_{e={i,j}} g_e(h_j(x_j) − h_i(x_i))).

In one of the methods below, we will want to have a clear relationship between the measurements h_i(x_i) and the storage functions S_i(x_i). To achieve this, we follow Theorem 1 and assume that the agents are control-affine:

Assumption 3. Assumption 2 holds, and the agents have the form ẋ_i = −f_i(x_i) + q_i(x_i)u_i; y_i = h_i(x_i). Thus, the closed-loop system is governed by:

ẋ_i = −f_i(x_i) + q_i(x_i) Σ_{e={i,j}} g_e(h_j(x_j) − h_i(x_i)).   (3)

It should be noted that the MEIP property for the static controllers g_e reduces to monotonicity of the functions g_e.
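The closed loop (3) can be simulated directly. The sketch below uses illustrative choices of our own that satisfy the hypotheses of Theorem 1 and the monotonicity requirement on g_e: f_i(x) = x, q_i(x) = 1, h_i(x) = x, and static controllers g_e = tanh.

```python
import numpy as np
from scipy.integrate import solve_ivp

E_G = np.array([[ 1.0,  0.0],
                [-1.0,  1.0],
                [ 0.0, -1.0]])   # path graph on 3 nodes

def closed_loop(t, x):
    # Equation (3) with f_i(x) = x, q_i(x) = 1, h_i(x) = x, g_e = tanh.
    # Stacked form: xdot = -x - E_G tanh(E_G^T x), since tanh is odd.
    return -x - E_G @ np.tanh(E_G.T @ x)

sol = solve_ivp(closed_loop, (0.0, 30.0), [3.0, -1.0, 0.5], rtol=1e-8)
print(sol.y[:, -1])   # converges to the origin, the unique equilibrium here
```

Here V(x) = ‖x‖²/2 gives V̇ ≤ −‖x‖² because ζ^T tanh(ζ) ≥ 0, so the origin is globally asymptotically stable, consistent with the MEIP convergence theory above.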
In the next section, we start heading toward a solution to Problems 1 and 2. We do so by exhibiting a method for asymptotically differentiating between the nominal dynamical system N_G and the faulty dynamical systems N_H. Later, we show how this asymptotic differentiation can induce a finite-time differentiation of the systems.

IV. ASYMPTOTIC DIFFERENTIATION BETWEEN NETWORKS

In this section, we develop the notion of edge-indication vectors, which will be used for network fault detection later. The notion of indication vectors was first developed in [14]. These are constant exogenous inputs used to drive the closed-loop system, chosen appropriately to give different steady-state limits for systems with identical agents and controllers, but different underlying graphs. The idea of using constant exogenous inputs to drive the system into favorable steady-state outputs was also used in [29] to give a network reconstruction algorithm with optimal time complexity, although it considers sets of multiple constant exogenous inputs applied in succession. Here, we opt for a slightly different strategy. In [14], [29], the problem of network reconstruction was considered, in which we cannot affect the agents, controllers, or the underlying graph. In network FDI, we are doing synthesis, so we can manipulate the controllers and (in most cases) the underlying network. For that reason, we opt for a slightly different idea, in which we add a constant exogenous signal to the output of the controllers; that is, we consider u(t) = −E_G(μ(t) + w). A system implementing this control law is said to have the interaction protocol (Π, w). Analogously to the notion of indication vectors, we desire that networks with identical agents and controllers, but different underlying graphs, be forced to converge to different steady-state outputs.
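This desideratum can be checked numerically in a linear toy instance of our own (k^{−1}(y) = y, γ(ζ) = ζ; not a construction from the paper), where the steady-state equation for the subgraph H with exogenous bias becomes the uniquely solvable linear system (I + E_H E_H^T) y = −E_H P_H w, with P_H the matrix keeping only the entries of w on edges of H:

```python
import numpy as np

E_G = np.array([[ 1.0,  1.0,  0.0],
                [-1.0,  0.0,  1.0],
                [ 0.0, -1.0, -1.0]])   # triangle: edges (0,1), (0,2), (1,2)

def P_H(kept_edges, n_edges=3):
    """Row-selection matrix removing entries of edges absent from H."""
    return np.eye(n_edges)[kept_edges, :]

def steady_state(kept_edges, w):
    # Linear illustration: k^{-1}(y) = y and gamma(zeta) = zeta give
    # (I + E_H E_H^T) y = -E_H P_H w, which has a unique solution.
    E_H = E_G[:, kept_edges]
    return np.linalg.solve(np.eye(3) + E_H @ E_H.T,
                           -E_H @ (P_H(kept_edges) @ w))

w = np.random.default_rng(0).standard_normal(3)   # random bias, one per edge of G
y_nominal = steady_state([0, 1, 2], w)            # full graph G
y_faulty = steady_state([0, 1], w)                # edge (1,2) has failed
print(np.linalg.norm(y_nominal - y_faulty))       # nonzero: the limits differ
```

A randomly drawn w separates the two steady states here, previewing the randomization results proved below.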
Indeed, we can then monitor the output y of the system and use it to detect changes in the underlying graph, i.e., network faults. For that, we first determine the steady-state limit of these systems (G, Σ, (Π, w)).

Proposition 1. Consider a diffusively-coupled network N_H = (H, Σ, Π) satisfying Assumption 1. Suppose that w ∈ R^{|E_H|} is any constant signal added to the controller output, i.e., the loop is closed as u(t) = −E_H(μ(t) + w). Then y is a steady-state output of the closed-loop system if and only if

k^{−1}(y) + E_H γ(E_H^T y) = −E_H w.   (4)

Proof. Follows from the discussion after Definition 1, as the new steady-state relation for the controllers is given as γ̃(ζ) = γ(ζ) + w. ∎

In our case, the constant signal w will be in R^{|E_G|}, as we determine the exogenous controller output on each edge of G. If one then considers the system N_H for some H ∈ 𝒢, the exogenous controller output will differ from w, as it will only have the entries of w corresponding to edges in H. To formulate this, take any graph H ∈ 𝒢, and let P_H be the linear map R^{|E_G|} → R^{|E_H|} removing the entries corresponding to edges absent from H. In other words, this is a matrix in R^{|E_H|×|E_G|} with entries in {0, 1}, whose rows are the rows of the identity matrix Id ∈ R^{|E_G|×|E_G|} corresponding to the edges of H. We can now define the notion of edge-indication vectors.

Definition 4. Let (G, Σ, Π) be a network satisfying Assumption 1. Let w ∈ R^{|E_G|} be any vector, and for any graph H ∈ 𝒢, denote by y_H the solution of (4) with underlying graph H and exogenous input P_H w.
• The vector w is called a (G, H)-edge-indication vector if for any H′ ∈ 𝒢 such that H′ ≠ H, we have y_H ≠ y_{H′}.
• The vector w is called a 𝒢-edge-indication vector if for any two graphs H_1 ≠ H_2 in 𝒢, y_{H_1} ≠ y_{H_2}.

Note 1. An edge-indication vector is a bias chosen on each edge in G.
This bias can be programmed into the controllers and nodes, and need not be changed nor computed on-line. In this light, for any w ∈ R^{|E_G|}, (4) transforms into

k^{−1}(y) + E_H γ(E_H^T y) = −E_H P_H w.   (5)

We wish to find a 𝒢-edge-indication vector for given agents and controllers, or at least a (G, G)-edge-indication vector. As in [14], we use randomization. We claim that random vectors are 𝒢-edge-indication vectors with probability 1.

Theorem 4. Let P be any absolutely continuous² probability measure on R^{|E_G|}. Let w be a vector sampled according to P. Then P(w is a 𝒢-edge-indication vector) = 1.

² Unless stated otherwise, absolute continuity is with respect to the Lebesgue measure.

Proof. From the definition, w is not a 𝒢-edge-indication vector if and only if there are two graphs G_1, G_2 ∈ 𝒢 such that the same vector y solves equation (5) for both graphs. We show that for any G_1, G_2 ∈ 𝒢, the probability that the two equations share a solution is zero. Let n be the number of vertices in G. For each graph H ∈ 𝒢, define a function F_H: R^n × R^{|E_G|} → R^n by F_H(y, w) = k^{−1}(y) + E_H γ(E_H^T y) + E_H P_H w. The set of steady-state exogenous input and output pairs for the system N_H is given by the set A_H = {(y, w) : F_H(y, w) = 0}. We note that the differential dF_H always has rank n. Indeed, it can be written as [∇k^{−1}(y) + E_H ∇γ(E_H^T y), E_H P_H], where ∇γ(E_H^T y) ∈ R^{|E_H|×n}. By Assumption 1, the first matrix, of size n × n, is positive-definite as a sum of a positive-definite matrix and a positive semi-definite matrix, hence invertible. Thus, by the implicit function theorem, A_H is a manifold of dimension |E_G|. Moreover, by Assumption 1, for any w there is a unique y such that (4) is satisfied. Thus, P gives rise to an absolutely continuous³ probability measure on each manifold A_H.⁴
Hence, it is enough to show that for any G_1 ≠ G_2, the intersection A_{G_1} ∩ A_{G_2} has dimension ≤ |E_G| − 1. To show this, we take any point (y, w) ∈ A_{G_1} ∩ A_{G_2}. As both A_{G_1} and A_{G_2} are of dimension |E_G|, it is enough to show that they do not have the same tangent space at (y, w). The tangent space of the manifold A_H is given by the kernel of the differential dF_H(y, w): R^n × R^{|E_G|} → R^n, so we show that if G_1 ≠ G_2, the kernels ker dF_{G_1} and ker dF_{G_2} are different at (y, w). As G_1 ≠ G_2, we can find an edge existing in one of the graphs and not the other. Assume without loss of generality that the edge e exists in G_1 but not in G_2, and let v = (0, 1_e), where 1_e is the vector in R^{|E_G|} with all entries zero, except for the e-th entry, which is equal to 1. Then v ∈ ker dF_H if and only if 1_e ∈ ker(E_H P_H). It is clear that 1_e ∉ ker(E_{G_1} P_{G_1}), as P_{G_1} 1_e = 1_e, and thus E_{G_1} P_{G_1} 1_e = E_{G_1} 1_e ≠ 0. Moreover, 1_e ∈ ker(E_{G_2} P_{G_2}), as P_{G_2} 1_e = 0, so ker dF_{G_1} ≠ ker dF_{G_2} at (y, w). Thus A_{G_1} ∩ A_{G_2} is of dimension ≤ |E_G| − 1, meaning that it is a zero-measure set inside both A_{G_1} and A_{G_2}. ∎

Theorem 4 presents a way to choose a 𝒢-edge-indication vector, but does not deal with the control goal. One could satisfy the control goal by using Theorem 3 to solve the synthesis problem for the original graph G, but we cannot assure we get an edge-indication vector. Note that any w ∈ ker E_G P_G gives a solution of (5) identical to the solution for w = 0. Thus, choosing an exogenous control input in ker E_G P_G does not change the steady-state output of the system N_G. However, it does change the steady-state output of all other systems N_H. This suggests searching for an edge-indication vector in ker E_G P_G. We show that this is possible if G is "sufficiently connected", defined below in an exact manner.

Proposition 2 (Menger's Theorem [31]).
Let G be any connected graph. The following conditions are equivalent:
1) Between every two nodes there are k vertex-disjoint simple paths.
2) For any k − 1 vertices v_1, ..., v_{k−1} ∈ V, the graph G − {v_1, ..., v_{k−1}} is connected.
Graphs satisfying either of these conditions are called k-connected graphs.

³ With respect to the |E_G|-dimensional Hausdorff measure, or equivalently, with respect to the standard Riemannian volume form on A_H.
⁴ As the push-forward measure of P under the map w ↦ (φ(w), w), where φ is the local map w ↦ y given by the implicit function theorem.

We will take special interest in 2-connected graphs. Specifically, we can state the following theorem about edge-indication vectors in ker E_G P_G.

Theorem 5. Let P be any absolutely continuous probability distribution on ker E_H P_H, where H is 2-connected. Suppose furthermore that w is a vector sampled according to P. Then P(w is a (G, H)-edge-indication vector) = 1.

We first need to state and prove a lemma:

Lemma 1. Let H be a 2-connected graph. Suppose we color the edges of H in two colors, red and blue. If not all edges have the same color, then there is a simple cycle in H with both red and blue edges.

Proof. Suppose, heading toward contradiction, that every simple cycle in H is monochromatic. We claim that for each vertex x, all the edges touching x have the same color. Indeed, take any vertex x, and suppose that there are two neighbors v_1, v_2 of x such that the edge {x, v_1} is blue and the edge {x, v_2} is red. We note that v_1 → x → v_2 is a path from v_1 to v_2, so by 2-connectedness there is another path from v_1 to v_2 which does not pass through x. Adding both edges to that path yields a simple cycle with edges of both colors, as {x, v_1} is blue and {x, v_2} is red, a contradiction. Thus, every node touches edges of a single color.
Let $V_{\mathrm{red}}$ be the set of nodes touching red edges, and $V_{\mathrm{blue}}$ be the set of nodes touching blue edges. We know that $V_{\mathrm{red}}$ and $V_{\mathrm{blue}}$ do not intersect. Moreover, if there were an edge between $V_{\mathrm{red}}$ and $V_{\mathrm{blue}}$, it would have some color; assume, without loss of generality, that it is blue. That would mean some vertex in $V_{\mathrm{red}}$ touches a blue edge, which is impossible. Thus there are no edges between $V_{\mathrm{red}}$ and $V_{\mathrm{blue}}$. By assumption, there is at least one edge of each color in the graph, meaning that both sets are nonempty. Thus we have decomposed the vertex set of $H$ into two disjoint, disconnected sets. As $H$ is a connected graph, we arrive at a contradiction, completing the proof.

We can now prove Theorem 5.

Proof. We denote $m_1 = \dim \ker E_H P_H$. The proof is similar to the proof of Theorem 4. We again define functions $F_{G_1}$ for graphs $G_1 \in \mathcal{G}_n$ as
$$F_{G_1}(y, w) = k^{-1}(y) + E_{G_1}\, g(E_{G_1}^\top y) + E_{G_1} P_{G_1} w,$$
but this time we consider the function $F_{G_1}$ as defined on the space $\ker E_H P_H \subset \mathbb{R}^{|E_G|}$. As before, we define $A_{G_1} = \{(y, w) : F_{G_1}(y, w) = 0\}$ and use the implicit function theorem to show that the sets $A_{G_1}$ are all manifolds, but their dimension this time is $m_1 = \dim \ker E_H P_H$. This time, we want to show that if $H \neq G_1$, then $A_H \cap A_{G_1}$ is an embedded submanifold of dimension $\leq m_1 - 1$, as we want to show that (with probability 1) the solutions of (5) with graph $G_1$ and graph $H$ are different. As before, it is enough to show that if $(y, w) \in A_{G_1} \cap A_H$, then the kernels $\ker dF_{G_1}$ and $\ker dF_H$ are different at $(y, w)$. We compute that for any graph $G_1$,
$$dF_{G_1} = \left[\nabla k^{-1}(y) + E_{G_1} \nabla \gamma(E_{G_1}^\top y),\ (E_{G_1} P_{G_1})|_{\ker E_H P_H}\right],$$
where $\cdot|_{\ker E_H P_H}$ is the restriction of the matrix to $\ker E_H P_H$. Thus, if $G_1$ is any graph in $\mathcal{G}$ which is not a subgraph of $H$, it contains an edge $e$ absent from $H$.
Following the proof of Theorem 4 word-by-word, noting that $\mathbb{1}_e \in \ker E_H P_H$, we conclude that the kernels $\ker dF_{G_1}$ and $\ker dF_H$ are different at $(y, w)$. Thus we may restrict ourselves to non-empty subgraphs $G_1$ of $H$. For any collection $E$ of edges in $E_H$, we consider $v = (0, \mathbb{1}_E)$, where $\mathbb{1}_E$ is equal to $\sum_{e \in E} \mathbb{1}_e$. If $E$ is the set of edges of a cycle in $H$, then the vector $v$ lies in the kernel of $dF_H$. We show that there is some cycle in $H$ such that $v$ does not lie in the kernel of $dF_{G_1}$, completing the proof. The graph $G_1$ defines a coloring of the graph $H$: edges in $G_1$ are colored blue, whereas edges absent from $G_1$ are colored red. Because $G_1$ is a non-empty proper subgraph of $H$, this coloring contains both red and blue edges. By the lemma, there is a simple cycle in $H$ having both red and blue edges. Let $E$ be the set of the edges traversed by the cycle. We claim that $E_{G_1} P_{G_1} \mathbb{1}_E \neq 0$, which will complete the proof of the theorem. Indeed, because the simple cycle contains both red and blue edges, we can find a vertex touching both a red edge of the cycle and a blue edge of the cycle. We let $v$ be that vertex, and let $e_1, e_2$ be the corresponding blue and red edges. Recalling that the cycle is simple, these are the only cycle edges touching $v$. However, by the coloring, $e_1$ is in $G_1$, but $e_2$ is not. Thus,
$$(E_{G_1} P_{G_1} \mathbb{1}_E)_v = (E_{G_1})_{v e_1} (P_{G_1})_{e_1 e_1} + (E_{G_1})_{v e_2} (P_{G_1})_{e_2 e_2} = (E_{G_1})_{v e_1} = \pm 1 \neq 0,$$
and in particular, $\mathbb{1}_E \notin \ker E_{G_1} P_{G_1}$.

V. MONITORING NETWORK FAULTS

In this section, we consider two applications of the developed framework, namely network fault detection and isolation, and defense strategies for adversarial games over networks. We first present a simple algorithm for network fault detection. Then, we discuss defense strategies for adversarial games over networks, which will require a bit more effort.
Lastly, we exhibit a network fault isolation protocol, which combines the previous two algorithms and is demonstrated in a case study in Section VII. In order to apply the framework of edge-indication vectors, we need an algorithm elevating the asymptotic differentiation achieved in the previous section to an on-line differentiation scheme. Thus, we make the following assumption:

Assumption 4. There exists an algorithm $\mathcal{A}$ which receives a model for a diffusively-coupled network $(G, \Sigma, \Pi)$ and a conjectured limit $y^\star$ as input, and takes measurements of the network in-run. The algorithm stops and declares "no" if and only if the network does not converge to $y^\star$, and otherwise runs indefinitely. The algorithm $\mathcal{A}$ is called a convergence assertion algorithm.

In the language of Section III, this is a fault-monitoring system that never gives false positives nor false negatives. For now, we assume such an algorithm exists; we will discuss this assumption in Section VI.

A. Detecting Network Faults

We first focus on Problem 1. To tackle the problem, we use the notion of edge-indication vectors from Section IV. Suppose we have MEIP agents $\{\Sigma_i\}$. We first take any output-strictly MEIP controllers $\{\Pi_e\}$ solving the classical synthesis problem, i.e., forcing the closed-loop system to converge to $y^\star$ (see Theorem 3). As we noted, if $w \in \mathbb{R}^{|E_G|}$ lies in the kernel of $E_G P_G$, then the following two equations have the same solution:
$$k^{-1}(y) + E_G \gamma(E_G^\top y) = -E_G P_G w, \qquad k^{-1}(y) + E_G \gamma(E_G^\top y) = 0.$$
Thus, if $w$ lies in $\ker(E_G P_G)$, running the interaction protocol $(\Pi, w)$ does not change the steady-state output of the system. However, by Theorem 5, a random vector in $\ker(E_G P_G)$ gives a $(G, G)$-edge-indication vector, as long as $G$ is 2-connected.
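For the nominal graph $G$ itself, $P_G$ acts as the identity on its own edge set, so $\ker E_G P_G$ reduces to $\ker E_G$, the cycle space of $G$. Sampling a random $w$ in this space can therefore be sketched in pure Python using a fundamental-cycle basis built from a spanning tree (a minimal sketch; the orientation convention, helper names, and example graph are illustrative assumptions, not the paper's notation):

```python
import random

def incidence(n, edges):
    """Oriented incidence matrix E_G (n x m): edge (u, v) gets +1 at u, -1 at v."""
    E = [[0] * len(edges) for _ in range(n)]
    for j, (u, v) in enumerate(edges):
        E[u][j] = 1
        E[v][j] = -1
    return E

def cycle_space_basis(n, edges):
    """Fundamental-cycle basis of ker E_G: one signed cycle vector per non-tree edge."""
    adj = {u: [] for u in range(n)}
    for j, (u, v) in enumerate(edges):
        adj[u].append((v, j, +1))
        adj[v].append((u, j, -1))
    parent = {0: None}          # node -> (parent, edge index, sign of parent-to-child traversal)
    order = [0]
    for u in order:             # BFS building a spanning tree
        for v, j, s in adj[u]:
            if v not in parent:
                parent[v] = (u, j, s)
                order.append(v)
    tree = {parent[v][1] for v in parent if parent[v] is not None}
    basis = []
    for j, (u, v) in enumerate(edges):
        if j in tree:
            continue
        # Non-tree edge (u, v) closes a cycle: +1 on the edge itself, plus the
        # signed tree path from v back to u (computed via walks to the root).
        b = [0] * len(edges)
        b[j] = 1
        for node, sgn in ((u, +1), (v, -1)):
            while parent[node] is not None:
                p, je, s = parent[node]
                b[je] += sgn * s
                node = p
        basis.append(b)
    return basis

# Example: a 4-cycle with a chord (2-connected), so ker E_G is 2-dimensional.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
basis = cycle_space_basis(n, edges)
alpha = [random.gauss(0, 1) for _ in basis]
w = [sum(a * b[j] for a, b in zip(alpha, basis)) for j in range(len(edges))]
E = incidence(n, edges)
residual = [sum(E[i][j] * w[j] for j in range(len(edges))) for i in range(n)]
assert all(abs(r) < 1e-9 for r in residual)  # w lies in ker E_G
```

The Gaussian combination of basis vectors gives an absolutely continuous distribution on the kernel, which is exactly what Theorem 5 requires.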
Thus, using the interaction protocol $(\Pi, w)$, where $w \in \ker E_G P_G$ is chosen randomly, guarantees that every faulty system converges to a steady-state different from $y^\star$, with probability 1. Applying the algorithm $\mathcal{A}$ then allows an on-line, finite-time distinction between the nominal faultless system and its faulty versions. We explicitly write the prescribed algorithm below:

Algorithm 1 Network Fault Detection in MEIP MAS
1: Find edge controllers $\{\Pi_e\}_{e \in E_G}$ solving the synthesis problem with graph $G$, agents $\Sigma$, and control goal $y^\star$ (see Theorem 3 and Remark 1).
2: Find a basis $\{b_1, \ldots, b_k\}$ for the linear space $\ker E_G P_G$.
3: Pick a Gaussian vector $\alpha \in \mathbb{R}^k$ and define $w = \sum_{i=1}^k \alpha_i b_i$.
4: Define the interaction protocol as $(\Pi, w)$.
5: Run the system with the chosen interaction protocol.
6: Implement the algorithm $\mathcal{A}$ for the system $(G, \Sigma, (\Pi, w))$ with limit $y^\star$. Declare a fault in the network if $\mathcal{A}$ declares that the system does not converge to the prescribed value.

Theorem 6 (Network Fault Detection). Suppose that $n$ agents $\{\Sigma_i\}$ and an underlying graph $G$ are given, and that $\{\Sigma_i\}$ satisfy Assumption 1. Suppose furthermore that $G$ is 2-connected. Then, with probability 1, Algorithm 1 synthesizes an interaction protocol $(\Pi, w)$ solving Problem 1, i.e., the algorithm satisfies the following properties:
i) If no faults occur in the network, the output of the closed-loop system converges to $y^\star$.
ii) If any number of faults occur in the network, the algorithm detects their existence.

Proof. Follows from the discussion preceding Algorithm 1. Namely, Theorem 5 assures that $w$ is a $(G, G)$-edge-indication vector, so long as $G$ is 2-connected. In other words, the output of the closed-loop system with graph $G$ converges to $y^\star$, and for any graph $H \in \mathcal{G}$ with $H \neq G$, the output of the closed-loop system with graph $H$ converges to a value different from $y^\star$.
It remains to show that the algorithm declares a fault if and only if a fault occurs. If no faults occur, $\mathcal{A}$ never declares a fault, and the same is true for Algorithm 1. On the contrary, assume any number of faults occur in the network, and let $H$ be the current underlying graph. The output of the closed-loop system converges to $y \neq y^\star$, so $\mathcal{A}$ eventually stops and declares that the network does not converge to the conjectured limit. Thus Algorithm 1 declares a fault.

Algorithm 2 Planner Strategy in Adversarial Multi-Agent Synthesis Game with MEIP Agents - Synthesis
1: Define $N = \sum_{\ell=0}^{r} \binom{m}{\ell}$. Let Graphs be an array with $N$ entries, and let $j = 1$.
2: for $\ell = 0, \cdots, r$ do
3:   for $1 \leq i_1 < i_2 < \cdots < i_\ell \leq m$ do
4:     Insert the graph $H = G - \{e_{i_1}, \cdots, e_{i_\ell}\}$ into the $j$-th entry of Graphs. Advance $j$ by 1.
5:   end for
6: end for
7: Define two arrays Controllers, SSLimits of length $N$.
8: Choose $w$ as a Gaussian random vector of length $m$.
9: for $j = 1, \cdots, N$ do
10:   Take any edge controllers $\{\Pi_e\}_{e \in E_G}$ satisfying Assumption 1.
11:   Compute the steady-state limit of the network with agents $\Sigma$, underlying graph Graphs(j), and interaction protocol $(\Pi, w)$. Insert the result into SSLimits(j).
12:   Solve the synthesis problem for agents $\Sigma$ and underlying graph Graphs(j) as in Theorem 3 and Remark 1. Insert the result into Controllers(j).
13: end for

B. Multi-Agent Synthesis in the Presence of an Adversary

Consider the following 2-player game. Both players are given the same $n$ SISO agents $\Sigma_1, \cdots, \Sigma_n$, the same graph $G$ on $n$ vertices and $m$ edges, and the same vector $y^\star \in \mathbb{R}^n$. There is also a server that can measure the state of the agents at certain intervals, and broadcast a single message to all agents once. The planner acts first, and designs a control scheme for the network and the server. The adversary acts second, removing at most $r$ edges from $G$. The system is then run.
The planner wins if the closed-loop system converges to $y^\star$, and the adversary wins otherwise. We show that the planner can always win by using a strategy stemming from edge-indication vectors, assuming the agents are MEIP. Namely, consider the following strategy. Take all $\sum_{\ell=0}^{r} \binom{m}{\ell}$ possible underlying graphs. For each graph, the planner solves the synthesis problem as in Theorem 3. If the planner finds out that the adversary changed the underlying graph to $H$, he can notify the agents of that fact (through the server), and have them run the protocol solving the synthesis problem for $H$. Thus the planner needs a way to identify the underlying graph after the adversary has acted, without using the server's broadcast. This can be done by running the system with a $G$-edge-indication vector, and using the server to identify the network's steady-state. Namely, consider Algorithms 2, 3 and 4, detailing the synthesis procedure and in-run protocol for the planner. We prove:

Theorem 7. Consider the game above. With probability 1, Algorithms 2, 3 and 4 describe a winning strategy for the planner. Moreover, if $r$ is independent of $n$ (i.e., $r = O(1)$), the synthesis algorithm has polynomial time complexity. Otherwise, the time complexity is $O(n^{cr})$ for some universal constant $c > 0$. Furthermore, the size of the message broadcasted by the server is $O(r \log n)$.

Proof. Suppose the adversary changed the underlying graph to $H$, which has entry $j$ in Graphs. We note that Assumption 4 assures that $\mathcal{A}$ never declares a fault if and only if the closed-loop system converges to the conjectured steady-state, and that $w$ is a $G$-edge-indication vector by Theorem 4. Thus, the $j$-th instance of the convergence assertion protocol can never return a fault, and all other instances must eventually declare a fault. Thus, the server correctly identifies the underlying graph.
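The bookkeeping behind this strategy — the Graphs array of candidate post-fault topologies and the $O(r \log n)$ broadcast size — can be sketched as follows (a minimal sketch; the edge list and the value of $r$ are illustrative):

```python
from itertools import combinations
from math import comb, ceil, log2

def candidate_graphs(edges, r):
    """All subgraphs obtained by deleting at most r edges from G (the Graphs array)."""
    graphs = []
    for l in range(r + 1):
        for removed in combinations(range(len(edges)), l):
            graphs.append([e for j, e in enumerate(edges) if j not in removed])
    return graphs

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # m = 5 edges
r = 2
graphs = candidate_graphs(edges, r)
N = sum(comb(len(edges), l) for l in range(r + 1))  # N = 1 + 5 + 10 = 16
assert len(graphs) == N

# The server's single broadcast is an index in {1, ..., N},
# i.e., ceil(log2 N) = O(r log n) bits.
bits = ceil(log2(N))
assert bits == 4
```

The first entry of the array is the faultless graph $G$ itself ($\ell = 0$), matching the ordering used in Algorithm 2.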
It then broadcasts the index $j$ to the agents, allowing them to change the interaction protocol and run the solution of the synthesis problem with desired output $y^\star$ and underlying graph $H$. Thus the network converges to $y^\star$, and the planner wins.

We now move to time complexity. Note that $N = O(m^r)$. The first for-loop has $N$ iterations, each taking no more than $O(mn)$ operations (where a graph is stored in memory by its incidence matrix, which is of size $\leq m \times n$). Thus the first for-loop takes a total of $O(m^{r+1} n)$ time. The second for-loop is a bit more involved. It solves the synthesis problem for $H$ by solving an equation of the form $E_H v = v_0$ for some known vector $v_0$ and an unknown $v$. This can be done using least-squares, which takes no more than $O(\max\{m, n\}^3)$ time. As for finding the steady-state, this can be done by solving a convex network optimization problem [25, Problem (OPP)], which takes a polynomial amount of time in $n, m$ (e.g., via gradient descent). Recalling that $m \leq \binom{n}{2} = O(n^2)$, we conclude that if $r$ is bounded, the total time used is polynomial in $n$. Moreover, if $r$ is unbounded, the bottleneck is the first for-loop, which takes $O(m^{r+1} n)$ time. Plugging in $m \leq n^2$ gives a bound on the time complexity of the form $O(n^{2r+3})$, which is of the form $O(n^{cr})$ for a universal constant $c > 0$. As for the communication complexity, the server broadcasts a number between 1 and $N$, so a total of $O(\log_2 N)$ bits are needed to transmit the message. Plugging in $N = O(m^r)$ gives that the number of bits needed is $O(r \log_2 m) = O(r \log n)$.

C. Detection and Isolation of Network Faults

We now consider Problem 2, in which faults occur throughout the run, and we want to detect their existence and overcome them. This problem can be thought of as a tougher hybrid of the previous two problems: in Subsection V-A, the faults could appear throughout the run, but we only needed to detect their existence.
In Subsection V-B, all of the faults occur before the run, but we had to overcome them. Motivated by this view, we offer a hybrid solution. Ideally, the interaction protocol will have two disjoint phases: a first, "stable" phase, in which the underlying graph is known and no new faults have been found, and a second, "exploratory" phase, in which new faults have been found and the current underlying graph is not yet known. The first phase can be handled by the network fault detection Algorithm 1, as long as the current underlying graph is 2-connected. The second phase can be handled by the pre-broadcast stage of the planner strategy described in Algorithms 2, 3, and 4.

Algorithm 3 Planner Strategy - In-Run Protocol for Agents
1: Run the interaction protocol $(\Pi, w)$.
2: When a message $j$ is received, run the interaction protocol described by Controllers(j).

Algorithm 4 Planner Strategy - In-Run Protocol for Server
1: Define HasFaulted as an array of zeros of size $N$.
2: while HasFaulted has at least two null entries do
3:   Run $N$ instances of the algorithm $\mathcal{A}$ simultaneously, with conjectured steady-state limits from SSLimits.
4:   for $j = 1$ to $N$ do
5:     if the $j$-th instance declared "no" then
6:       Change the value of HasFaulted(j) to 1.
7:     end if
8:   end for
9: end while
10: Find the index $j$ such that HasFaulted(j) = 0. Broadcast the message $j$ to the agents.

The main issue that remains is what happens if the underlying graph changes again during the exploratory phase, i.e., we entered the exploratory phase with underlying graph $H_1$, but it changed to $H_2$ before we identified the graph. In the exploratory phase, we run an instance of $\mathcal{A}$ on all of the possible graphs simultaneously, until all but one instance have declared a fault. If the instance related to $H_2$ has not declared a fault yet, it will not declare a fault from now on, unless another fault occurs before the end of the exploratory phase.
If that instance has already declared a fault, we have a problem: all other instances will eventually also declare a fault, and there are two options in this case. The first option is that one instance declares a fault last, i.e., there is a time at which all but one instance have declared a fault. In this case, we identify the graph as some $H_3$. When we return to the stable phase and run the interaction protocol $(\Pi, w)$ corresponding to $H_3$, a fault will be declared and we will return to the exploratory phase. This is because $w$ was synthesized as a $(G, H_3)$-edge-indication vector, meaning the de-facto steady-state limit (with graph $H_2$) will be different from the conjectured steady-state limit (with graph $H_3$), and $\mathcal{A}$ will declare a fault. The second option is that the last few instances of the convergence assertion protocol declare a fault simultaneously, i.e., there is a time at which all instances have declared a fault; this is dealt with by restarting the exploratory phase. We obtain the synthesis algorithm and in-run protocol presented in Algorithms 5 and 6, and claim that these solve Problem 2. We prove:

Theorem 8. Let $\Sigma_1, \cdots, \Sigma_n$ be agents satisfying Assumption 1, and let $G$ be a $k$-connected graph for $k \geq 3$ on $n$ vertices and $m$ edges. Then, with probability 1, Algorithms 5 and 6, run with $r = k - 2$, solve Problem 2 for up to $r$ faults.

Proof. We refer to steps 2 to 3 of Algorithm 6 as the stable phase of the algorithm, and to steps 4 to 13 as the exploratory phase. As the number of faults is no bigger than $r = k - 2$, the underlying graph remains 2-connected throughout the run.

Algorithm 5 Synthesis for Network Fault Isolation
1: Define $N = \sum_{\ell=0}^{r} \binom{m}{\ell}$. Let Graphs be an array with $N$ entries, and let $j = 1$.
2: for $\ell = 0, \cdots, r$ do
3:   for $1 \leq i_1 < i_2 < \cdots < i_\ell \leq m$ do
4:     Insert the graph $H = G - \{e_{i_1}, \cdots, e_{i_\ell}\}$ into the $j$-th entry of Graphs. Advance $j$ by 1.
5:   end for
6: end for
7: Define two arrays IP, SSLimits of length $N$.
8: Choose $w$ as a Gaussian random vector of length $m$.
9: Choose controllers $\{\Pi_e\}_{e \in E}$ satisfying Assumption 1.
10: for $j = 1, \cdots, N$ do
11:   Run steps 1-4 of Algorithm 1. Insert the resulting interaction protocol into IP(j).
12:   Compute the steady-state limit of the closed-loop system with the interaction protocol $(\Pi, w)$. Insert the result into SSLimits(j).
13: end for

Algorithm 6 In-Run Protocol for Network Fault Isolation
1: Find the index $j$ for which Graphs(j) = $G$.
2: Command the agents to change their interaction protocol to the one described in IP(j). Define $H$ = Graphs(j).
3: Run $\mathcal{A}$ for the closed-loop system with graph $H$ and interaction protocol IP(j). Only if the algorithm declares a fault, continue to step 4.
4: Define HasFaulted as an array of zeros of size $N$.
5: Change the agents' interaction protocol to $(\Pi, w)$.
6: while HasFaulted has at least two null entries do
7:   Run $N$ instances of the convergence assertion protocol from Assumption 4 simultaneously, with conjectured limits from SSLimits.
8:   for $j = 1$ to $N$ do
9:     if the $j$-th instance has declared a fault then
10:      Change the value of HasFaulted(j) to 1.
11:    end if
12:  end for
13: end while
14: if HasFaulted has no entries equal to zero then
15:   Go to step 4.
16: end if
17: Find the index $j$ such that HasFaulted(j) = 0. Set $H$ = Graphs(j). Go to step 2.

We claim that the theorem follows from the following simple claims:
1) If we are in the stable phase, the current underlying graph is $H$ = Graphs(j), and no more faults occur throughout the run, then the closed-loop system converges to $y^\star$.
2) If we are in the stable phase, but the assumed graph $H$ = Graphs(j) is not the current underlying graph, then we will eventually move to the exploratory phase.
3) Each instance of the exploratory phase eventually ends.
4) If an instance of the exploratory phase is executed, and no more faults occur throughout the run, then it correctly identifies the current underlying graph.

We first explain why the claims hold with probability 1. Claims 1 and 2 follow from Theorem 6 with probability 1, as the underlying graph is always 2-connected. Claim 4 holds with probability 1, as follows from Theorem 7. As for Claim 3, if no faults occur throughout the instance of the exploratory phase, then it eventually ends by Claim 4. If faults do occur throughout the run, the discussion above shows that at some time, all (except possibly one) instances of the convergence assertion protocol have declared a fault. If all of them declared a fault, we start another instance of the exploratory phase; otherwise, we move to the stable phase. In either case, the instance of the exploratory phase ends, and Claim 3 is proved.

We now explain how the theorem follows from these claims. Suppose a total of $\ell \leq r$ faults occur throughout the run. Let $T < \infty$ be the time at which the last fault occurs. We look at the phase of the algorithm at times $t > T$, and show that in each case the system must converge to $y^\star$.
i) If we arrive at the stable phase and the conjectured graph $H$ is the true underlying graph, then the system converges to $y^\star$ (Claim 1).
ii) If we start an instance of the exploratory phase, it eventually ends (Claim 3) and the stable phase starts, in which the conjectured graph $H$ is the true underlying graph (Claim 4). By i), the system converges to $y^\star$.
iii) If we are in the stable phase, but the conjectured graph $H$ is different from the true underlying graph, we eventually start an exploratory phase (Claim 2). Thus, the system converges to $y^\star$ by ii).
iv) Lastly, we could be in the middle of an instance of the exploratory phase. In that case, the instance eventually ends (Claim 3), after which we either start a new instance of the exploratory phase or enter the stable phase.
In both cases, we can use i), ii), or iii) to conclude that the system must converge to $y^\star$.

Remark 2. We can use a similar protocol to isolate more complex faults. We consider the collection of subgraphs $H$ of $G$ for which there is a set of vertices of size $\leq r$, such that each edge in $G - H$ touches at least one vertex in the set. This observation allows us to offer similar network fault detection and isolation algorithms for more complex types of faults. For example, we can consider a case in which each agent communicates with all other agents by a single transceiver; if it faults, then all edges touching the corresponding vertex are removed from the graph. We can even use a hybrid fault model, in which faults correspond to certain subsets of edges touching a common vertex being removed from the graph. For example, suppose there are two distant groups of agents. Agents in the same group are close, and communicate using Bluetooth. Agents in different groups are farther apart, and communicate using Wi-Fi (or broadband cellular communication). When an agent's Bluetooth transceiver faults, all intra-group edges touching it are removed, and when its Wi-Fi transceiver faults, all inter-group edges touching it are removed.

VI. ONLINE ASSERTION OF NETWORK CONVERGENCE

In the previous section, we used the notion of edge-indication vectors, together with Assumption 4, to suggest algorithms for network fault detection and isolation. The goal of this section is to propose algorithms $\mathcal{A}$ satisfying Assumption 4. This will be achieved by using convergence estimates relying on passivity. We revisit a result from [25].

Proposition 3 ([25]). Let $(u, y, \zeta, \mu)$ be a steady-state of $(G, \Sigma, \Pi)$ of the form (1).
Suppose that the agents $\Sigma_i$, with state $x_i$, are passive with respect to $(u_i, y_i)$ with passivity index $\rho_i \geq 0$, and that the controllers $\Pi_e$, with state $\eta_e$, are passive with respect to $(\zeta_e, \mu_e)$ with passivity index $\nu_e \geq 0$. Let $S_i(x_i)$ and $W_e(\eta_e)$ be the agents' and the controllers' storage functions. Then $S(x, \eta) = \sum_{i \in V} S_i(x_i) + \sum_{e \in E} W_e(\eta_e)$ is a positive-definite $C^1$-function, which nulls only at the steady-states $(\mathbf{x}, \boldsymbol{\eta})$ corresponding to $(u_i, y_i)$ and $(\zeta_e, \mu_e)$, and satisfies the inequality
$$\frac{dS}{dt} \leq -\sum_{i \in V} \rho_i (y_i(t) - y_i)^2 - \sum_{e \in E} \nu_e (\mu_e(t) - \mu_e)^2. \tag{6}$$

Proof. The proof follows immediately from $S_i, W_e$ being positive-definite $C^1$-functions nulling only at $x_i, \eta_e$, by summing the inequalities
$$\frac{dS_i}{dt} \leq (u_i(t) - u_i)(y_i(t) - y_i) - \rho_i (y_i(t) - y_i)^2,$$
$$\frac{dW_e}{dt} \leq (\mu_e(t) - \mu_e)(\zeta_e(t) - \zeta_e) - \nu_e (\mu_e(t) - \mu_e)^2,$$
and using the equality
$$(u(t) - u)^\top (y(t) - y) = -(\mu(t) - \mu)^\top E_G^\top (y(t) - y) = -(\mu(t) - \mu)^\top (\zeta(t) - \zeta).$$

The inequality (6) can be thought of as a way to check that the system is functioning properly. Indeed, we can monitor $x$, $y$, $\eta$, and $\mu$, and check that the inequality holds; if it does not, there must have been a fault in the system. This idea has a few drawbacks, linked to one another. First, as we commented in Subsection III-A, in some networks the controller state $\eta_e(t)$ can be defined only for existing edges, so using $\eta(t)$ would require us to already know the functioning edges, which defeats the purpose. Thus, in some cases, we must use Assumption 2. Second, in practice, even if we have access to $x$, we cannot measure it continuously. Instead, we measure it at certain time intervals.
One can adapt (6) to an equivalent integral form:
$$S(x(t_{k+1}), \eta(t_{k+1})) - S(x(t_k), \eta(t_k)) \leq -\int_{t_k}^{t_{k+1}} \Big( \sum_{i \in V} \rho_i \Delta y_i(t)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t)^2 \Big)\, dt, \tag{7}$$
where $\Delta y_i = y_i(t) - y_i$ and $\Delta \mu_e = \mu_e(t) - \mu_e$. However, this gives rise to a third problem: unlike the function $S$, we cannot guarantee that the functions $(y_i(t) - y_i)^2$ and $(\mu_e(t) - \mu_e)^2$ (or their sum) are monotone. Thus, we cannot compute the integral appearing on the right-hand side of the inequality. We present two approaches to address this problem. First, we estimate the integral using high-rate sampling, by linearizing the right-hand side of (7) and bounding the error. Second, we bound the right-hand side as a function of $S$, resulting in an inequality of the form $\dot{S} \leq -F(S)$, which gives a convergence estimate.

A. Asserting Convergence Using High-Rate Sampling

Consider the inequality (7), and suppose $t_{k+1} - t_k = \Delta t_k$ is very small, so the functions $y_i(t) - y_i$ and $\mu_e(t) - \mu_e$ are roughly constant over the integration interval. More precisely, recalling that $y = h(x)$ and $\mu = \phi(\eta, E_G^\top y)$, and assuming these functions are differentiable near $x(t_k), \eta(t_k)$, we expand the right-hand side of (7) in a Taylor series:
$$\int_{t_k}^{t_{k+1}} \Big( \sum_{i \in V} \rho_i \Delta y_i(t)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t)^2 \Big)\, dt = \Big( \sum_{i \in V} \rho_i \Delta y_i(t_k)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t_k)^2 \Big) \Delta t_k + O(\Delta t_k^2). \tag{8}$$
We wish to give a more explicit bound on the $O(\Delta t_k^2)$ term. We consider the following function $G$, defined on the interval $[t_k, t_{k+1}]$ by the formula
$$G(t) = \sum_i \rho_i (y_i(t) - y_i)^2 + \sum_e \nu_e (\mu_e(t) - \mu_e)^2. \tag{9}$$
Equation (8) follows from the approximation $G(t) = G(t_k) + O(|t - t_k|)$, which holds for differentiable functions.
Using Lagrange's mean value theorem, for each $t \in [t_k, t_{k+1}]$ we find some point $s \in (t_k, t)$ such that $G(t) = G(t_k) + \frac{dG}{dt}(s)(t - t_k)$. If we manage to bound the time derivative $\frac{dG}{dt}$ on the interval $[t_k, t_{k+1}]$, we obtain a computational way to assert convergence. By the chain rule, the time derivative of $G$ is given by
$$\frac{dG}{dt} = 2\sum_{i \in V} \rho_i (y_i(t) - y_i)\,\dot{y}_i + 2\sum_{e \in E} \nu_e (\mu_e(t) - \mu_e)\,\dot{\mu}_e. \tag{10}$$
In order to compute the time derivatives of $y_i, \mu_e$, we recall that both are functions of $x$ and $\eta$, namely $y = h(x)$ and $\mu = \phi(\eta, E_G^\top y) = \phi(\eta, E_G^\top h(x))$. Thus, we have that
$$\begin{cases} \dot{y} = \nabla_x h(x(t))\,\dot{x}, \\ \dot{\mu} = \nabla_\eta \phi(\eta(t), \zeta(t))\,\dot{\eta} + \nabla_\zeta \phi(\eta(t), \zeta(t))\, E_G^\top \nabla_x h(x(t))\,\dot{x}, \end{cases} \tag{11}$$
where $\zeta(t) = E_G^\top h(x(t))$, $\dot{x} = f(x, u) = f(x, -E_G \phi(\eta, \zeta))$, and $\dot{\eta} = \psi(\eta, \zeta) = \psi(\eta, E_G^\top h(x))$. Thus we can write the time derivative of $G$ as a continuous function of $x(t), \eta(t)$ by plugging the expressions for $\dot{y}, \dot{\mu}$ into (10). However, we do not know the values of $x(t), \eta(t)$ between measurements. To tackle this problem, notice that we have some information on where $x(t), \eta(t)$ can lie. Namely, equation (6) shows that $S(x(t), \eta(t))$ is a monotone non-increasing function. Thus, we know that $x(t), \eta(t)$ lie in the set $B = \{(x, \eta) : S(x, \eta) \leq S(x(t_k), \eta(t_k))\}$. More precisely, we show the following.

Proposition 4. Assume the functions $h_i, f_i, \phi_e, \psi_e$ are all continuously differentiable. Then for any time $t \in [t_k, t_{k+1}]$, the following inequality holds:
$$\left| \frac{dG}{dt}(t) \right| \leq 2(\rho^\star M_{\Delta y} M_{\dot{y}} + \nu^\star M_{\Delta\mu} M_{\dot{\mu},x}) M_{\dot{x}} + 2\nu^\star M_{\Delta\mu} M_{\dot{\mu},\eta} M_{\dot{\eta}},$$
where
$$M_{\dot{x}} = \max_{(x,\eta) \in B} \| f(x, -E_G \phi(\eta, E_G^\top h(x))) \|, \qquad M_{\dot{\eta}} = \max_{(x,\eta) \in B} \| \psi(\eta, E_G^\top h(x)) \|,$$
$$M_{\dot{y}} = \max_{(x,\eta) \in B} \| \nabla_x h(x) \|, \qquad M_{\dot{\mu},x} = \max_{(x,\eta) \in B} \| \nabla_\zeta \phi(\eta, E_G^\top h(x))\, E_G^\top \nabla_x h(x) \|,$$
$$M_{\dot{\mu},\eta} = \max_{(x,\eta) \in B} \| \nabla_\eta \phi(\eta, E_G^\top h(x)) \|, \qquad M_{\Delta y} = \max_{(x,\eta) \in B} \| h(x) - h(\mathbf{x}) \|,$$
$$M_{\Delta\mu} = \max_{(x,\eta) \in B} \| \phi(\eta, E_G^\top h(x)) - \mu \|,$$
$\rho^\star = \max_i \rho_i$, $\nu^\star = \max_e \nu_e$, and $B = \{(x, \eta) : S(x, \eta) \leq S(x(t_k), \eta(t_k))\}$.

Proof. We fix some $t \in [t_k, t_{k+1}]$, so that $(x(t), \eta(t)) \in B$. We use the expressions for $\dot{x}, \dot{\eta}, \dot{y}, \dot{\mu}$ found in (11). First, the bounds $\|\dot{x}\| \leq M_{\dot{x}}$ and $\|\dot{\eta}\| \leq M_{\dot{\eta}}$ are immediate. Equation (11) shows that $\|\dot{y}\| \leq M_{\dot{y}} M_{\dot{x}}$ and $\|\dot{\mu}\| \leq M_{\dot{\mu},x} M_{\dot{x}} + M_{\dot{\mu},\eta} M_{\dot{\eta}}$. Applying the Cauchy-Schwarz inequality to (10), we obtain
$$\left| \frac{dG}{dt} \right| \leq 2\rho^\star M_{\Delta y} \|\dot{y}\| + 2\nu^\star M_{\Delta\mu} \|\dot{\mu}\|,$$
concluding the proof.

Corollary 1. Fix any two times $t_k < t_{k+1}$, and consider the notation of Proposition 4. Then the following inequality holds:
$$S(x(t_{k+1})) - S(x(t_k)) \leq -\Big( \sum_i \rho_i \Delta y_i(t_k)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t_k)^2 \Big) \Delta t_k + \frac{M}{2} \Delta t_k^2, \tag{12}$$
where $M = 2(\rho^\star M_{\Delta y} M_{\dot{y}} + \nu^\star M_{\Delta\mu} M_{\dot{\mu},x}) M_{\dot{x}} + 2\nu^\star M_{\Delta\mu} M_{\dot{\mu},\eta} M_{\dot{\eta}}$.

Proof. Recall that $G(t) = \sum_{i \in V} \rho_i (y_i(t) - y_i)^2 + \sum_{e \in E} \nu_e (\mu_e(t) - \mu_e)^2$. By Proposition 4, for every $t \in [t_k, t_{k+1}]$ we have $G(t) \geq G(t_k) - M(t - t_k)$. Integrating this bound over $[t_k, t_{k+1}]$ and plugging the result into (7) implies that $S(x(t_{k+1})) - S(x(t_k)) \leq -G(t_k)\Delta t_k + \frac{M}{2}\Delta t_k^2$.

The corollary proposes a mathematically sound method for asserting convergence of the output $y(t)$ to $y$. One samples $y(t)$, $x(t)$, $\eta(t)$, and $\mu(t)$ at times $t_1, t_2, t_3, \ldots$, and at every time instance $t_{k+1}$, one checks that the inequality (12) holds. We show that when $\Delta t_k \to 0$, this method asserts that the output of the system converges to the conjectured value.
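The resulting monitoring scheme — checking (12) at each sample — can be sketched on a deliberately simple scalar example: a single passive agent $\dot{x} = -x$ with output $y = x$, storage $S = x^2/2$, passivity index $\rho = 1$, and no controllers. This single-agent setup is an illustrative assumption, not the general network of the paper:

```python
import math

def check_inequality_12(S, G, M, times, x_traj):
    """Check the sampled-data bound S(x_{k+1}) - S(x_k) <= -G(x_k)*dt + (M/2)*dt^2
    at every sample; return False (declare "no") at the first violation."""
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        lhs = S(x_traj[k + 1]) - S(x_traj[k])
        rhs = -G(x_traj[k]) * dt + 0.5 * M * dt * dt
        if lhs > rhs + 1e-12:
            return False
    return True

# Scalar example: conjectured limit y_bar = 0, so G(x) = rho*(y - y_bar)^2 = x^2,
# and |dG/dt| = 2*x^2 <= M on the sublevel set containing the initial condition.
S = lambda x: 0.5 * x * x
G = lambda x: x * x
x0 = 1.0
M = 2.0 * x0 * x0
times = [0.01 * k for k in range(500)]

x_traj = [x0 * math.exp(-t) for t in times]          # true trajectory, converging to 0
assert check_inequality_12(S, G, M, times, x_traj)   # no fault declared

# A run converging to 0.5 instead of the conjectured 0 violates the bound.
x_fault = [0.5 + (x0 - 0.5) * math.exp(-t) for t in times]
assert not check_inequality_12(S, G, M, times, x_fault)
```

The faulty trajectory fails the check because its storage function decreases much more slowly than the bound $-G(x_k)\Delta t_k$ demands, which is exactly the mechanism the convergence assertion algorithm relies on.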
In other words, assuming we sample the system at a high enough rate, we can assert that it converges very closely to the conjectured steady-state output. Indeed, we prove the following.

Proposition 5. Let $t_1, t_2, \cdots$ be any monotone sequence of times such that $t_k \to \infty$, and suppose that the inequality (12) holds for every $k$. Then for any $\varepsilon > 0$, there are infinitely many $N > 0$ such that $\sum_{i \in V} \rho_i \Delta y_i(t_N)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t_N)^2 < \frac{M}{2}\Delta t_N + \varepsilon$. More precisely, for any two times $t_{N_1} \leq t_{N_2}$, if $t_{N_2} \geq t_{N_1} + \varepsilon^{-1} S(x(t_{N_1}), \eta(t_{N_1}))$, then there exists some $k \in \{N_1, N_1 + 1, \cdots, N_2\}$ such that $\sum_{i \in V} \rho_i \Delta y_i(t_k)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t_k)^2 < \frac{M}{2}\Delta t_k + \varepsilon$.

The proposition can be thought of as a close-convergence estimate. The left-hand side, viewed as a function of $x, \eta$, is a non-negative smooth function which nulls only at the steady-state $(\mathbf{x}, \boldsymbol{\eta})$. Thus it is small only when $x(t), \eta(t)$ are close to $(\mathbf{x}, \boldsymbol{\eta})$, and because we know that $S(x(t), \eta(t))$ is monotone non-increasing, once the trajectory arrives near $(\mathbf{x}, \boldsymbol{\eta})$, it must remain near $(\mathbf{x}, \boldsymbol{\eta})$. One might ask why "infinitely many times" is useful in this case. Indeed, it does not add any more information if the time intervals $\Delta t_k$ are taken as constant (i.e., we sample the system at a constant rate). However, we can measure the system at an ever-increasing rate, at least theoretically. Taking $\Delta t_k \to 0$ (while still having $t_k \to \infty$, e.g., $\Delta t_k = 1/k$), we see that we must have $x(t) \to \mathbf{x}$ and $\eta(t) \to \boldsymbol{\eta}$, meaning we can use the proposition to assert convergence. We now prove the proposition.

Proof. It is enough to show that for each $\varepsilon > 0$ and any $N_1 > 0$, there is some $N > N_1$ such that $\sum_{i \in V} \rho_i \Delta y_i(t_N)^2 + \sum_{e \in E} \nu_e \Delta \mu_e(t_N)^2 < \frac{M}{2}\Delta t_N + \varepsilon$. Indeed, suppose this is not the case. Then for any $k > N_1$, the right-hand side of (12) is upper-bounded by $-\varepsilon \Delta t_k$.
We sum the telescoping series and conclude that for any $k > N_1$,
$$S(x(t_k)) - S(x(t_{N_1})) \le -\sum_{j=N_1+1}^{k} \varepsilon\,\Delta t_j = -\varepsilon(t_k - t_{N_1}), \qquad (13)$$
so $t_{N_2} \ge t_{N_1} + \varepsilon^{-1}S(x(t_{N_1}), \eta(t_{N_1}))$ implies that $S(x(t_{N_2}), \eta(t_{N_2})) < 0$. This is absurd, as $S \ge 0$. Thus there must exist some $N > N_1$ such that $\sum_{i\in\mathcal V}\rho_i\,\Delta y_i(t_N)^2 + \sum_{e\in\mathcal E}\nu_e\,\Delta\mu_e(t_N)^2 < \frac{M}{2}\Delta t_N + \varepsilon$. The second part follows from (13) and the requirement that $S(x(t_k)) \ge 0$.

Proposition 5 can be used for convergence assertion. Consider the following scheme: begin at time $t_0$ and state $x_0, \eta_0$. We want to show that $S(x(t), \eta(t)) \to 0$. We instead show that $G(t)$, defined in (9), gets arbitrarily close to $0$. As noted above, this is enough, as $G(t)$ is a $C^1$ non-negative function of the state $x(t), \eta(t)$ that is small only when $x(t), \eta(t)$ is close to the steady state $(\mathbf{x}, \boldsymbol{\eta})$. We prove:

Theorem 9. Consider the algorithm $\mathcal A$, defined as follows. Sample the system at times $t_1, t_2, \dots$, and check whether the inequality (12) holds. If it does, continue; if it does not, stop and declare "no". Then there exists a sequence $t_1, t_2, \dots$, depending on the system and the initial conditions, such that $\mathcal A$ satisfies Assumption 4.

Proof. We present the following method of choosing $t_1, t_2, \dots$. We first choose $t_0 = 0$ and an arbitrary $\delta_1 > 0$, compute $M$ as in Proposition 4, and choose $\Delta^1 t = \frac{\delta_1}{M}$ and $\varepsilon = \frac{\delta_1}{2}$. Sample the system at rate $\Delta^1 t$ until time $t_{N_1} > t_0 + \varepsilon^{-1}S(x_0, \eta_0)$. The process is then reiterated with $\delta_{k+1} = \delta_1/2^k$ for $k = 1, 2, \dots$, giving rates $\Delta^k t$ and times $t_{N_k}$. We claim that $\mathcal A$, with this choice of sample times, satisfies Assumption 4. If the diffusively-coupled network $(\mathcal G, \Sigma, \Pi)$ converges to $(\mathbf{x}, \boldsymbol{\eta})$, then Corollary 1 implies the algorithm never stops, as required.
It remains to show that if the algorithm never stops, the network converges to the conjectured limit. By the discussion above, and the fact that $S(x(t), \eta(t))$ is a monotone decreasing function, it is enough to show that $\liminf_{k\to\infty} G(t_k) = 0$. We first show that at some point, $G(t) < \delta_1$. By the choice of $\Delta^1 t$, if (12) holds at each time, then by the time we reach $t_{N_1}$, we know that at some point we had $G(t) \le \frac{M}{2}\Delta^1 t + \varepsilon = \delta_1$. Reiterating shows that at some times $t^\star_k$, $G(t^\star_k) \le \delta_k$, where $\delta_{k+1} = \delta_1/2^k$, so $\liminf_{k\to\infty} G(t_k) = 0$.

The term "high-rate sampling" comes from the fact that if $M$ is not updated when we reiterate with smaller $\delta$, then eventually $t_{k+1} - t_k \to 0$, which is impractical in real-world cases. However, we note that the number $M$ decreases as $S(x(t), \eta(t))$ decreases, as shown in Proposition 4. Thus, if $M$ is updated between iterations, we might have $\Delta t \not\to 0$.

Remark 3. There is a trade-off between the time-step $\Delta t$ and the time it takes to find a point at which $G(t) < \frac{M}{2}\Delta t + \varepsilon$, which is $t = \frac{S(x(0), \eta(0))}{\varepsilon}$. On one hand, we want larger time-steps (to avoid high-rate sampling) and shorter overall times; however, increasing both $\Delta t$ and $\varepsilon$ worsens the eventual bound on $G(t)$. We can choose both by maximizing an appropriate cost function $C(\Delta t, \varepsilon)$, monotone in both $\Delta t$ and $\varepsilon$, subject to $\frac{M}{2}\Delta t + \varepsilon = \delta_1$, $\varepsilon \ge 0$, $\Delta t \ge 0$. Choosing $C(\Delta t, \varepsilon)$ as linear is inadvisable, as maximizing a linear function under linear constraints always puts the optimizer on the boundary, which means either $\Delta t = 0$ or $\varepsilon = 0$. The choice $\Delta t = \frac{\delta_1}{M}$ and $\varepsilon = \frac{\delta_1}{2}$ mentioned above corresponds to the geometric-mean cost function $C(\Delta t, \varepsilon) = \sqrt{\Delta t\,\varepsilon}$. Other choices of $C$ can express practical constraints, e.g.,
relative indifference to large convergence times, compared to high-rate sampling, should result in a cost function penalizing small values of $\Delta t$ more harshly than small values of $\varepsilon$.

B. Asserting Convergence Using Convergence Profiles

For this subsection, we assume that Assumption 3 holds and that the agents are output-strictly MEIP, i.e., that $\rho_i > 0$. Consider (6) and suppose there is a non-negative monotone function $\mathcal F$ such that for any $t$, the right-hand side of (6) is bounded from above by $-\mathcal F(S)$. In that case, we get an estimate of the form $\dot S \le -\mathcal F(S)$. This is a weaker estimate than (6), but it has a more appealing discrete-time form,
$$S(x(t_{k+1})) - S(x(t_k)) \le -\int_{t_k}^{t_{k+1}} \mathcal F(S(x(t)))\,dt \le -\mathcal F(S(x(t_{k+1})))\cdot(t_{k+1} - t_k), \qquad (14)$$
where we use the monotonicity of $\mathcal F$ and the fact that $S(x(t))$ is monotone non-increasing. Due to Assumption 3, we focus on the terms on the right-hand side of (6) corresponding to the agents, and neglect those corresponding to the controllers, as $S$ is now the sum of the functions $S_i(x_i)$. Because the controllers are passive, we have $\nu_e \ge 0$, so removing the said terms does not affect the inequality's validity. In order to find $\mathcal F$, it is natural to look for functions $\Omega_i$ satisfying $\Omega_i(S_i) \le (y_i(t) - \mathbf{y}_i)^2$. We formalize the existence of the functions $\Omega_i$ in the following definition.

Definition 5. Let $\Omega : [0,\infty) \to [0,\infty)$ be any function on the non-negative real numbers. We say that an autonomous system has the convergence profile $(\rho, \Omega)$ with respect to the steady state $(\mathbf{u}, \mathbf{y})$ if there exists a $C^1$ storage function $S(x)$ such that the following inequalities hold:
i) $\frac{dS(x(t))}{dt} \le (u(t) - \mathbf{u})(y(t) - \mathbf{y}) - \rho\,(y(t) - \mathbf{y})^2$;
ii) $\Omega(S(x(t))) \le (y(t) - \mathbf{y})^2$.

Example 1. Consider the SISO system $\Sigma$ defined by $\dot x = -x + u$, $y = x$, and consider the steady-state input-output pair $(0, 0)$.
The storage function $S(x(t)) = \frac{1}{2}x(t)^2$ satisfies $\dot S(x(t)) = x(t)\dot x(t) = (u(t) - 0)(y(t) - 0) - (y(t) - 0)^2$. Thus $\Sigma$ has convergence profile $(1, \Omega)$ for $\Omega(\theta) = 2\theta$. More generally, when considering an LTI system with no input feedthrough, both functions $S(x)$ and $(y(t) - \mathbf{y})^2$ are quadratic in $x$. Thus there is a monotone linear function $\Omega$ such that the inequality $\Omega(S(x(t))) \le (y(t) - \mathbf{y})^2$ holds. In particular, a function $\Omega$ exists in this case. We show that functions $\Omega$ exist for rather general systems.

Theorem 10. Let $\Sigma$ be a SISO system of the form $\dot x = -f(x) + q(x)u$, $y = h(x)$. Suppose $q$ is a positive continuous function, that $f/q$ is $C^1$ and monotone ascending, and that $h$ is $C^1$ and strictly monotone ascending. Let $(\mathbf{u} = f(\mathbf{x})/q(\mathbf{x}),\ \mathbf{y} = h(\mathbf{x}))$ be any steady-state input-output pair of the system. Then:
i) using the storage function $S(x) = \int_{\mathbf{x}}^{x} \frac{h(\sigma) - h(\mathbf{x})}{q(\sigma)}\,d\sigma$, the system $\Sigma$ has the convergence profile $(\rho, \Omega)$ for a strictly ascending function $\Omega$ and $\rho = \inf_x \frac{f(x) - f(\mathbf{x})}{h(x) - h(\mathbf{x})} \ge 0$;
ii) suppose there exists some $\alpha > 0$ such that the limit $\lim_{x\to\mathbf{x}} \frac{|h(x) - h(\mathbf{x})|}{|x - \mathbf{x}|^\alpha}$ exists and is finite. Then the limit $\lim_{\theta\to 0} \frac{\Omega(\theta)}{\theta^\beta}$ exists and is finite, where $\beta = \frac{2\alpha}{\alpha+1}$. In other words, if $h$ behaves like a power law near $\mathbf{x}$, then $\Omega$ behaves like a power law near $0$.

Proof. The passivation inequality follows from Theorem 1, so we focus on constructing the function $\Omega$. For every $\theta \ge 0$, define the set $A_\theta = \{x \in \mathbb R : (h(x) - h(\mathbf{x}))^2 \le \theta\}$. We want $x \in A_\theta$ to imply that $\Omega(S(x)) \le \theta$. Because $h$ is continuous and monotone, it is clear that $A_\theta$ is an interval containing $\mathbf{x}$. Now, let $\omega$ be the function on $[0,\infty)$ defined as $\omega(\theta) = \sup_{x\in A_\theta} S(x)$. We note that $\omega$ can take infinite values (e.g., when $h$ is bounded but $S$ is not). However, we show that the restriction of $\omega$ to $\{\theta : \omega(\theta) < \infty\}$ is strictly monotone.
If this claim holds, then $\omega$ has an inverse function which is also strictly monotone. Define $\Omega = \omega^{-1}$ as this strictly monotone inverse. By definition, for any $x \in \mathbb R$ we have $x \in A_\theta$ for $\theta = (h(x) - h(\mathbf{x}))^2$, so $S(x) \le \omega(\theta)$. Thus $\Omega(S(x)) \le \Omega(\omega(\theta)) = \theta = (h(x) - h(\mathbf{x}))^2$, concluding the first part of the proof.

We now prove that the restriction of $\omega$ to $\{\theta : \omega(\theta) < \infty\}$ is strictly monotone. It is clear that if $0 \le \theta_1 < \theta_2$, then the interval $\{x : (h(x) - h(\mathbf{x}))^2 \le \theta_1\} = A_{\theta_1}$ is strictly contained in the interval $\{x : (h(x) - h(\mathbf{x}))^2 \le \theta_2\} = A_{\theta_2}$, as $h$ is strictly monotone. Moreover, it is clear that $S$ is strictly ascending on $[\mathbf{x}, \infty)$ and strictly descending on $(-\infty, \mathbf{x}]$, as the function $\frac{h(x) - h(\mathbf{x})}{q(x)}$ is positive on $(\mathbf{x}, \infty)$ and negative on $(-\infty, \mathbf{x})$. Thus we have $\omega(\theta_1) < \omega(\theta_2)$ unless $\omega(\theta_1) = \infty$, which is what we wanted to prove.

We now move to the second part of the theorem, in which we show that if $h$ behaves like a power law near $\mathbf{x}$, then $\Omega$ behaves like a power law near zero. We use big-$O$ notation (in the limit $x \to \mathbf{x}$). By assumption and strict monotonicity of $h$, we have
$$h(x) - h(\mathbf{x}) = C\,\mathrm{sgn}(x - \mathbf{x})\,|x - \mathbf{x}|^\alpha + o(|x - \mathbf{x}|^\alpha) \qquad (15)$$
for some constant $C > 0$, implying $(h(x) - h(\mathbf{x}))^2 = C^2|x - \mathbf{x}|^{2\alpha} + o(|x - \mathbf{x}|^{2\alpha})$. By definition, we conclude that for $\theta > 0$ small enough, $A_\theta$ is an interval centered at $\mathbf{x}$ with radius $\theta^{1/2\alpha}/C^{1/\alpha} + o(\theta^{1/2\alpha})$. We recall that $S(x) = \int_{\mathbf{x}}^{x}\frac{h(\sigma) - h(\mathbf{x})}{q(\sigma)}\,d\sigma$. We write $q(x) = q(\mathbf{x}) + o(1)$ as $q$ is continuous, so (15) implies that $\frac{h(\sigma) - h(\mathbf{x})}{q(\sigma)} = \frac{1}{q(\mathbf{x})}\big(C\,\mathrm{sgn}(\sigma - \mathbf{x})|\sigma - \mathbf{x}|^\alpha + o(|\sigma - \mathbf{x}|^\alpha)\big)$. Thus, $S(x) = \frac{C}{q(\mathbf{x})(\alpha+1)}|x - \mathbf{x}|^{\alpha+1} + o(|x - \mathbf{x}|^{\alpha+1})$.
We now compute $\omega(\theta)$ by definition, using our characterization of $A_\theta$:
$$\omega(\theta) = \max_{x\in A_\theta} S(x) = \max_{x\in A_\theta}\Big(\frac{C|x - \mathbf{x}|^{\alpha+1}}{q(\mathbf{x})(\alpha+1)} + o(|x - \mathbf{x}|^{\alpha+1})\Big) = \frac{C}{q(\mathbf{x})(\alpha+1)}\Big(\frac{\theta^{\frac{1}{2\alpha}}}{C^{1/\alpha}}\Big)^{\alpha+1} + o\big(\theta^{\frac{\alpha+1}{2\alpha}}\big) = (D + o(1))\,\theta^{\frac{\alpha+1}{2\alpha}}$$
for $D = \frac{1}{q(\mathbf{x})(\alpha+1)C^{1/\alpha}} > 0$. Thus, the inverse function $\Omega(\theta)$ is given by $\Omega(\theta) = \big(D^{-\frac{2\alpha}{1+\alpha}} - o(1)\big)\,\theta^{\frac{2\alpha}{1+\alpha}}$, as desired.

Example 2. Consider a system with $q(x) = 1$, $h(x) = \sqrt[3]{x}$, and a steady state $\mathbf{u} = \mathbf{x} = \mathbf{y} = 0$. Here $h(x)$ behaves like a power law with power $\alpha = \frac{1}{3}$. Part ii) of Theorem 10 implies that $\Omega$ also behaves like a power law, with power $\beta = \frac{2\alpha}{\alpha+1} = \frac{1}{2}$. We exemplify the computation of $\Omega$ as done in the proof, and show that it behaves like a power law with $\beta = \frac{1}{2}$, as forecast by the theorem. Indeed, $S(x) = \int_0^x \sqrt[3]{\sigma}\,d\sigma = \frac{3}{4}x^{4/3}$, and $(h(x) - h(\mathbf{x}))^2 = x^{2/3}$. For every $\theta \ge 0$, $A_\theta = \{x : x^{2/3} \le \theta\} = [-\theta^{3/2}, \theta^{3/2}]$. Thus $\omega(\theta) = \sup_{x\in A_\theta} S(x) = \sup_{|x|\le\theta^{3/2}} \frac{3}{4}x^{4/3} = \frac{3}{4}\theta^2$, implying that $\Omega$, the inverse function of $\omega$, is given by $\Omega(\theta) = \sqrt{\frac{4}{3}\theta}$; one observes that in fact $(h(x) - h(\mathbf{x}))^2 = \Omega(S(x))$.

Remark 4. Theorem 10 gives a prescription for designing the function $\Omega$. However, some steps, namely the inversion of $\omega$, are computationally hard. For example, if $h(x) = 1 - e^{-x}$ and $q(x) = 1$, then $\omega(\theta) = \log\frac{1}{1-\sqrt\theta} - \sqrt\theta$ for $\theta < 1$ and $\omega(\theta) = \infty$ for $\theta \ge 1$, which is almost impossible to invert analytically. To solve this problem, we can either precompute the different values of $\Omega$ numerically and store them in a table, or approximate them on-line using the bisection method. The strength of Theorem 10 is that it shows a function $\Omega$ can always be found, implying this method is always applicable.

Up until now, we have managed to transform equation (7) into $\frac{dS}{dt} \le -\sum_i \rho_i\,\Omega_i(S_i)$, for some non-negative monotone functions $\Omega_i$.
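The table-plus-bisection inversion suggested in Remark 4 can be sketched numerically. The construction below is ours (the function names, grid, and tolerances are illustrative, not the paper's): it tabulates $\omega(\theta)=\sup_{x\in A_\theta}S(x)$ on a grid and inverts the monotone $\omega$ by bisection. On Example 2's system, where $\Omega(s)=\sqrt{4s/3}$ is known analytically, the numerical inverse can be checked directly.

```python
import numpy as np

def make_Omega(h, S, hbar, x_grid):
    """Build Omega = omega^{-1} numerically, where
    omega(t) = sup over A_t = {x : (h(x) - hbar)^2 <= t} of S(x),
    evaluated on a fixed grid; inversion is by bisection on the
    monotone omega (Remark 4's table-plus-bisection idea)."""
    theta = (h(x_grid) - hbar) ** 2

    def omega(t):
        mask = theta <= t            # grid approximation of the sublevel set A_t
        return S(x_grid[mask]).max() if mask.any() else 0.0

    def Omega(s, iters=60):
        lo, hi = 0.0, float(theta.max())
        for _ in range(iters):       # bisection: solve omega(t) = s for t
            mid = 0.5 * (lo + hi)
            if omega(mid) < s:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    return Omega

# Example 2: h(x) = x^(1/3), q = 1, S(x) = (3/4)|x|^(4/3)
Omega = make_Omega(np.cbrt, lambda x: 0.75 * np.abs(x) ** (4.0 / 3.0),
                   hbar=0.0, x_grid=np.linspace(-2.0, 2.0, 20001))
```

For instance, `Omega(0.75)` comes out close to the analytic value $\sqrt{4\cdot 0.75/3} = 1$.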
This is closer to an inequality of the form $\dot S \le -\mathcal F(S)$, but we still cannot use it without high-rate sampling, as we cannot assume that the $S_i(x_i(t))$ are monotone decreasing. We want to transform the right-hand side into a function of $S$. We note that $\Omega_i(\theta_i) = 0$ only at $\theta_i = 0$, as $S_i = 0$ happens only at $\mathbf{x}_i$. We claim the following:

Proposition 6. Let $\rho_1, \dots, \rho_n$ be any positive numbers, and let $\Omega_1, \dots, \Omega_n : [0,\infty)\to[0,\infty)$ be any $C^1$ strictly monotone functions such that $\Omega_i(\theta_i) = 0$ only at $\theta_i = 0$. Suppose further that for any $i$ there exists some $\beta_i > 0$ such that the limit $\lim_{\theta_i\to 0} \frac{\Omega_i(\theta_i)}{\theta_i^{\beta_i}}$ exists and is positive. Define $\Omega_\star : [0,\infty)\to[0,\infty)$ as $\Omega_\star(\theta) = \min_i \Omega_i(\theta)$. Then for every $D > 0$, there exists some constant $C > 0$ such that for all $D \ge \theta_1, \dots, \theta_n \ge 0$, we have $\sum_{i=1}^n \rho_i\,\Omega_i(\theta_i) \ge C\cdot\Omega_\star\big(\sum_{i=1}^n \theta_i\big)$.

The proof of the proposition can be found in the appendix.

Corollary 2. Let $S_1, \dots, S_n$ be the storage functions of the agents, let $S = \sum_i S_i$, and let $\Omega_1, \dots, \Omega_n$ be $C^1$ strictly monotone functions such that $\Omega_i(\theta_i) = 0$ only at $\theta_i = 0$. Suppose that for any $i$ there exists some $\beta_i > 0$ such that the limit $\lim_{\theta_i\to 0}\frac{\Omega_i(\theta_i)}{\theta_i^{\beta_i}}$ exists and is positive. Moreover, suppose that $\dot S \le -\sum_i \rho_i\,\Omega_i(S_i)$. Then for every bounded set $B \subset \mathbb R^n$ there exists a constant $C > 0$ such that for any trajectory of the closed-loop system with initial condition in $B$, the inequality $\dot S \le -C\cdot\Omega_\star(S)$ holds, where $\Omega_\star(\theta) = \min_i\Omega_i(\theta)$.

Proof. Take $\theta_i = S_i$ and $D = S(x(0))$ in Proposition 6.

Proposition 6 and Corollary 2 show that an inequality of the form (14) can be achieved, so long as the functions $\Omega_i$ from Theorem 10 "behave nicely" around $0$, namely grow neither faster nor slower than a power law.
This condition is very general, and only excludes pathologies such as $\Omega(\theta) = \frac{1}{\log(1/\theta)}$, which grows faster than any power law near $0$, and $\Omega(\theta) = \exp(-1/\theta^2)$, which grows slower than any power law near $0$. Theorem 10 shows that if $h$ behaves like a power law near $\mathbf{x}$, then so does $\Omega$, so pathological functions $\Omega_\star$ can only come from pathological measurement functions $h_i$. We show it is enough to check the discretized equation (14) to assert convergence.

Proposition 7. Let $\Omega_\star : [0,\infty)\to[0,\infty)$ be any continuous function such that $\Omega_\star(\theta) = 0$ only at $\theta = 0$. Let $\tilde S : [0,\infty)\to[0,\infty)$ be any time-dependent monotone decreasing function. Let $t_1, t_2, t_3, \dots$ be any unbounded sequence of times such that $\liminf_{k\to\infty}(t_{k+1} - t_k) > 0$, and suppose that for every $k$, the inequality $\tilde S(t_{k+1}) - \tilde S(t_k) \le -\Omega_\star(\tilde S(t_{k+1}))(t_{k+1} - t_k)$ holds. Then $\tilde S(t) \to 0$ as $t \to \infty$.

The proof of the proposition can be found in the appendix. We want to use $\tilde S(t) = S(x(t))$. The results above suggest an algorithm for convergence assertion.

Algorithm 7 Convergence Assertion using Convergence Profile
Input: A diffusively-coupled network $(\mathcal G, \Sigma, \Pi)$, an initial condition $x(0)$, and a conjectured steady state $\hat x$.
1: Let $S_i(x_i) = \int_{\hat x_i}^{x_i}\frac{h_i(\sigma_i) - h_i(\hat x_i)}{q_i(\sigma_i)}\,d\sigma_i$ and $S(x) = \sum_i S_i(x_i)$.
2: Use Theorem 10, Proposition 6 and Corollary 2 to find a function $\Omega$ such that $\dot S \le -\Omega(S)$ for all times $t$, with initial condition $x(0)$.
3: Choose $\delta_0 = S(x(0))$ and $t_0 = 0$.
4: for $k = 0, 1, 2, 3, \dots$ do
5: Define $\delta_{k+1} = \delta_k/2$;
6: Define $M = \min_{x : S(x) \ge \delta_{k+1}} \Omega(S(x))$;
7: Sample the system at some time $t_{k+1} > t_k + \frac{S(x_0)}{M}$;
8: if $S(x(t_{k+1})) - S(x(t_k)) \not\le -\Omega(S(x(t_{k+1})))(t_{k+1} - t_k)$ then
9: Stop and return "no";
10: end if
11: end for

Theorem 11.
Algorithm 7, taking the system $(\mathcal G, \Sigma, \Pi)$, the initial state $x(0)$, and the conjectured steady state $\hat x = h^{-1}(\mathbf{y})$ as input, satisfies Assumption 4.

Proof. We denote the true limit of the system $(\mathcal G, \Sigma, \Pi)$ by $\mathbf{x}$. We first assume the algorithm never stops, and show that $\hat x = \mathbf{x}$. We show that $S(x(t_k)) \le \delta_k$, which suffices, as $\delta_k \to 0$ and $S(t) \to 0$ imply that $x(t) \to \hat x$, and thus $\mathbf{x} = \hat x$. Suppose, heading toward contradiction, that $S(x(t_k)) > \delta_k$. Then $\Omega(S(x(t_k))) \ge M$, meaning that the right-hand side of the checked inequality is at most $-S(x(t_k))$. Thus, if the inequality holds, then $S(x(t_{k+1})) < 0$, which is absurd. Thus $S(x(t_k)) \le \delta_k$, and $\hat x = \mathbf{x}$. Conversely, if the conjectured limit $\hat x$ is the true limit of the network, then Theorem 10, Proposition 6 and Corollary 2 show that $S(x(t_{k+1})) - S(x(t_k)) \le -\Omega(S(x(t_{k+1})))(t_{k+1} - t_k)$ always holds, so the algorithm never stops, as expected.

Remark 5. Proposition 7 shows we can take any sample times $t_k$ such that $\liminf_{k\to\infty}(t_k - t_{k-1}) > 0$ and still get a valid convergence assertion algorithm. The suggested algorithm gives extra information, as it also bounds the distance of $x(t_k)$ from $\mathbf{x}$. Another way to choose $t_k$ is to use the solution of the ODE $\dot{\tilde S} = -\Omega(\tilde S)$ with $\tilde S(t_0) = S(x_0)$: let $t_k$ be the earliest time at which $\tilde S(t_k) \le \delta_k$. The inequality $\dot S \le -\Omega(S)$ assures that $S(x(t_k)) \le \delta_k$. This method is more demanding: the minima $M$ computed before can be stored in a table, while the solution to the ODE must be computed on-line.

Remark 6. Although we can prove convergence with this method using very infrequent measurements, we should still sample the system at a reasonable rate, because we want to detect faults as soon as possible.
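The main loop of Algorithm 7 can be sketched in code. The names here are ours: `sample_S(t)` stands in for measuring $S(x(t))$ on the running network, and `Omega` is a convergence profile with $\dot S \le -\Omega(S)$; for a monotone profile, the minimum in line 6 of the algorithm reduces to $\Omega(\delta_{k+1})$.

```python
def assert_convergence(sample_S, Omega, S0, n_rounds=6):
    """Sketch of Algorithm 7's main loop. Returns False ("no") as soon
    as the discretized profile inequality (14) is violated, and True if
    all checks pass for n_rounds halvings of delta."""
    t, delta = 0.0, S0
    S_prev = sample_S(t)
    for _ in range(n_rounds):
        delta /= 2.0                   # delta_{k+1} = delta_k / 2
        M = Omega(delta)               # min of monotone Omega over {S >= delta_{k+1}}
        t_next = t + S0 / M            # next sample time t_{k+1} > t_k + S(x_0)/M
        S_next = sample_S(t_next)
        # checked inequality: S(t_{k+1}) - S(t_k) <= -Omega(S(t_{k+1})) (t_{k+1} - t_k)
        if S_next - S_prev > -Omega(S_next) * (t_next - t):
            return False
        t, S_prev = t_next, S_next
    return True
```

For instance, $\tilde S(t) = (1+t)^{-2}$ satisfies $\dot{\tilde S} \le -\tilde S^{3/2}$, so `assert_convergence(lambda t: (1.0 + t) ** -2, lambda s: s ** 1.5, 1.0)` passes every check, while a constant $\tilde S$ fails the first one.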
If we sample the system at too large intervals, we will not be able to sense a fault until a large amount of time has passed.

We conclude this section with a short discussion of the merits and drawbacks of the two presented convergence assertion methods. The convergence profile method allows the designer to sample the system at any desired rate, allowing one to prove convergence using very infrequent measurements. Moreover, it gives certain rate-of-convergence guarantees before running the system. On the other hand, the high-rate sampling method can require a long time to assert convergence to a $\delta$-ball around the desired steady state, unless one is willing to increase the sampling rate, perhaps arbitrarily. However, its main advantage over the convergence profile method is that we need not assume that Assumption 3 holds, and that the method is computationally easier, as one can avoid the function inversion needed to compute $\Omega$.

VII. CASE STUDY: VELOCITY COORDINATION IN VEHICLES WITH DRAG

Consider a collection of $n = 20$ vehicles trying to coordinate their velocity. Each vehicle is modeled as a double integrator $G(s) = 1/s^2$ in vacuum, but is subject to drag in the real world. The drag on a vehicle is usually modeled as a force opposing the direction of the vehicle's motion, quadratic in the magnitude of the velocity [37, p. 164]. Thus, each vehicle has the following model:
$$\Sigma_i : \dot x_i = -C_i x_i|x_i| + u_i, \quad y_i = x_i, \qquad (16)$$
where $x_i$ is the velocity of the $i$-th vehicle, and $C_i$ is a constant aggregating the different parameters affecting the drag, e.g., the density of the air, the viscosity of the air, the geometry of the vehicle, and the mass of the vehicle. The vehicles are trying to coordinate their velocities: agents #1-#7 want to travel at 60 km/h, agents #8-#13 want to travel at 70 km/h, and agents #14-#20 want to travel at 50 km/h. The edge controllers are
static nonlinearities given by sigmoid functions of the form $\mu_e = \tanh(\zeta_e)$. This diffusively-coupled network satisfies both Assumptions 1 and 3, and we note that both the agents and the controllers are nonlinear. We choose the interaction graph $\mathcal G$ shown in Fig. 2 (the faultless underlying graph of the case study). One can check that $\mathcal G$ is $4$-connected, either using Menger's theorem or other known algorithms [38]. The parameters $C_i$ were chosen log-uniformly between $0.01$ and $0.1$. The initial velocities of the agents were chosen as Gaussian with mean $\mu = 70$ km/h and standard deviation $\sigma = 20$ km/h. We solve the synthesis problem, forcing the network to converge to $y^\star = [60\cdot\mathbb 1_7^\top,\ 70\cdot\mathbb 1_6^\top,\ 50\cdot\mathbb 1_7^\top]^\top$ km/h, where $\mathbb 1_m$ is the all-ones vector of size $m$, allowing up to $2$ edges to fault. We run our FDI protocol, implementing the profile-based convergence assertion scheme, sampling the system at 10 Hz (i.e., a modified version of Algorithm 7). We consider four different scenarios, each lasting 100 seconds.
1) A faultless scenario.
2) At time $t = 20$ sec, the edge $\{2, 3\}$ faults, and at time $t = 50$ sec, the edge $\{13, 14\}$ faults.
3) At time $t = 20$ sec, the edge $\{2, 3\}$ faults, and at time $t = 21$ sec, the edge $\{13, 14\}$ faults.
4) At time $t = 0.5$ sec, the edge $\{2, 3\}$ faults, and at time $t = 4$ sec, the edge $\{13, 14\}$ faults.
The first scenario tests the nominal behavior of the protocol. The second tests its ability to handle a single fault at a time. The third tests its ability to handle more than one fault at a time. The last tests its ability to deal with faults occurring before the network has converged. The results are available in Figures 3, 4, 5, and 6. It can be seen that we achieve the control goal in all four scenarios. Moreover, in all scenarios and at all times, the velocities of the agents are not too far from the values found in $y^\star$
, meaning that this protocol cannot harm the agents by demanding very wild states of them. In the second and third scenarios, the exploratory phase begins at the first measurement after the fault occurred. By contrast, in the fourth scenario, the exploratory phase begins only at $t = 1.3$ sec, a little under a second after the first fault. This is because the steady states of the faulty and nominal closed-loop systems are relatively close, meaning it takes a little extra time to detect that a fault exists.

Fig. 3. Results of first scenario (velocity [km/h] vs. time [sec]; stable and exploratory phases marked).
Fig. 4. Results of second scenario (velocity [km/h] vs. time [sec]; stable and exploratory phases marked).
Fig. 5. Results of third scenario (velocity [km/h] vs. time [sec]; stable and exploratory phases marked).
Fig. 6. Results of fourth scenario (velocity [km/h] vs. time [sec]; stable and exploratory phases marked).

VIII. CONCLUSION

We considered multi-agent networks prone to communication faults in which the agents are output-strictly MEIP and the controllers are MEIP. We exhibited a protocol in which a constant bias $w$ is added to the controller output, and showed that if $w$ is chosen randomly, then no matter what the underlying graph $\mathcal G$ is, we can asymptotically differentiate between any two versions (faulty or faultless) of the system. We also showed that if $w$ is chosen randomly within a certain set, we can asymptotically differentiate the faultless version of the system from its faulty versions while also solving the synthesis problem for the faultless version, assuming $\mathcal G$ is connected enough, i.e., $2$-connected. These results were used to describe algorithms for network fault detection and isolation for general MEIP multi-agent systems, where the number of isolable faults is given by a graph-theoretic characteristic of $\mathcal G$, while no information on the agents and controllers beyond MEIP is used. These were achieved by assuming the existence of an on-line algorithm asserting that a given network converges to a conjectured steady state, allowing us to move from asymptotic differentiation to on-line differentiation. Later, two such algorithms were built using the passivity of the agents and controllers, and their correctness was proved. We demonstrated our protocols by a case study, in which we successfully detected communication faults in a nonlinear network. We emphasize that the proposed method is proved to work as long as the agents and controllers are MEIP and the graph $\mathcal G$ is connected enough. In particular, there is no assumption on the scale of the network. Future directions can include more robust network fault detection and isolation techniques, in which a larger number of faults can be isolated by studying more delicate graph-theoretic properties of the underlying graph.

REFERENCES

[1] K.-K. Oh, M.-C. Park, and H.-S. Ahn, "A survey of multi-agent formation control," Automatica, vol. 53, pp. 424–440, 2015.
[2] L. Scardovi, M. Arcak, and E. D. Sontag, "Synchronization of interconnected systems with applications to biochemical networks: An input-output approach," IEEE Transactions on Automatic Control, vol. 55, no. 6, pp. 1367–1379, 2010.
[3] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, pp. 215–233, Jan 2007.
[4] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson, "Distributed fault detection and isolation resilient to network model uncertainties," IEEE Transactions on Cybernetics, vol. 44, pp. 2024–2037, Nov 2014.
[5] S. Jafari, A. Ajorlou, A. G. Aghdam, and S. Tafazoli, "On the structural controllability of multi-agent systems subject to failure: A graph-theoretic approach," in 49th IEEE Conference on Decision and Control (CDC), pp. 4565–4570, 2010.
[6] S. Jafari, A. Ajorlou, and A. G. Aghdam, "Leader localization in multi-agent systems subject to failure: A graph-theoretic approach," Automatica, vol. 47, no. 8, pp. 1744–1750, 2011.
[7] M. A. Rahimian and V. M. Preciado, "Detection and isolation of failures in directed networks of LTI systems," IEEE Transactions on Control of Network Systems, vol. 2, no. 2, pp. 183–192, 2014.
[8] M. A. Rahimian and V. M. Preciado, "Failure detection and isolation in integrator networks," in 2015 American Control Conference (ACC), pp. 677–682, 2015.
[9] G. Battistelli and P. Tesi, "Detecting topology variations in dynamical networks," in 2015 54th IEEE Conference on Decision and Control (CDC), pp. 3349–3354, IEEE, 2015.
[10] M. A. Rahimian, A. Ajorlou, and A. G. Aghdam, "Detectability of multiple link failures in multi-agent systems under the agreement protocol," in 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pp. 118–123, IEEE, 2012.
[11] M. E. Valcher and G. Parlangeli, "On the effects of communication failures in a multi-agent consensus network," in 2019 23rd International Conference on System Theory, Control and Computing (ICSTCC), pp. 709–720, IEEE, 2019.
[12] Y. Zhang, Y. Xia, and J. Zhang, "Generic detectability and isolability of topology failures in networked linear systems," arXiv preprint arXiv:2005.04687, 2020.
[13] M. A. Rahimian, A. Ajorlou, and A. G. Aghdam, "Digraphs with distinguishable dynamics under the multi-agent agreement protocol," Asian Journal of Control, vol. 16, no. 5, pp. 1300–1311, 2014.
[14] M. Sharf and D. Zelazo, "Network identification: A passivity and network optimization approach," in 2018 IEEE 57th Annual Conference on Decision and Control (CDC), 2018.
[15] D. Patil, P. Tesi, and S. Trenn, "Indiscernible topological variations in DAE networks," Automatica, vol. 101, pp. 280–289, 2019.
[16] H. Yang, V. Cocquempot, and B. Jiang, "Fault tolerance analysis for switched systems via global passivity," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 55, pp. 1279–1283, Dec 2008.
[17] W. Chen, S. X. Ding, A. Q. Khan, and M. Abid, "Energy based fault detection for dissipative systems," in 2010 Conference on Control and Fault-Tolerant Systems (SysTol), pp. 517–521, Oct 2010.
[18] L. Márton, "Energetic approach to deal with faults in robot actuators," in 2012 20th Mediterranean Conference on Control Automation (MED), pp. 85–90, July 2012.
[19] Q. Lei, R. Wang, and J. Bao, "Fault diagnosis based on dissipativity property," Computers and Chemical Engineering, vol. 108, pp. 360–371, 2018.
[20] H. Bai, M. Arcak, and J. Wen, Cooperative Control Design: A Systematic, Passivity-Based Approach. Communications and Control Engineering, Springer, 2011.
[21] M. Arcak, "Passivity as a design tool for group coordination," IEEE Transactions on Automatic Control, vol. 52, pp. 1380–1390, Aug. 2007.
[22] G.-B. Stan and R. Sepulchre, "Analysis of interconnected oscillators by dissipativity theory," IEEE Transactions on Automatic Control, vol. 52, no. 2, pp. 256–270, 2007.
[23] A. van der Schaft and B. Maschke, "Port-Hamiltonian systems on graphs," SIAM Journal on Control and Optimization, vol. 51, no. 2, pp. 906–937, 2013.
[24] N. Chopra and M. Spong, Advances in Robot Control: From Everyday Physics to Human-Like Movements, ch. Passivity-based Control of Multi-Agent Systems, pp. 107–134. Springer, 2006.
[25] M. Bürger, D. Zelazo, and F. Allgöwer, "Duality and network theory in passivity-based cooperative control," Automatica, vol. 50, no. 8, pp. 2051–2061, 2014.
[26] R. T. Rockafellar, Network Flows and Monotropic Optimization. Belmont, Massachusetts: Athena Scientific, 1998.
[27] M. Sharf and D. Zelazo, "A network optimization approach to cooperative control synthesis," IEEE Control Systems Letters, vol. 1, pp. 86–91, July 2017.
[28] M. Sharf and D. Zelazo, "Analysis and synthesis of MIMO multi-agent systems using network optimization," IEEE Transactions on Automatic Control, vol. 64, no. 11, pp. 4512–4524, 2019.
[29] M. Sharf and D. Zelazo, "A passivity-based network identification algorithm with minimal time complexity," arXiv e-prints, arXiv:1903.04923, Mar 2019.
[30] C. Godsil and G. Royle, Algebraic Graph Theory. Graduate Texts in Mathematics, Springer, 1st ed., 2001.
[31] J. Bondy and U. Murty, Graph Theory with Applications. Macmillan, 1977.
[32] H. K. Khalil, Nonlinear Systems. Pearson, 3rd ed., 2001.
[33] M. Sharf, A. Romer, D. Zelazo, and F. Allgöwer, "Model-free practical cooperative control for diffusively coupled systems," arXiv preprint arXiv:1906.05204, 2019.
[34] A. S. Willsky, "A survey of design methods for failure detection in dynamic systems," Automatica, vol. 12, no. 6, pp. 601–611, 1976.
[35] M. Sharf, A. Jain, and D. Zelazo, "A geometric method for passivation and cooperative control of equilibrium-independent passivity-short systems," arXiv e-prints, arXiv:1901.06512, Jan. 2019.
[36] M. Bando, K. Hasebe, A. Nakayama, A. Shibata, and Y. Sugiyama, "Dynamical model of traffic congestion and numerical simulation," Phys. Rev. E, vol. 51, pp. 1035–1042, Feb 1995.
[37] R. A. Serway and J. W. Jewett, Physics for Scientists and Engineers with Modern Physics. Cengage Learning, 2018.
[38] Z. Galil, "Finding the vertex connectivity of graphs," SIAM Journal on Computing, vol. 9, no. 1, pp. 197–199, 1980.
APPENDIX

This appendix includes the proofs of various technical propositions from Subsection VI-B. We start with the proof of Proposition 6:

Proof. Without loss of generality, we assume that $\Omega_i = \Omega_\star$ for all $i$, as proving that $\sum_{i=1}^n \rho_i\,\Omega_\star(\theta_i) \ge C\,\Omega_\star(\sum_{i=1}^n\theta_i)$ would imply the desired inequality. We also assume that $\rho_i = 1$ for all $i$, as proving that $\sum_i \Omega_i(\theta_i) \ge C\cdot\Omega_\star(\sum_i\theta_i)$ would give $\sum_i \rho_i\,\Omega_i(\theta_i) \ge C\min_i\rho_i\cdot\Omega_\star(\sum_i\theta_i)$. Define $F : [0, D]^n\setminus\{0\}\to\mathbb R$ as $F(\theta_1,\dots,\theta_n) = \frac{\sum_{i=1}^n\Omega_\star(\theta_i)}{\Omega_\star(\sum_{i=1}^n\theta_i)}$; the claim is equivalent to $F$ being bounded from below. For any $r > 0$, $F$ is continuous on the compact set $[0,D]^n\setminus\{x : \|x\| < r\}$, so its minimum is attained at some point. As $F$ does not vanish on this set, the minimum is positive, so $F$ is bounded from below on that set by a positive constant. It remains to show that $\liminf_{\theta_1,\dots,\theta_n\to 0} F(\theta_1,\dots,\theta_n) > 0$. Let $\beta = \max_i\beta_i$, so that $\lim_{\theta\to 0}\frac{\Omega_\star(\theta)}{\theta^\beta} > 0$. Then
$$F(\theta_1,\dots,\theta_n) = \frac{\sum_{i=1}^n\Omega_\star(\theta_i)}{\Omega_\star(\sum_{i=1}^n\theta_i)} = \frac{\sum_{i=1}^n\Omega_\star(\theta_i)}{(\sum_{i=1}^n\theta_i)^\beta}\cdot\frac{(\sum_{i=1}^n\theta_i)^\beta}{\Omega_\star(\sum_{i=1}^n\theta_i)}.$$
We want to bound both factors from below as $\theta_1,\dots,\theta_n\to 0$. It is clear that the second factor tends to $\big(\lim_{\theta\to 0}\frac{\Omega_\star(\theta)}{\theta^\beta}\big)^{-1}$, which is a positive real number by assumption. As for the first factor, we can bound it as
$$\liminf_{\theta_1,\dots,\theta_n\to 0}\frac{\sum_{i=1}^n\Omega_\star(\theta_i)}{(\sum_{i=1}^n\theta_i)^\beta} \ge \liminf_{\theta_1,\dots,\theta_n\to 0}\frac{\Omega_\star(\max_i\theta_i)}{(n\max_i\theta_i)^\beta} > 0,$$
as $\sum_{i=1}^n\theta_i \le n\max_i\theta_i$ and $\sum_i\Omega_\star(\theta_i) \ge \Omega_\star(\max_i\theta_i)$. This completes the proof.

We now prove Proposition 7.

Proof. By assumption, $\tilde S(t_k)$ is monotone decreasing and bounded from below, as $\tilde S(t_k) \ge 0$. Thus it converges to some value, denoted $\tilde S_\infty$. Using $\tilde S(t_{k+1}) - \tilde S(t_k) \le -\Omega_\star(\tilde S(t_{k+1}))(t_{k+1} - t_k)$ and taking $k\to\infty$ gives $0 \le -\Omega_\star(\tilde S_\infty)$.
However, $\Omega_\star$ is non-negative, so we must have $\Omega_\star(\tilde S_\infty) = 0$, and thus $\tilde S_\infty = 0$, meaning that $\tilde S(t_k)\to 0$. By monotonicity of $\tilde S$, we conclude that $\tilde S(t)\to 0$ as $t\to\infty$.