The Nash Equilibrium with Inertia in Population Games


Authors: Basilio Gentile, Dario Paccagnan, Bolutife Ogunsula, and John Lygeros

Abstract—In the traditional game-theoretic setup, where agents select actions and experience corresponding utilities, an equilibrium is a configuration where no agent can improve their utility by unilaterally switching to a different action. In this work, we introduce the novel notion of inertial Nash equilibrium to account for the fact that, in many practical situations, action changes do not come for free. Specifically, we consider a population game and introduce the coefficients $c_{ij}$ describing the cost an agent incurs by switching from action $i$ to action $j$. We define an inertial Nash equilibrium as a distribution over the action space where no agent benefits from moving to a different action, while taking into account the cost of this change. First, we show that the set of inertial Nash equilibria contains all the Nash equilibria, but is in general not convex. Second, we argue that classical algorithms for computing Nash equilibria cannot be used in the presence of switching costs. We then propose a natural better-response dynamics and prove its convergence to an inertial Nash equilibrium. We apply our results to predict the drivers' distribution of an on-demand ride-hailing platform.

I. INTRODUCTION

Game theory originated as a set of tools to model and describe the interaction of multiple decision makers, or agents. The goal is typically to determine whether decision makers will come to some form of equilibrium, the most common of which is the Nash equilibrium. Informally, a set of strategies constitutes a Nash equilibrium if no agent benefits by unilaterally deviating from the current action, while the other agents stay put.
This notion of equilibrium has found countless applications, among others to energy systems [1], transmission networks [2], commodity markets [3], traffic flow [4], and mechanism design [5]. While the original definition of Nash equilibrium does not account for the cost incurred by agents when moving to a different action, in practical situations decision makers often incur a physical, psychological, or monetary cost for such a deviation. This is the case, for example, when relocating to a new neighbourhood [6], or when switching financial strategy in the stock market [7]. When the decision makers are humans, the psychological resistance to change has been well documented and studied at the professional and organizational level [8], at the individual and private level [9], and at the customer level [10]. To take such phenomena into account, we introduce the novel concept of inertial Nash equilibrium. Specifically, we consider a setup where a large number of agents choose among $n$ common actions. Agents selecting a given action receive a utility that depends only on the agents' distribution over the action space, in the same spirit as population games [11]. In this context, a Nash equilibrium consists of an agent distribution over the action space for which every utilized action yields maximum utility. The same concept was proposed in the seminal work of Wardrop for a route-choice game in road traffic networks [4]. We extend this framework and model the cost incurred by any agent when moving from action $i$ to action $j$ with the non-negative coefficients $c_{ij}$.

(This work was supported by the European Commission project DYMASOS (FP7-ICT 611281), and by the SNSF Grant #P2EZP2-181618.)
We define an inertial Nash equilibrium as a distribution over the action space where no agent has any incentive to unilaterally change action, where the quality of an alternative action is measured by its net utility, i.e., the corresponding utility minus the cost of the action change. We show that introducing such costs leads to a larger set of equilibria that is in general not convex, even if the set of Nash equilibria without switching costs is. We argue that classical algorithms to compute a Nash equilibrium are not suitable for computing inertial Nash equilibria, because i) they may not terminate even if already at an inertial Nash equilibrium, and ii) their execution is not compatible with the agents' rationality assumption, as agents might be required to perform a detrimental move. To overcome these issues, we propose an algorithm based on better-response dynamics, where agents switch action only if it is to their advantage when factoring in the cost of such a change.

Contributions. Our main contributions are as follows.
i) We introduce the notion of inertial Nash equilibrium (Definition 2) and position it in the context of the existing literature, notably in relation to population games [11] and more specifically migration equilibria [12].
ii) We show that the set of inertial Nash equilibria can be equivalently characterized through a variational inequality (Theorem 1), and we prove a strong negative result: the operator that arises in the resulting variational inequality is non-monotone in all the meaningful instances of the inertial Nash equilibrium problem (Theorem 2). This implies that existing algorithms for computing equilibria based on the solution of variational inequalities are in general not suitable for computing an inertial Nash equilibrium.
iii) We propose and analyse a novel algorithm and prove its convergence to an inertial Nash equilibrium under weak assumptions (Theorem 3).

Organization.
In Section II we introduce the notion of inertial Nash equilibrium, and show its non-uniqueness as well as the non-convexity of the equilibrium set. A comparison with related works is presented in Section II-C. In Section III we reformulate the inertial Nash equilibrium problem as a variational inequality, study the monotonicity properties of the corresponding operator (more precisely, the lack thereof), and present the issues associated with the use of existing algorithms. In Section IV we propose a modified best-response dynamics that provably converges to an inertial Nash equilibrium. Extensions of the model are presented in Section V. In Section VI we validate our model with a numerical study of area coverage for on-demand ride-hailing in Hong Kong. Appendix A provides background material, while all the proofs are reported in Appendix B.

Notation. The space of $n$-dimensional real vectors is denoted by $\mathbb{R}^n$, while $\mathbb{R}^n_{\ge 0}$ is the space of non-negative $n$-dimensional real vectors and $\mathbb{R}^n_{>0}$ is the space of strictly positive $n$-dimensional real vectors. The symbol $\mathbf{1}_n$ indicates the $n$-dimensional vector of unit entries, whereas $\mathbf{0}_n$ is the $n$-dimensional vector of zero entries. If $x, y \in \mathbb{R}^n$, the notation $x \ge y$ indicates that $x_j \ge y_j$ for all $j \in \{1, \dots, n\}$. The vector $e_i$ denotes the $i$-th vector of the canonical basis. Given $A \in \mathbb{R}^{n \times n}$, $A \succ 0$ ($\succeq 0$) if and only if $x^\top A x = \frac{1}{2} x^\top (A + A^\top) x > 0$ ($\ge 0$) for all $x \neq \mathbf{0}_n$. $\mathrm{blkdiag}(A_1, \dots, A_M)$ is the block-diagonal matrix with blocks $A_1, \dots, A_M$. $\|A\|$ is the induced $2$-norm of $A$. Given $g(x) : \mathbb{R}^n \to \mathbb{R}^m$, we define $\nabla_x g(x) \in \mathbb{R}^{n \times m}$ with $[\nabla_x g(x)]_{i,j} := \partial g_j(x) / \partial x_i$. If $n = m = 1$, we use $g'(x)$ to denote the derivative of $g$ at the point $x$. $I_n$ denotes the $n \times n$ identity matrix. $\mathrm{Proj}_{\mathcal{X}}[x]$ is the Euclidean projection of the vector $x$ onto a closed and convex set $\mathcal{X}$.

II. INERTIAL NASH EQUILIBRIUM: DEFINITION AND EXAMPLES

A. Definition of Inertial Equilibrium

We consider a large number of competing agents with a finite set of common actions $\{1, \dots, n\}$. For selecting action $i \in \{1, \dots, n\}$, an agent receives a utility $u_i(x)$, where $x = [x_1, \dots, x_n]$ and $x_i$ denotes the fraction of agents selecting action $i$. Observe that, with the introduction of the utility functions $u_i : \mathbb{R}^n_{\ge 0} \to \mathbb{R}$, we are implicitly assuming that the utility received by playing action $i$ depends only on the distribution of the agents, and not on which agent selected which action, a modelling assumption typically employed in population games [11]. Within this framework, a Nash equilibrium is a distribution over the action space where no agent has any incentive to deviate to a different action. This requirement can be formalized by introducing the unit simplex in dimension $n$, denoted by $\mathcal{S}$, and its relative interior $\mathcal{S}^+$:
$$\mathcal{S} := \{x \in \mathbb{R}^n \text{ s.t. } x \ge 0,\ \mathbf{1}_n^\top x = 1\}, \qquad \mathcal{S}^+ := \{x \in \mathbb{R}^n \text{ s.t. } x > 0,\ \mathbf{1}_n^\top x = 1\}.$$

(Footnote 1: The formulation with unitary mass and the corresponding results seamlessly generalise to agents of combined mass $\gamma > 0$.)

Definition 1 (Nash equilibrium, [4]). Given $n$ utilities $\{u_i\}_{i=1}^n$ with $u_i : \mathbb{R}^n_{\ge 0} \to \mathbb{R}$, the vector $\bar{x} \in \mathcal{S}$ is a Nash equilibrium if
$$\bar{x}_i > 0 \implies u_i(\bar{x}) \ge u_j(\bar{x}), \quad \forall\, i, j \in \{1, \dots, n\}. \tag{1}$$

Despite being widely used in applications, Definition 1 does not account for the cost associated with an action switch. We extend the previous model by introducing the non-negative coefficients $c_{ij}$ to represent the cost experienced by any agent when moving from action $i$ to $j$. We then define an inertial Nash equilibrium as a distribution over the action space where no agent can benefit by moving to a different action, while taking into account the cost of such a change.

Definition 2 (Inertial Nash equilibrium).
Given $n$ utilities $\{u_i\}_{i=1}^n$, $u_i : \mathbb{R}^n_{\ge 0} \to \mathbb{R}$, and $n^2$ non-negative switching costs $\{c_{ij}\}_{i,j=1}^n$, the vector $\bar{x} \in \mathcal{S}$ is an inertial Nash equilibrium if
$$\bar{x}_i > 0 \implies u_i(\bar{x}) \ge u_j(\bar{x}) - c_{ij}, \quad \forall\, i, j \in \{1, \dots, n\}. \tag{2}$$

In the remainder of this manuscript we focus on problems where there is no cost for staying put, as formalized next.

Standing assumption. The switching costs satisfy $c_{ii} = 0$ for all $i \in \{1, \dots, n\}$.

Observe that conditions (1) and (2) do not impose any constraint on actions that are not currently selected by any agent (i.e., those with $\bar{x}_i = 0$). In other words, the utility of such an action can be arbitrarily low, and the configuration $\bar{x}$ can still be an equilibrium. Despite being a natural extension of the traditional notions of equilibrium in game theory, to the best of our knowledge, Definition 2 is novel. Its relevance stems from the observation that the coefficients $c_{ij}$ can model different and common phenomena, such as:
- the tendency of agents to adhere to their habits, or their reluctance to try something different;
- actual costs or fees that agents incur for switching action;
- the lack of accurate information about other options.

In the following, we provide two examples of problems that can be captured within this framework.

On-demand ride-hailing. Ride-hailing systems are platforms that allow customers to travel from a given origin to a desired destination, typically within the same city. Examples include taxi companies as well as platforms such as Uber, Lyft or Didi. In our framework, the drivers correspond to agents and geographical locations to available actions. Each utility describes the profitability of a given location, which depends on the arrival rate of customers in that location, and on the fraction of vehicles available in that same location. The cost (fuel and time) that a driver incurs while moving between two different physical locations is captured by $c_{ij}$.
Such a model can predict how drivers distribute themselves over the city.

Task assignment in a server network. We are given a finite number of geographically dispersed servers, represented as nodes connected through a network. Each server corresponds to an action $i \in \{1, \dots, n\}$. A large number of agents has a list of jobs that originate in various nodes of the network, and each agent wishes to execute its list as swiftly as possible. The speed at which each server can process a job depends on the load on the server and is captured by $u_i(x_i)$. Moving a job between servers $i$ and $j$ requires an amount of time and resources captured by $c_{ij}$. This model can predict how agents distribute their jobs over the set of servers.

We note that the set of inertial Nash equilibria contains the set of Nash equilibria, due to the non-negativity of $c_{ij}$.

Lemma 1. Every Nash equilibrium is an inertial Nash equilibrium.

The proof follows from Definitions 1 and 2, since condition (1) implies condition (2), as $c_{ij} \ge 0$ for all $i, j \in \{1, \dots, n\}$. In the following we refer to an (inertial) Nash equilibrium simply as an (inertial) equilibrium.

B. Non-uniqueness and non-convexity of the equilibrium set

The following example shows that the set of inertial equilibria is in general neither convex nor a singleton. This poses significant algorithmic challenges, as discussed in Section III.

Example 1. Let $n = 3$, and consider utilities and switching costs of the form
$$u_1(x) = 1.2 - x_1, \quad u_2(x) = 1.2 - x_2, \quad u_3(x) = 1 - x_3, \qquad C = \begin{bmatrix} 0 & 0.2 & 0.3 \\ 1 & 0 & 0.8 \\ 0.1 & 1.2 & 0 \end{bmatrix},$$
where the entry $(i, j)$ of $C$ equals $c_{ij}$. Note that $x_3 = 1 - x_1 - x_2$. The equilibrium conditions (2) then become
$$\begin{aligned}
x_1 > 0 &\Rightarrow x_2 \ge x_1 - 0.2 && \text{(3a)} \\
x_1 > 0 &\Rightarrow x_2 \le -2 x_1 + 1.5 && \text{(3b)} \\
x_2 > 0 &\Rightarrow x_2 \le x_1 + 1 && \text{(3c)} \\
x_2 > 0 &\Rightarrow x_2 \le -0.5\, x_1 + 1 && \text{(3d)} \\
x_3 > 0 &\Rightarrow x_2 \ge -2 x_1 + 1.1 && \text{(3e)} \\
x_3 > 0 &\Rightarrow 2 x_2 \ge -x_1, && \text{(3f)}
\end{aligned}$$
where inequalities (3c), (3d), (3f) are already implied by $x \in \mathcal{S}$. We color the remaining three inequalities similarly to Figure 1, which reports the solution to (3) (i.e., the inertial equilibrium set) in gray.

Fig. 1: The shaded region, including the thick red, yellow, green, and black lines, represents the inertial Nash equilibrium set for Example 1 projected on the plane $(x_1, x_2)$. The component $x_3$ can be reconstructed from $x_3 = 1 - x_1 - x_2$. The dashed line represents the simplex boundary, while the yellow, green and red lines describe the inequalities in (3). The blue point is the unique Nash equilibrium $\bar{x} = [0.4, 0.4, 0.2]$, which satisfies condition (1).

We note that the inertial equilibrium set is not a singleton. The lack of uniqueness is due to the positivity of the coefficients $c_{ij}$. Indeed, if $c_{ij} = 0$ for all $i, j$, then the inertial equilibrium set coincides with the equilibrium set of Definition 1, which is a singleton, marked in blue in Figure 1. Moreover, the inertial equilibrium set is not convex. This can be seen from the line joining the point $(0.1, 0.9)$ to $(0, 1)$ in Figure 1. The points on this segment belong to the inertial equilibrium set even though they do not satisfy $x_2 \ge -2 x_1 + 1.1$. This is because (3e) is enforced only when $x_3 > 0$, whereas $x_3 = 0$ on the considered segment. The observed non-convexity of the solution set is, in a sense, structural. To see this, note that, by Definition 2, a point $x \in \mathcal{S}^+$ is an inertial equilibrium if and only if it lies at the intersection of inequality constraints of the form $u_j(x) - c_{ij} - u_i(x) \le 0$; these might be non-convex, even if we restrict attention to convex or concave utility functions.
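The non-convexity can be double-checked numerically. In the sketch below (Python with NumPy), the two endpoints $a$ and $b$ are illustrative choices and an assumption of the sketch: both satisfy condition (2), while their midpoint has $x_3 > 0$ and violates (3e), hence is not an inertial equilibrium.

```python
import numpy as np

# Utilities and switching costs of Example 1.
u = lambda x: np.array([1.2 - x[0], 1.2 - x[1], 1.0 - x[2]])
C = np.array([[0.0, 0.2, 0.3],
              [1.0, 0.0, 0.8],
              [0.1, 1.2, 0.0]])

def is_inertial(x, tol=1e-9):
    # Condition (2): every used action i must satisfy
    # u_i(x) >= u_j(x) - c_ij for all j.
    vals = u(x)
    return all(x[i] <= tol or np.all(vals - C[i] <= vals[i] + tol)
               for i in range(3))

a = np.array([0.02, 0.98, 0.0])   # on the segment joining (0.1, 0.9) to (0, 1)
b = np.array([0.45, 0.25, 0.30])  # a point of the shaded region
m = (a + b) / 2                   # midpoint: x_3 = 0.15 > 0, violates (3e)

print(is_inertial(a), is_inertial(b), is_inertial(m))  # True True False
```

Since $a$ and $b$ are inertial equilibria while their midpoint $m$ is not, the set cannot be convex.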
C. Related Work

The notion of inertial equilibrium is, to the best of our knowledge, novel, due to the presence of the switching costs $c_{ij}$. A related line of work comes from population games [13]. Here the focal point is the analysis and design of (continuous-time) agent dynamics that achieve an equilibrium in the sense of Definition 1. A particular class of dynamics is imitation dynamics. These are reminiscent of the discrete-time Algorithm 2 below, as agents move to more attractive actions. Different works provide local [11], [14] and global [15], [16] convergence guarantees. Rather than delving into the vast literature on population games, we observe that in all of these works there is no switching cost, i.e., $c_{ij} = 0$. Thus, the population-games literature studies the problem of finding an equilibrium in the sense of Definition 1, and not an equilibrium in the sense of Definition 2, which is the focus here. Finally, we note that [11] and references therein provide convergence results to an equilibrium set, whereas we provide convergence to a point in the inertial equilibrium set.

A more closely related equilibrium concept was proposed in the study of migration models in the seminal works [12], [17], [18] by Nagurney. These works introduce the notion of migration equilibrium in a way that resembles Definition 2, but with a number of important differences. First, the problem formulation is different. In the migration equilibrium problem we are given a fixed initial distribution $x^0 \in \mathcal{S}$, with $x^0_j$ representing the fraction of agents residing at a physical location $j$. These agents receive utility $u_j(x^0)$. The initial distribution $x^0$ is transformed into the final distribution $x^1 \in \mathcal{S}$, which is a function of the migrations $(f_{ij})_{i,j=1}^n$ (the decision variables). Each migration comes with a migration cost $c_{ij}(f_{ij})$, which is a function of the number $f_{ij}$ of agents migrating.
A migration equilibrium consists of a set of migrations $(f_{ij})_{i,j=1}^n$ such that, considering the fixed initial utilities $u(x^0)$, the migration costs $c_{ij}(f_{ij})$ and the final utilities $u(x^1)$, no other set of migrations is more convenient. Second, while the better-response algorithm we will introduce in Section IV can be interpreted as the natural dynamics of the agents seeking an equilibrium, this is not the case for the algorithms proposed to find a migration equilibrium, which are instead VI algorithms to be carried out offline.

III. VARIATIONAL INEQUALITY REFORMULATION

In this section we first recall that the set of equilibria defined by (1) can be described as the solution of a certain variational inequality. We then show that a similar result holds for the inertial equilibrium set of (2). While the former equivalence is known, the latter connection is novel and requires the careful definition of the variational inequality operator. The interest in connecting the inertial equilibrium problem with the theory of variational inequalities stems from the possibility of inheriting readily available results, such as existence of the solution, properties of the solution set, and algorithmic convergence. Basic properties and results from the theory of variational inequalities used in this manuscript are summarized in Appendix A.

Definition 3 (Variational inequality). Consider a set $\mathcal{X} \subseteq \mathbb{R}^n$ and an operator $F : \mathcal{X} \to \mathbb{R}^n$. A point $\bar{x} \in \mathcal{X}$ is a solution of the variational inequality $\mathrm{VI}(\mathcal{X}, F)$ if
$$F(\bar{x})^\top (x - \bar{x}) \ge 0, \quad \forall x \in \mathcal{X}.$$

The variational inequality problem was first introduced in infinite-dimensional spaces in [19], while the finite-dimensional problem in Definition 3 was identified and studied for the first time in [20]. The monograph [21] includes a wide range of results on VIs, amongst which their connection to Nash equilibria.

Proposition 1 (Equilibria as VI solutions, [13, Thm 2.3.2]).
A point $\bar{x} \in \mathcal{S}$ is an equilibrium if and only if it is a solution of $\mathrm{VI}(\mathcal{S}, -u)$, where $u(x) := [u_i(x)]_{i=1}^n$.

The following theorem shows that inertial equilibria can also be described by suitable variational inequalities.

Theorem 1 (Inertial equilibria as VI solutions). A point $\bar{x} \in \mathcal{S}$ is an inertial equilibrium if and only if it is a solution of $\mathrm{VI}(\mathcal{S}, F)$, where $F(x) := [F_i(x)]_{i=1}^n$,
$$F_i(x) := \max_{j \in \{1, \dots, n\}} \left( u_j(x) - u_i(x) - c_{ij} \right). \tag{4}$$

If the utilities are continuous, the existence of an inertial equilibrium is guaranteed.

A. Lack of monotonicity

If the operator $F$ in $\mathrm{VI}(\mathcal{S}, F)$ is monotone (see Definition 6 in Appendix A), an inertial equilibrium can be computed efficiently using one of the many algorithms available in the variational inequality literature, see [21, Chapter 12]. On the contrary, if this is not the case, the problem is known to be intractable in general. Since the inertial equilibrium set of Figure 1 is not convex, the corresponding variational inequality operator $F$ cannot be monotone (see Proposition 7 in Appendix A). The question is whether this observation extends to more general settings. In the following we provide a strong negative result showing that the variational inequality operator is non-monotone in many meaningful instances of the inertial equilibrium problem.

Theorem 2 ($F$ is not monotone). Assume that for all $i \in \{1, \dots, n\}$ the function $u_i$ is Lipschitz and that $\nabla_{x_i} u_i(x) < 0$ for all $x \in \mathcal{S}$. If there exists a point $\hat{x} \in \mathcal{S}$ which is not an inertial equilibrium, then $F$ is not monotone in $\mathcal{S}$.

The theorem certifies that either every point of the simplex is an equilibrium, or $F$ is not monotone and consequently the variational inequality problem is hard. The only technical assumption is that $\nabla_{x_i} u_i(x) < 0$.
We observe that this is the situation in many applications; indeed the condition implies that $u_i(x)$ decreases as the number of agents on action $i$ increases, as commonly assumed in congestion problems. Moreover, the condition can be further weakened: for the proof it suffices that $\nabla_{x_{i^\star}} u_{i^\star}(x^\star) < 0$ for a specific $x^\star$ and $i^\star$ defined in Appendix B.

We conclude this section by pointing out that Example 1 satisfies the conditions of Theorem 2. The lack of monotonicity of the corresponding operator $F$ is confirmed by the fact that $\nabla_x F(x)$ is not positive semidefinite for all $x \in \mathcal{S}$ (a condition equivalent to monotonicity, see Proposition 6 in Appendix A). Indeed, there are points where $\nabla_x F(x) + \nabla_x F(x)^\top$ is indefinite, e.g., $\tilde{x} = [0.2, 0.2, 0.6]$, where
$$\nabla_x F(\tilde{x}) + \nabla_x F(\tilde{x})^\top = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 2 \end{bmatrix}.$$

B. Three drawbacks of existing algorithms

Lemma 1 ensures that any equilibrium is an inertial equilibrium. Thus, one might be tempted to use an algorithm for computing an equilibrium to determine an inertial equilibrium. Unfortunately, a number of difficulties make this approach impractical. In this section we describe one such algorithm and highlight its drawbacks in the computation of an inertial equilibrium; these drawbacks generalise to other algorithms that converge to an equilibrium. We consider the projection algorithm [21, Alg. 12.1.1] for the solution of $\mathrm{VI}(\mathcal{S}, -u)$, where $x(k)$ denotes iterate $k$ of the algorithm. Note that the projection step necessitates the presence of a central operator.

Algorithm 1 Projection algorithm
Initialization: $\rho > 0$, $k = 0$, $x(0) \in \mathcal{S}$
Iterate:
  $x(k+1) = \mathrm{Proj}_{\mathcal{S}}[\, x(k) + \rho\, u(x(k)) \,]$
  $k \leftarrow k + 1$

Proposition 2. If $u_i$ is $L$-Lipschitz for all $i$, $\rho \le 2/L$, and if there exists a concave function $\theta : \mathbb{R}^n \to \mathbb{R}$ such that $\nabla_x \theta(x) = u(x)$ for all $x \in \mathcal{S}$, then Algorithm 1 converges to an equilibrium, and thus to an inertial equilibrium.
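The non-monotonicity of $F$ and the behaviour of Algorithm 1 can both be checked numerically on Example 1. The sketch below (Python with NumPy) evaluates the operator $F$ of (4), approximates $\nabla_x F + \nabla_x F^\top$ by central finite differences (the step size $h$ is an assumption of the sketch), and runs Algorithm 1; the sort-based simplex projection routine is likewise an assumption, as the paper does not specify how $\mathrm{Proj}_{\mathcal{S}}$ is computed.

```python
import numpy as np

# Example 1 data.
u = lambda x: np.array([1.2 - x[0], 1.2 - x[1], 1.0 - x[2]])
C = np.array([[0.0, 0.2, 0.3],
              [1.0, 0.0, 0.8],
              [0.1, 1.2, 0.0]])

def F(x):
    # Operator of (4): F_i(x) = max_j ( u_j(x) - u_i(x) - c_ij ).
    vals = u(x)
    return np.array([np.max(vals - vals[i] - C[i]) for i in range(len(x))])

def sym_jac(x, h=1e-6):
    # Central-difference approximation of grad_x F(x) + grad_x F(x)^T.
    n = len(x)
    J = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = h
        J[k] = (F(x + e) - F(x - e)) / (2 * h)  # row k holds dF/dx_k
    return J + J.T

print(F(np.array([0.4, 0.4, 0.2])))  # [0. 0. 0.] at the Nash equilibrium
# Eigenvalues of the symmetrized Jacobian at x~ = [0.2, 0.2, 0.6] are
# 1 - sqrt(2), 0, 1 + sqrt(2): both signs, so F is not monotone.
print(np.linalg.eigvalsh(sym_jac(np.array([0.2, 0.2, 0.6]))))

def proj_simplex(y):
    # Euclidean projection onto the unit simplex (sort-and-threshold method).
    s = np.sort(y)[::-1]
    css = np.cumsum(s)
    k = np.nonzero(s * np.arange(1, len(y) + 1) > css - 1)[0][-1]
    return np.maximum(y - (css[k] - 1) / (k + 1), 0.0)

def projection_algorithm(x0, rho=1.0, iters=20):
    # Algorithm 1: x(k+1) = Proj_S[ x(k) + rho * u(x(k)) ].
    x = x0.copy()
    for _ in range(iters):
        x = proj_simplex(x + rho * u(x))
    return x

# With rho = 1 the iteration reaches the unique equilibrium [0.4, 0.4, 0.2],
# moving mass even when started at an inertial equilibrium such as [0.4, 0.3, 0.3].
print(projection_algorithm(np.array([0.4, 0.2, 0.4])))
print(projection_algorithm(np.array([0.4, 0.3, 0.3])))
```

The symmetrized Jacobian matches the indefinite matrix reported above, confirming Theorem 2 on this instance.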
(Footnote 2: For this proposition to hold, we have to assume the existence of a concave function $\theta$ whose gradient matches $u(x)$. One such case is when the utility function $u_i$ depends only on the number of agents on action $i$, i.e. $u_i(x) = u_i(x_i)$ for all $i$, and is decreasing. This case covers a wide range of applications. If no $\theta$ whose gradient matches $u(x)$ exists, but $-u$ is monotone, one can resort to a different algorithm such as the extra-gradient algorithm [21, Thm. 12.1.11]. Finally, observe that if $-u$ is strongly monotone (see [21, Def. 2.3.1]), the projection algorithm converges without requiring the existence of $\theta(x)$, see [21, Alg. 12.1.1].)

Three fundamental shortcomings. In the following we analyse the behaviour of Algorithm 1 on Example 1, and use it to highlight three fundamental shortcomings of this approach. We begin by observing that $\bar{x} = [\bar{x}_1, \bar{x}_2, \bar{x}_3] = [0.4, 0.4, 0.2]$ is an equilibrium, as it solves $\mathrm{VI}(\mathcal{S}, -u)$: since $u(\bar{x}) = [0.8, 0.8, 0.8]$ and $\mathbf{1}_3^\top x = \mathbf{1}_3^\top \bar{x} = 1$, for all $x \in \mathcal{S}$
$$-u(\bar{x})^\top (x - \bar{x}) = -0.8\, \mathbf{1}_3^\top (x - \bar{x}) = 0.$$
Additionally, $[0.4, 0.4, 0.2]$ is the unique solution of $\mathrm{VI}(\mathcal{S}, -u)$ and thus the unique equilibrium (see [21, Thm. 2.3.3]). This is consistent with Lemma 1 and Figure 1, where the equilibrium point $\bar{x}$ belongs to the inertial equilibrium set. Thanks to Proposition 2, Algorithm 1 converges to $\bar{x}$ ($L = 1$ for the utilities of Example 1, so we have to select $\rho \le 2$). With the choice $\rho = 1$, it is immediate to verify that Algorithm 1 converges in one iteration for any initial condition $x(0)$. In the following we consider two cases: i) the case in which $x(0)$ is neither an inertial equilibrium nor an equilibrium; ii) the case in which $x(0)$ is an inertial equilibrium, but not an equilibrium.

Case i): consider $x(0) = [0.4, 0.2, 0.4]$.
The point $x(0)$ is not an inertial equilibrium (and thus not an equilibrium), because $x_3(0) > 0$ and $u_3(x(0)) = 1 - 0.4 = 0.6 < 0.7 = 0.8 - 0.1 = u_1(x(0)) - c_{31}$. The first iteration of Algorithm 1 amounts to a mass of $0.2$ being moved from action $i = 3$ to action $i = 2$. Nevertheless, we observe that agents selecting action $i = 3$ are not interested in moving to action $i = 2$: indeed $u_3(x(0)) = 0.6 \ge -0.2 = u_2(x(0)) - c_{32}$, so the switch from $i = 3$ to $i = 2$ is detrimental for the agents performing it.

Case ii): consider $x(0) = [0.4, 0.3, 0.3]$, and note that $x(0)$ is already an inertial equilibrium. Nonetheless, the first iteration of Algorithm 1 forces a mass of $0.1$ to move from action $3$ to $2$.

The drawbacks of Algorithm 1 are summarized next:
i) Agents are forced to switch action even when such a switch is detrimental to their well-being.
ii) Agents are forced to switch action even if already at an inertial equilibrium.
iii) The projection step necessitates the presence of a central operator. Such an operator requires information not only on the utilities $u_i(x(k))$ for all $i$, but also on $x(k)$.

In the next section we overcome these issues and present a natural dynamics that i) provably converges to an inertial equilibrium, ii) respects the agents' strategic nature, and iii) requires limited coordination.

IV. A BETTER-RESPONSE ALGORITHM

We begin by introducing the definition of the envy set.

Definition 4 (Envy set). Given $x \in \mathcal{S}$, for each $i$ such that $x_i > 0$, we define the envy set of $i$ as
$$\mathcal{E}^{\mathrm{out}}_i(x) := \{ j \in \{1, \dots, n\} \text{ s.t. } u_i(x) < u_j(x) - c_{ij} \},$$
whereas for $i$ such that $x_i = 0$, we define $\mathcal{E}^{\mathrm{out}}_i(x) = \emptyset$.

Informally, the envy set $\mathcal{E}^{\mathrm{out}}_i(x)$ contains all the actions $j$ to which agents currently selecting action $i$ would rather move. The following fact immediately follows from Definition 4 and Definition 2 of inertial equilibrium.

Proposition 3.
A point $\bar{x} \in \mathcal{S}$ is an inertial equilibrium if and only if $\mathcal{E}^{\mathrm{out}}_i(\bar{x}) = \emptyset$ for all $i \in \{1, \dots, n\}$.

The proposed Algorithm 2 involves a single, intuitive step. At iteration $k$, let $x(k) \in \mathcal{S}$ denote the distribution of the agents on the resources. For every action $i$, a mass $x_{i \to j}(k) \in [0, x_i(k)]$ is moved from action $i$ to some other action $j \in \mathcal{E}^{\mathrm{out}}_i(x(k))$; that is, the movement takes place only if the alternative action $j$ is attractive for agents currently selecting action $i$. This simple dynamics is described in Algorithm 2, where we write $u_i(k) = u_i(x(k))$ and $\mathcal{E}^{\mathrm{out}}_i(k) = \mathcal{E}^{\mathrm{out}}_i(x(k))$ for brevity.

Algorithm 2 Better-response algorithm
Initialization: $k = 0$, $x(0) \in \mathcal{S}$
Iterate:
  $\Delta x(k) \leftarrow 0$
  repeat for all $i$, $j \in \mathcal{E}^{\mathrm{out}}_i(k)$:
    choose $x_{i \to j}(k) \in [0, x_i(k)]$
    $\Delta x_i(k) \leftarrow \Delta x_i(k) - x_{i \to j}(k)$
    $\Delta x_j(k) \leftarrow \Delta x_j(k) + x_{i \to j}(k)$
  end repeat
  $x(k+1) \leftarrow x(k) + \Delta x(k)$
  $k \leftarrow k + 1$

The agents' dynamics presented in Algorithm 2 is fully specified once we define the mass $x_{i \to j}(k)$ moving from action $i$ to $j \in \mathcal{E}^{\mathrm{out}}_i(k)$ as a function of $x_i(k)$ and $x_j(k)$. At this stage we prefer not to give a particular expression for $x_{i \to j}(k)$, as the convergence of Algorithm 2 is guaranteed under very weak conditions and different choices of $x_{i \to j}(k)$. One possible modelling assumption sees agents moving from a less attractive action $i$ to a more favourable action $j \in \mathcal{E}^{\mathrm{out}}_i(k)$ independently of the value of the utility $u_j(k)$; for instance, this can be achieved by setting $x_{i \to j}(k) = \beta x_i(k)$ with $\beta > 0$. A different modelling assumption entails agents being responsive to the level of the utility $u_j(k)$ over all $j \in \mathcal{E}^{\mathrm{out}}_i(k)$, and thus redistributing themselves based on the perceived gain. Both these cases (and many more) are covered by Theorem 3.
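A minimal implementation of Algorithm 2 on Example 1 can be sketched as follows (Python with NumPy). The rule $x_{i \to j}(k) = \beta\, x_i(k) / |\mathcal{E}^{\mathrm{out}}_i(k)|$ with $\beta = 0.05$ is one admissible modelling choice and an assumption of this sketch, with the step kept small enough to fall under the convergence conditions of Theorem 3.

```python
import numpy as np

# Example 1 data.
u = lambda x: np.array([1.2 - x[0], 1.2 - x[1], 1.0 - x[2]])
C = np.array([[0.0, 0.2, 0.3],
              [1.0, 0.0, 0.8],
              [0.1, 1.2, 0.0]])

def envy_set(i, x, vals, tol=1e-9):
    # Definition 4: actions j that agents on i strictly prefer net of c_ij.
    if x[i] <= tol:
        return []
    return [j for j in range(len(x)) if vals[i] < vals[j] - C[i][j] - tol]

def better_response(x0, beta=0.05, iters=5000, tol=1e-9):
    x = x0.copy()
    for _ in range(iters):
        vals = u(x)
        dx = np.zeros_like(x)
        moved = False
        for i in range(len(x)):
            E = envy_set(i, x, vals, tol)
            if E:
                moved = True
                for j in E:  # split beta * x_i equally over the envied actions
                    dx[i] -= beta * x[i] / len(E)
                    dx[j] += beta * x[i] / len(E)
        if not moved:  # empty envy sets: inertial equilibrium (Proposition 3)
            break
        x = x + dx
    return x

x = better_response(np.array([0.4, 0.2, 0.4]))
print(all(not envy_set(i, x, u(x)) for i in range(3)))  # True: equilibrium reached
print(better_response(np.array([0.4, 0.3, 0.3])))       # unchanged: already inertial
```

Started from a non-equilibrium point, the iterates settle where every envy set is empty; started at an inertial equilibrium, no agent moves, in contrast with Algorithm 1.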
We observe that Algorithm 2 does not present any of the issues encountered with the use of Algorithm 1. First, agents switch action only if the switch is convenient. Second, no agent moves if the current allocation is an inertial equilibrium. Third, there is no need for a central operator, and each agent requires information only about the actions' utilities $u(x(k))$. As a consequence, Algorithm 2 can be interpreted as the natural dynamics of agents switching to a more favourable action whenever one is available. Finally, agents are not limited to moving to the best alternative action (as in best-response dynamics), but can instead choose any action providing a better net utility (hence the term better-response dynamics).

Theorem 3 (Convergence of Algorithm 2). Assume that:
- for each $i \in \{1, \dots, n\}$ the utility $u_i$ depends only on $x_i$, and $u_i$ is non-increasing and $L$-Lipschitz;
- there exists $c_{\min} > 0$ such that $c_{ij} \ge c_{\min}$ for all $i \neq j$ with $i, j \in \{1, \dots, n\}$;
- there exist $0 < \tau \le 1$ and $\varepsilon > 0$ such that at each iteration $k \in \mathbb{N}$, $x_{i \to j}(k) \ge 0$ for all $i \in \{1, \dots, n\}$, $j \in \mathcal{E}^{\mathrm{out}}_i(x(k))$, and
$$\tau x_i(k) \le \sum_{j \in \mathcal{E}^{\mathrm{out}}_i(k)} x_{i \to j}(k) \le x_i(k), \quad i \in \{1, \dots, n\}, \tag{5a}$$
$$\sum_{i \,:\, j \in \mathcal{E}^{\mathrm{out}}_i(k)} x_{i \to j}(k) \le \frac{c_{\min}}{L} - \varepsilon, \quad j \in \{1, \dots, n\}. \tag{5b}$$
Then $x(k)$ in Algorithm 2 converges to an inertial equilibrium $\bar{x}$. If additionally $\bar{x} \in \mathcal{S}^+$, then the algorithm terminates in a finite number of steps.

The first assumption is typical of many congestion-like problems. The second assumption is technical, and requires the switching costs between different actions to be strictly positive. With respect to the third assumption, the requirement on the right-hand side of (5a), together with the condition $x_{i \to j}(k) \ge 0$ for all $i \in \{1, \dots, n\}$, $j \in \mathcal{E}^{\mathrm{out}}_i(x(k))$, is needed to ensure that $x(k)$ remains in the simplex.
Thus, the only non-trivial constraints imposed on $x_{i \to j}(k)$ are the left-hand side of (5a) and condition (5b); these are discussed in detail in Remark 1 below. Finally, we note that the proof of Theorem 3 does not require the agents to move synchronously. As a consequence, an asynchronous implementation of Algorithm 2 is also guaranteed to converge.

Remark 1 (Tightness of conditions (5a) and (5b)). Condition (5a) is a mild requirement: it merely asks for a minimum proportion of agents to move from their current unfavourable action to a better one. Condition (5b), on the other hand, requires the switching to happen sufficiently slowly. Without this condition, the algorithm may not converge, as shown by the following example. Consider $n = 2$, $u_1(x_1) = 1 - x_1$, $u_2(x_2) = 1 - x_2$, $c_{12} = c_{21} = 0.5$, and note that $c_{\min}/L = 0.5$. Take $\delta > 0$ small enough and initial condition $x_1(0) = 0.75 + \delta/2$, $x_2(0) = 0.25 - \delta/2$. Since $u_1(0) = 0.25 - \delta/2$ and $u_2(0) = 0.75 + \delta/2$, $x(0)$ is not an inertial equilibrium. Assume that, as a consequence, $0.5 + \delta > c_{\min}/L$ units of mass move from action $1$ to action $2$, resulting in $x_1(1) = 0.25 - \delta/2$, $x_2(1) = 0.75 + \delta/2$, and thus $u_1(1) = 0.75 + \delta/2$, $u_2(1) = 0.25 - \delta/2$, so $x(1)$ is not an inertial equilibrium either. A repeated transfer of $0.5 + \delta$ mass from the action which is worse off to the one which is better off results in $x(2k) = x(0)$ and $x(2k+1) = x(1)$. Thus, a slight violation of (5b) breaks the convergence of Algorithm 2.

V. EXTENSIONS

We present three modifications of the inertial equilibrium problem, and highlight how the results can be adapted.

Non-engaging agents. With the current Definition 2 all the agents are forced to engage, i.e. to choose one of the actions in $\{1, \dots, n\}$. Let us now consider an extra action labeled $e$, so that the extended action set is $\{1, \dots, n, e\}$. Set $c_{je} = c_{ej} = 0$ for all $j \in \{1, \dots, n\}$ and let $u_e(x)$ be some constant value representing, for instance, the utility perceived when not participating in the game. Within this setup, an agent that does not engage in the game at time $k = 0$ will revise its decision at every time step $k \ge 1$, and will rejoin whenever more favourable actions appear. For example, in the ride-hailing application presented in Section VI, action $e$ could represent electing to temporarily not work as a driver.

Atomic agents with discrete action set. Instead of a continuum of agents, one could consider a finite number $M$ of atomic agents. Each agent possesses unitary mass and can choose only one of the actions $\{1, \dots, n\}$. The utility $u_j$ is then a function of how agents distribute themselves over the actions. The definition of inertial equilibrium requires that no agent $i \in \{1, \dots, M\}$ has an incentive to switch action, considering the utilities of the alternative actions and the corresponding switching costs. The model with a continuum of agents studied above represents, in a sense, the limiting case obtained as the number of agents $M$ grows. Since the action space is discrete, the reformulation as a VI is not possible. Nonetheless, one can formulate Algorithm 2 by letting an agent switch to an arbitrary action whenever such an action is attractive. Convergence is guaranteed upon substituting the expression $\sum_{j \in \mathcal{E}^{\mathrm{out}}_i(k)} x_{i \to j}(k)$ in (5a) and (5b) with the number of agents that move at the same time.

Multi-class inertial equilibrium. The concept of inertial equilibrium relies on the idea that each agent perceives the same utility $u_j$ and the same switching costs $c_{ij}$. This assumption can be relaxed by introducing different agent classes. Let $A$ be the total number of classes, and let $x^\alpha_i$ be the mass of agents belonging to class $\alpha \in \{1, \dots, A\}$ which choose action $i$. We denote $x_i = \sum_{\alpha=1}^A x^\alpha_i$ and $x^\alpha = \{x^\alpha_i\}_{i=1}^n$.

Definition 5.
Consider utilities $u^\alpha_i : \mathbb{R}^n_{\ge 0} \to \mathbb{R}$, switching costs $c^\alpha_{ij} \ge 0$ and masses $\gamma^\alpha > 0$, with $i, j \in \{1, \dots, n\}$, $\alpha \in \{1, \dots, A\}$. The vector $\bar{x} = [\bar{x}^1, \dots, \bar{x}^A] \in \mathbb{R}^{nA}$ is a multi-class inertial equilibrium if $\bar{x} \ge 0_{nA}$, $\mathbf{1}_n^\top \bar{x}^\alpha = \gamma^\alpha$ for all $\alpha$, and
$$\bar{x}^\alpha_i > 0 \implies u^\alpha_i(\bar{x}^r) \ge u^\alpha_j(\bar{x}^r) - c^\alpha_{ij}, \quad \forall j \in \{1, \dots, n\},$$
for all $i \in \{1, \dots, n\}$ and $\alpha \in \{1, \dots, A\}$, where the vector $\bar{x}^r := \sum_{\alpha=1}^{A} \bar{x}^\alpha$.

Note that even though different classes might perceive different utilities at the same action $i$, each of these utilities is a function of the sole distribution of the agents over the actions, i.e. of the reduced variable $x^r$. This is indeed what couples the different classes together. Upon redefining $\mathcal{S} = \tilde{\mathcal{S}}^1 \times \dots \times \tilde{\mathcal{S}}^A \subset \mathbb{R}^{nA}$ as the Cartesian product of the weighted simplexes $\tilde{\mathcal{S}}^\alpha = \{x^\alpha \in \mathbb{R}^n_{\ge 0},\ \mathbf{1}_n^\top x^\alpha = \gamma^\alpha\}$, one can redefine $F : \mathcal{S} \to \mathbb{R}^{nA}_{\ge 0}$, where $F(x) = [[F^\alpha_j(x)]_{\alpha=1}^A]_{j=1}^n$ and
$$F^\alpha_j(x) = \max_{h \in \{1, \dots, n\}} \left( u^\alpha_h(x^r) - u^\alpha_j(x^r) - c^\alpha_{jh} \right), \quad x^r = \sum_{\alpha=1}^A x^\alpha.$$
Using a straightforward extension of the proof of Theorem 1, one can show that the set of multi-class inertial equilibria coincides with the solution set of VI$(\mathcal{S}, F)$. Theorem 2 about the lack of monotonicity also extends to the multi-class case. Finally, Algorithm 2 can also be modified appropriately to account for the presence of multiple classes, and a convergence result similar to that of Theorem 3 follows.

VI. APPLICATION: AREA COVERAGE FOR TAXI DRIVERS

In this section we apply the theory developed to the problem of area coverage for taxi drivers. Understanding the spatial behavior of taxi drivers has attracted the interest of the transportation community [22], [23], as it allows one to infer information for diverse scopes, including land-use classification [24] and analysis of collective behaviour of a city's population [25].
We focus on the urban area of Hong Kong, as the work [26] provides relevant data for our model. The authors of [26] divide the region of interest into $n = 18$ neighborhoods, which represent the resources in our game. We assume that a taxi driver in neighborhood $i$ enjoys the utility $u_i(x_i)$, depending on the fraction $x_i$ of taxi drivers covering the same neighborhood. We aim at determining an equilibrium distribution of the drivers across the different neighborhoods of the urban area. The problem can be described through the introduction of an undirected graph, where the nodes represent the neighborhoods. We construct an edge from $i$ to $j$ (and from $j$ to $i$) if and only if the two neighborhoods are adjacent. The cost $c_{ij}$ is taken as the fuel cost of a trip from $i$ to $j$ according to [27], with $c_{ij} = c_{ji}$; the fuel cost is set extremely high for non-adjacent neighborhoods, so that movement cannot occur between those. A taxi driver stationing in node $i$ experiences a utility $u_i(x_i)$ describing his revenue minus the costs. This takes the form
$$u_i(x_i) = \alpha_i v_i(x_i) - (1 - v_i(x_i))\beta,$$
where $\alpha_i$ is the average profit per trip starting from location $i$ (ranging from 30 to 140 HK\$ according to [26]), and $\beta = 6.34$ HK\$ is the operational cost of vacant taxi trips, inclusive of fuel costs, rental costs and toll charges associated with the trips. The function $v_i(x_i)$ describes the percentage of the time a taxi is occupied and, according to [28, Eq. (1)], is modeled by
$$v_i(x_i) = 1 - \left(\frac{x_i}{1 + x_i}\right)^{p_i},$$
where $p_i > 1$ is the number of passengers requesting a taxi at node $i$. We select $p_i$ to be proportional to the values in Figure 3 of [26]. Note that $v_i(0) = 1$ and $\lim_{x_i \to \infty} v_i(x_i) = 0$, as one would expect. Moreover, through simple algebraic manipulations, $u_i(x_i)$ can be shown to be decreasing for $x_i \ge 0$ and to have Lipschitz constant $L_i = 4(\alpha_i - \beta) p_i \frac{(p_i - 1)^{p_i - 1}}{(p_i + 1)^{p_i + 1}}$.
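As a small sanity check of the model's qualitative claims, the sketch below evaluates $v_i$ and $u_i$ for illustrative parameter values; the actual $\alpha_i$ and $p_i$ come from the data of [26] and are not reproduced here.

```python
import numpy as np

# Sketch of the per-neighborhood utility model. The parameter values below
# are illustrative stand-ins, not the Hong Kong data of [26]; only the
# functional forms of v_i and u_i come from the text.

beta = 6.34       # operational cost of vacant trips (HK$), from the text
alpha_i = 80.0    # illustrative average profit per trip at node i (HK$)
p_i = 2.0         # illustrative passenger demand at node i (p_i > 1)

def v(x_i):
    """Fraction of time a taxi at node i is occupied, cf. [28, Eq. (1)]."""
    return 1.0 - (x_i / (1.0 + x_i)) ** p_i

def u(x_i):
    """Driver utility at node i: trip revenue minus vacant-trip cost."""
    return alpha_i * v(x_i) - (1.0 - v(x_i)) * beta

# Qualitative properties stated in the text: v(0) = 1, v vanishes as the
# node saturates, and u is decreasing in x_i.
xs = np.linspace(0.0, 50.0, 2001)
assert v(0.0) == 1.0 and v(1e9) < 1e-8
assert np.all(np.diff(u(xs)) < 0)

# A numerical estimate of the Lipschitz constant of u on this grid, which
# is what the step-size bounds of Proposition 2 and Theorem 3 depend on:
L_est = np.abs(np.diff(u(xs)) / np.diff(xs)).max()
```

The monotonicity checks confirm that, for these parameters, the utility has the shape the analysis assumes: crowded neighborhoods are strictly less attractive.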
In our numerical study we compare the projection algorithm (Algorithm 1) with the better-response algorithm proposed in Algorithm 2, with stopping criterion $\|x(k+1) - x(k)\| \le 10^{-6}$ and equal-neighbour redistribution function $x_{i \to j}(k) = \tau x_i(k)$, $j \in \mathcal{E}^{\text{out}}_i(k)$. For Algorithm 1, $\rho$ is chosen slightly smaller than $1/L = 1.7 \cdot 10^{-3}$, as required to achieve convergence by Proposition 2. Similarly, $\tau$ is chosen slightly smaller than $c_{\min}/(L\gamma) = 1.4 \cdot 10^{-4}$, in accordance with the requirement of Theorem 3. Table I (top) shows a comparison in terms of iterations needed to reach convergence by both algorithms. Note that a single iteration of Algorithm 1 is more costly than one of Algorithm 2. Indeed, Algorithm 1 requires the computation of a projection step, while Algorithm 2 requires simple addition and multiplication operations.

[Fig. 2: The equilibrium $\bar{x}$ achieved by Algorithm 2 with initial condition $x(0) = \mathbf{1}_n/\gamma$. The radius of each node is proportional to its utility $u_i(\bar{x}_i)$, while the thickness of edge $(i, j)$ is proportional to the corresponding cost $c_{ij}$.]

We note that the number of iterations required to reach convergence is rather high, due to the small values of $\rho$ and $\tau$ imposed by the theoretical bounds of Proposition 2 and Theorem 3. For this reason we perform another simulation with the values $\rho = \tau = 10^{-2}$, which provide no theoretical guarantees of convergence. Nonetheless, both algorithms converge in 100 different repetitions with random initial conditions. The number of iterations is reported in Table I (bottom) and is considerably smaller than in Table I (top). Moreover, Algorithm 1 outperforms Algorithm 2 in the first case, while the opposite happens in the second case. Finally, Figure 2 shows the steady-state distribution of taxi drivers across the $n$ neighbourhoods of Hong Kong, with initial condition $x(0) = \mathbf{1}_n/\gamma$.
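For context on the relative per-iteration cost, the projection step of Algorithm 1 is a Euclidean projection onto the simplex; a standard $O(n \log n)$ sort-based routine for it (a generic construction, not code from the paper) looks as follows.

```python
import numpy as np

def project_simplex(y):
    """Return argmin_{x >= 0, sum(x) = 1} ||x - y||_2, via the standard
    sort-and-threshold construction."""
    n = y.size
    u = np.sort(y)[::-1]                        # sort in decreasing order
    css = np.cumsum(u)
    # Largest index rho such that u_rho + (1 - css_rho)/(rho+1) > 0:
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)      # shift that renormalizes
    return np.maximum(y + theta, 0.0)

x = project_simplex(np.array([0.9, 0.6, -0.2]))
# x is non-negative and sums to one; points already in the simplex are
# left unchanged.
```

The sort and cumulative sum are what make each iteration of Algorithm 1 heavier than the simple additions and multiplications of Algorithm 2.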
Algorithm | # iterations (mean) | # iterations (st. dev.)
Alg. 1, $\rho = 10^{-3}$ | 26,311 | 3329
Alg. 2, $\tau = 10^{-4}$ | 152,856 | 9130
Alg. 1, $\rho = 10^{-2}$ | 12,672 | 3472
Alg. 2, $\tau = 10^{-2}$ | 2168 | 125

TABLE I: Number of iterations needed to reach convergence with $\rho = 1.5 \cdot 10^{-3}$, $\tau = 10^{-4}$ (top), and $\tau = \rho = 10^{-2}$ (bottom). We report mean and standard deviation for 100 repetitions of the two algorithms, starting from random initial conditions in the simplex.

VII. CONCLUSIONS

We proposed the novel notion of inertial Nash equilibrium to model the cost incurred by agents when switching to an alternative action. While the set of inertial Nash equilibria can be characterized by means of a suitable variational inequality, the resulting operator is often non-monotone. Thus, we proposed a natural dynamics that is distributed and provably converges to an inertial Nash equilibrium. As future research directions, it would be interesting to provide convergence-rate guarantees for Algorithm 2 and, more broadly, to extend the notion of inertial equilibrium beyond the framework of population games.

APPENDIX A
PRELIMINARIES ON VARIATIONAL INEQUALITIES

In the following we present those results on the theory of variational inequalities that are used to characterize the equilibrium concepts introduced in Section II.

Proposition 4 ([21, Prop. 2.3.3]). Let $\mathcal{X} \subset \mathbb{R}^n$ be a compact, convex set and $F : \mathcal{X} \to \mathbb{R}^n$ be continuous. Then VI$(\mathcal{X}, F)$ admits at least one solution.

The next proposition introduces the KKT system of a variational inequality, which is analogous to the KKT system of an optimization program.

Proposition 5 ([21, Prop. 1.3.4]). Assume that the set $\mathcal{X}$ can be described as $\mathcal{X} = \{x \in \mathbb{R}^n \mid g(x) \le 0_m,\ h(x) = 0_p\}$, and that it satisfies Slater's constraint qualification in [29, eq. (5.27)].
Then $\bar{x}$ solves VI$(\mathcal{X}, F)$ if and only if there exist $\bar{\lambda}$ and $\bar{\mu}$ such that $(\bar{x}, \bar{\lambda}, \bar{\mu})$ solves the KKT system
$$F(x) + \nabla_x g(x)\lambda + \nabla_x h(x)\mu = 0_n, \tag{6a}$$
$$0_m \le \lambda \perp g(x) \le 0_m, \tag{6b}$$
$$h(x) = 0_p. \tag{6c}$$

We next recall the notion of monotonicity, which is a sufficient condition for convergence of a plethora of VI algorithms; see [21, Chapter 12].

Definition 6 (Monotonicity). An operator $F : \mathcal{X} \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is monotone if
$$(F(x) - F(y))^\top (x - y) \ge 0 \quad \text{for all } x, y \in \mathcal{X}.$$

Proposition 6 ([30, Prop. 2.1]). Let $\mathcal{X} \subseteq \mathbb{R}^n$ be convex. An operator $F$ is monotone in $\mathcal{X}$ if and only if for every $x \in \mathcal{X}$ each generalized Jacobian $\phi \in \partial F(x)$ is positive semi-definite.

The definition of the generalized Jacobian $\partial F(x)$ can be found in [31, Definition 2.6.1]; we do not report it here because for our scope it suffices to know that if $F$ is differentiable in $x$, then the generalized Jacobian coincides with the Jacobian, i.e., $\partial F(x) = \{\nabla_x F(x)\}$, with positive semi-definiteness interpreted as $(\nabla_x F(x) + \nabla_x F(x)^\top)/2 \succeq 0$.

We conclude this section with a result on the convexity of the VI solution set.

Proposition 7 ([21, Thm. 2.3.5]). Let $\mathcal{X} \subseteq \mathbb{R}^n$ be closed, convex and $F : \mathcal{X} \to \mathbb{R}^n$ be continuous and monotone. Then the solution set of VI$(\mathcal{X}, F)$ is convex.

APPENDIX B
PROOFS

Proof of Theorem 1

Proof: The proof consists in showing that the KKT system of VI$(\mathcal{S}, F)$ is equivalent to Definition 2 of inertial Nash equilibrium. Since the set $\mathcal{S}$ satisfies Slater's constraint qualification, by Proposition 5 VI$(\mathcal{S}, F)$ is equivalent to its KKT system
$$F(x) + \mu \mathbf{1}_n - \lambda = 0_n, \quad 0_n \le \lambda \perp x \ge 0_n, \quad \mathbf{1}_n^\top x = 1, \tag{7}$$
where $\mu \in \mathbb{R}$ is the dual variable corresponding to the constraint $\mathbf{1}_n^\top x = 1$ and $\lambda \in \mathbb{R}^n$ is the dual variable corresponding to the constraint $x \ge 0_n$. The system (7) can be compactly rewritten as
$$0_n \le \mu \mathbf{1}_n + F(x) \perp x \ge 0_n, \tag{8a}$$
$$\mathbf{1}_n^\top x = 1. \tag{8b}$$
Observe that for any $x \in \mathcal{S}$ there exists $i^\star \in \{1, \dots, n\}$ such that $F_{i^\star}(x) = 0$. Indeed, setting $i^\star \in \operatorname{argmax}_{i \in \{1, \dots, n\}} u_i(x)$ gives $F_{i^\star}(x) = 0$ by the definition of $F$ in (4). It follows that $\mu < 0$ is not possible, as otherwise the non-negativity condition on $\mu \mathbf{1}_n + F(x)$ would be violated. Moreover, since $F(x) \ge 0_n$, $\mu > 0$ is not possible either, as by (8a) this would imply $x = 0_n$, thus violating (8b). We can conclude that $\mu = 0$ and (8) becomes
$$0_n \le F(x) \perp x \ge 0_n, \tag{9a}$$
$$\mathbf{1}_n^\top x = 1. \tag{9b}$$
The system (9) is equivalent to $x \in \mathcal{S}$ and
$$x_i > 0 \implies u_i(x) \ge u_j(x) - c_{ij}, \quad \forall i, j \in \{1, \dots, n\},$$
which coincides with Definition 2. Existence of an inertial equilibrium follows readily from Proposition 4 on the existence of VI solutions. The continuity of the VI operator therein required is satisfied because $F$ is the point-wise maximum of continuous functions.

Proof of Theorem 2

Proof: The proof is composed of four parts.

1) We first show that there exists $\tilde{x} \in \mathcal{S}_+$ such that $\tilde{x}$ is not an inertial equilibrium (by assumption $\hat{x}$ belongs to $\mathcal{S}$ and not necessarily to $\mathcal{S}_+$). For the sake of contradiction, assume that each $x \in \mathcal{S}_+$ is an inertial equilibrium. Since $\hat{x}$ belongs to the closure of $\mathcal{S}_+$, we can construct a sequence $(x(m))_{m=1}^\infty \in \mathcal{S}_+$ such that $\lim_{m \to \infty} x(m) = \hat{x}$. Since each $x(m)$ is an inertial equilibrium and is positive, for all $i, j$ it holds that $u_i(x(m)) \ge u_j(x(m)) - c_{ij}$. Taking the limit and exploiting continuity of $\{u_i\}_{i=1}^n$ we obtain
$$\lim_{m \to \infty} u_i(x(m)) \ge \lim_{m \to \infty} u_j(x(m)) - c_{ij} \iff u_i(\hat{x}) \ge u_j(\hat{x}) - c_{ij}, \tag{10}$$
for all $i, j \in \{1, \dots, n\}$, hence $\hat{x}$ is an inertial equilibrium, against the assumption.

2) After establishing the existence of $\tilde{x} \in \mathcal{S}_+$ which is not an inertial equilibrium, we now show that there exists an open ball $B_{\tilde{\varepsilon}}(\tilde{x})$ centered around $\tilde{x}$ of radius $\tilde{\varepsilon} > 0$ such that none of the points in $B_{\tilde{\varepsilon}}(\tilde{x}) \cap \mathcal{S}_+$ is an inertial equilibrium. Let us reason again for the sake of contradiction.
If for each $\varepsilon > 0$ there exists an inertial equilibrium in $B_\varepsilon(\tilde{x}) \cap \mathcal{S}_+$, then we can construct a sequence of inertial equilibria converging to $\tilde{x}$. With the same continuity argument used in (10), we can conclude that $\tilde{x}$ is an inertial equilibrium, which is false by assumption. This demonstrates the existence of $\tilde{\varepsilon} > 0$ such that none of the points in $B_{\tilde{\varepsilon}}(\tilde{x}) \cap \mathcal{S}_+$ is an inertial equilibrium. By Rademacher's theorem [32, Thm. 2.14], Lipschitzianity of $\{u_i\}_{i=1}^n$ guarantees³ the existence of $x^\star \in B_{\tilde{\varepsilon}}(\tilde{x}) \cap \mathcal{S}_+$ such that $F$ is differentiable at $x^\star$.

³Rademacher's theorem assumes $F$ to be defined on an open subset of $\mathbb{R}^n$, but $\mathcal{S}_+$ is not open in $\mathbb{R}^n$. Indeed, one just needs to define $F$ on the $(n-1)$-dimensional open set $\{x \in \mathbb{R}^{n-1}_{>0} \mid \mathbf{1}_{n-1}^\top x < 1\}$, by using $x_n = 1 - \sum_{j=1}^{n-1} x_j$, and then apply Rademacher's theorem to conclude the existence of a point of differentiability in $\{x \in \mathbb{R}^{n-1}_{>0} \mid \mathbf{1}_{n-1}^\top x < 1\}$, which implies the existence of a point of differentiability in the original $\mathcal{S}_+$.

3) The previous part guarantees differentiability of $F$ at a point $x^\star \in \mathcal{S}_+$ which is not an inertial equilibrium. This third part is dedicated to showing that there exist $i^\star, j^\star \in \{1, \dots, n\}$ such that $i^\star \in \mathcal{A}(j^\star, x^\star)$ and $\mathcal{A}(i^\star, x^\star) = \{i^\star\}$, where we denote
$$\mathcal{A}(k, x) := \operatorname{argmax}_{\ell \in \{1, \dots, n\}} \{u_\ell(x) - u_k(x) - c_{k\ell}\}.$$
Since $x^\star$ is not an inertial equilibrium, there exist $\ell_1, \ell_2$ such that
$$u_{\ell_1}(x^\star) < u_{\ell_2}(x^\star) - c_{\ell_1 \ell_2}. \tag{11}$$
Condition (11) is equivalent to $\ell_2 \in \mathcal{A}(\ell_1, x^\star)$ and $\ell_1 \notin \mathcal{A}(\ell_1, x^\star)$. If $\mathcal{A}(\ell_2, x^\star) = \{\ell_2\}$ then the statement is proven with $j^\star = \ell_1$, $i^\star = \ell_2$; otherwise there exists $\ell_3 \in \mathcal{A}(\ell_2, x^\star) \setminus \{\ell_2\}$. Note that it cannot be that $\ell_3 = \ell_1$, because this would mean $u_{\ell_2}(x^\star) \le u_{\ell_1}(x^\star) - c_{\ell_2 \ell_1}$, which together with (11) results in $u_{\ell_1}(x^\star) < u_{\ell_1}(x^\star) - c_{\ell_2 \ell_1} - c_{\ell_1 \ell_2}$, which is not possible because $c_{\ell_1 \ell_2}, c_{\ell_2 \ell_1} \ge 0$ by assumption. Hence we established that $\ell_3 \ne \ell_1$. If $\mathcal{A}(\ell_3, x^\star) = \{\ell_3\}$ then the statement is proven with $j^\star = \ell_2$, $i^\star = \ell_3$; otherwise there exists $\ell_4 \notin \{\ell_1, \ell_2, \ell_3\}$ such that $\ell_4 \in \mathcal{A}(\ell_3, x^\star)$. Since there are only $n$ different actions, by continuing the chain of reasoning we conclude that there exists $k \in \{2, \dots, n\}$ such that $\ell_k \in \mathcal{A}(\ell_{k-1}, x^\star)$ and $\mathcal{A}(\ell_k, x^\star) = \{\ell_k\}$, thus proving the statement with $j^\star = \ell_{k-1}$ and $i^\star = \ell_k$.

We now proceed to show that not only $i^\star \in \mathcal{A}(j^\star, x^\star)$, but actually $\mathcal{A}(j^\star, x^\star) = \{i^\star\}$. For the sake of contradiction, assume that there exists $\ell \ne i^\star$ such that $\ell \in \mathcal{A}(j^\star, x^\star)$. This means that $F_{j^\star}(x^\star) = u_{i^\star}(x^\star) - u_{j^\star}(x^\star) - c_{j^\star i^\star} = u_\ell(x^\star) - u_{j^\star}(x^\star) - c_{j^\star \ell}$. Then consider the canonical basis vector $e_{i^\star} \in \mathbb{R}^n$ and compute
$$\lim_{t \to 0^+} \frac{F_{j^\star}(x^\star + t e_{i^\star}) - F_{j^\star}(x^\star)}{t} = \lim_{t \to 0^+} \frac{[u_\ell(x^\star) - u_{j^\star}(x^\star) - c_{j^\star \ell}] - [u_\ell(x^\star) - u_{j^\star}(x^\star) - c_{j^\star \ell}]}{t} = 0, \tag{12}$$
where the first equality holds because for $t > 0$ we have $u_{i^\star}(x^\star + t e_{i^\star}) - u_{j^\star}(x^\star) - c_{j^\star i^\star} < u_{i^\star}(x^\star) - u_{j^\star}(x^\star) - c_{j^\star i^\star} = u_\ell(x^\star) - u_{j^\star}(x^\star) - c_{j^\star \ell}$, due to $\nabla_{x_{i^\star}} u_{i^\star}(x^\star) < 0$ by assumption. Moreover,
$$\lim_{t \to 0^-} \frac{F_{j^\star}(x^\star + t e_{i^\star}) - F_{j^\star}(x^\star)}{t} = \lim_{t \to 0^-} \frac{[u_{i^\star}(x^\star + t e_{i^\star}) - u_{j^\star}(x^\star) - c_{j^\star i^\star}] - [u_{i^\star}(x^\star) - u_{j^\star}(x^\star) - c_{j^\star i^\star}]}{t} = \lim_{t \to 0^-} \frac{u_{i^\star}(x^\star + t e_{i^\star}) - u_{i^\star}(x^\star)}{t} = \nabla_{x_{i^\star}} u_{i^\star}(x^\star) < 0, \tag{13}$$
where the first equality holds because for $t < 0$ we have $u_{i^\star}(x^\star + t e_{i^\star}) - u_{j^\star}(x^\star) - c_{j^\star i^\star} > u_{i^\star}(x^\star) - u_{j^\star}(x^\star) - c_{j^\star i^\star} = u_\ell(x^\star) - u_{j^\star}(x^\star) - c_{j^\star \ell}$, due to $\nabla_{x_{i^\star}} u_{i^\star}(x^\star) < 0$ by assumption. From (12) and (13) we obtain that $F_{j^\star}$ is not differentiable at $x^\star$, against what was proved in the second part.
Hence we must conclude that there cannot exist $\ell \ne i^\star$ such that $\ell \in \mathcal{A}(j^\star, x^\star)$; thus $\mathcal{A}(j^\star, x^\star) = \{i^\star\}$.

4) Since $F$ is differentiable at $x^\star$ by the second part of the proof, $\partial F(x^\star) = \{\nabla_x F(x^\star)\}$ is a singleton. As $\mathcal{A}(j^\star, x^\star) = \mathcal{A}(i^\star, x^\star) = \{i^\star\}$ by the third part of the proof,
$$u_{i^\star}(x^\star) - c_{j^\star i^\star} > u_\ell(x^\star) - c_{j^\star \ell}, \quad \forall \ell \ne i^\star,$$
$$u_{i^\star}(x^\star) - c_{i^\star i^\star} > u_\ell(x^\star) - c_{i^\star \ell}, \quad \forall \ell \ne i^\star. \tag{14}$$
As a consequence of (14), there exists a small enough open ball around $x^\star$ where $F_{i^\star}(x^\star) = u_{i^\star}(x^\star) - u_{i^\star}(x^\star) - c_{i^\star i^\star} = 0$ and $F_{j^\star}(x^\star) = u_{i^\star}(x^\star) - u_{j^\star}(x^\star) - c_{j^\star i^\star}$. Thus
$$[\nabla_x F(x^\star)]_{i^\star j^\star \times i^\star j^\star} = \begin{bmatrix} \frac{\partial F_{i^\star}(x^\star)}{\partial x_{i^\star}} & \frac{\partial F_{i^\star}(x^\star)}{\partial x_{j^\star}} \\ \frac{\partial F_{j^\star}(x^\star)}{\partial x_{i^\star}} & \frac{\partial F_{j^\star}(x^\star)}{\partial x_{j^\star}} \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ \nabla_{x_{i^\star}} u_{i^\star}(x^\star) & -\nabla_{x_{j^\star}} u_{j^\star}(x^\star) \end{bmatrix},$$
whose symmetric part has determinant $0 \cdot \nabla_{x_{j^\star}} u_{j^\star}(x^\star) - (\nabla_{x_{i^\star}} u_{i^\star}(x^\star))^2/4 < 0$, which makes $[\nabla_x F(x^\star)]_{i^\star j^\star \times i^\star j^\star}$ indefinite. Thus $\nabla_x F(x^\star)$ itself is indefinite, and $F$ is not monotone in $\mathcal{S}$ due to Proposition 6.

Proof of Proposition 2

Proof: Algorithm 1 is the projection algorithm in [21, Alg. 12.1.1], applied to VI$(\mathcal{S}, -u)$. A solution of VI$(\mathcal{S}, -u)$ exists by Proposition 4. The operator $-u$ is monotone in $\mathcal{S}$ because $\theta$ is concave [33, eq. (12)]. Moreover, due to the existence of $\theta$, $L$-Lipschitzianity is equivalent to $(1/L)$-cocoercivity [34, Thm. 18.15]. Then, for $\rho < 2/L$, Algorithm 1 is guaranteed to converge to a solution of VI$(\mathcal{S}, -u)$ by [21, Thm. 12.1.8]. The final claim follows by observing that any Wardrop equilibrium is also an inertial Wardrop equilibrium (Lemma 1).

Proof of Theorem 3

Proof: First, observe that if $x(0) \in \mathcal{S}$, then $x(k)$ remains in $\mathcal{S}$ for all $k \ge 1$. This is a consequence of the two following observations.
i) At every fixed time-step $k$, and for every pair $i, j$ with $j \in \mathcal{E}^{\text{out}}_i(k)$, the mass $x_{i \to j}(k)$ is removed from node $i$ and simultaneously added to node $j$ (see Algorithm 2). Therefore, the total mass is conserved at each iteration, and so $\sum_{i \in \{1, \dots, n\}} x_i(k) = \sum_{i \in \{1, \dots, n\}} x_i(0) = 1$.

ii) For every node $i \in \{1, \dots, n\}$, the evolution of $x_i(k)$, as dictated by Algorithm 2, can be compactly written as
$$x_i(k+1) = x_i(k) - \sum_{j \in \mathcal{E}^{\text{out}}_i(k)} x_{i \to j}(k) + \sum_{\ell \,:\, i \in \mathcal{E}^{\text{out}}_\ell(k)} x_{\ell \to i}(k).$$
Since by assumption $\sum_{j \in \mathcal{E}^{\text{out}}_i(k)} x_{i \to j}(k) \le x_i(k)$ for every time-step $k$, we have $x_i(k+1) \ge \sum_{\ell \,:\, i \in \mathcal{E}^{\text{out}}_\ell(k)} x_{\ell \to i}(k) \ge 0$, where the last inequality follows from $x_{\ell \to i}(k) \ge 0$. Repeating the reasoning for every $k$ ensures that $x_i(k) \ge 0$ at every time-step. Finally, since $\sum_{j \in \mathcal{E}^{\text{out}}_\ell(k)} x_{\ell \to j}(k) \le x_\ell(k)$, it must be that $x_{\ell \to i}(k) \le x_\ell(k)$. Therefore $\sum_{\ell \,:\, i \in \mathcal{E}^{\text{out}}_\ell(k)} x_{\ell \to i}(k) \le \sum_{\ell \,:\, i \in \mathcal{E}^{\text{out}}_\ell(k)} x_\ell(k) \le \sum_{\ell \ne i} x_\ell(k)$. Hence $x_i(k+1) \le \sum_{\ell \in \{1, \dots, n\}} x_\ell(k) - \sum_{j \in \mathcal{E}^{\text{out}}_i(k)} x_{i \to j}(k) \le 1$, where the last inequality follows from the fact that $\sum_{\ell \in \{1, \dots, n\}} x_\ell(k) = 1$ (as shown above) and from the fact that $x_{i \to j}(k) \ge 0$.

We now move our attention to proving the desired convergence statement. To do so, we will show that $x(k) \to \bar{x}$ such that $\mathcal{E}^{\text{out}}_i(\bar{x}) = \emptyset$ for all $i \in \{1, \dots, n\}$, thanks to the equivalence in Proposition 3. Let us denote for brevity $u_i(k) := u_i(x_i(k))$ and define $\mu(k) = \min_{i \in \{1, \dots, n\}} u_i(k)$. We show in the following that $\mu(k)$ is a non-decreasing sequence. First, for any action $i$ we have $x_i(k+1) - x_i(k) \le c_{\min}/L - \varepsilon$ due to (5b).
Then we can bound the maximum utility decrease:
$$u_i(k+1) - u_i(k) \ge -L|x_i(k+1) - x_i(k)| \ge -L(c_{\min}/L - \varepsilon) = -c_{\min} + L\varepsilon =: -\beta c_{\min}, \tag{15}$$
where the first inequality follows by Lipschitz continuity and we define $\beta := 1 - (L\varepsilon)/c_{\min} \in \,]0, 1[$. Secondly, note that if some action $i$ faces a utility decrease, that is, if $u_i(k+1) < u_i(k)$, then it must be that $x_i(k+1) > x_i(k)$, because $u_i$ is non-increasing. Then there exists $j$ such that $i \in \mathcal{E}^{\text{out}}_j(x(k))$. It follows that
$$i \text{ faces a utility decrease at step } k \implies u_i(k) > u_j(k) + c_{ji} \ge \mu(k) + c_{\min}. \tag{16}$$
Combining (15) with (16) we obtain
$$i \text{ faces a utility decrease at step } k \implies u_i(k+1) > \mu(k) + (1 - \beta)c_{\min},$$
which implies $\mu(k+1) \ge \mu(k)$. Since $\mu(k)$ is non-decreasing and bounded ($\{u_i\}_{i=1}^n$ are continuous functions on a compact set), there exists a value $\mu^\star$ such that
$$\lim_{k \to \infty} \mu(k) = \mu^\star. \tag{17}$$
We show in the following that there exists an action $i^\star$ such that
$$\lim_{k \to \infty} u_{i^\star}(k) = \mu^\star. \tag{18}$$
As $\lim_{k \to \infty} \mu(k) = \mu^\star$, there exists $\hat{k}$ such that
$$\mu(k) > \mu^\star - c_{\min}(1 - \beta)/2, \quad \forall k \ge \hat{k}. \tag{19}$$
Then
$$i \text{ faces a utility decrease at step } k \ge \hat{k} \implies u_i(k) \ge \mu^\star - c_{\min}(1 - \beta)/2 + c_{\min} = \mu^\star + c_{\min}(1 + \beta)/2, \tag{20}$$
where the first inequality follows from combining (16) and (19). Combining (15) and (20) we obtain
$$i \text{ faces a utility decrease at step } k \ge \hat{k} \implies u_i(k+1) \ge \mu^\star - c_{\min}(1 - \beta)/2 + c_{\min}(1 - \beta) = \mu^\star + c_{\min}(1 - \beta)/2. \tag{21}$$
Figure 3 illustrates inequalities (20) and (21).

[Fig. 3: Illustration of $\mu(k) \to \mu^\star$ from below and of inequalities (20) and (21) after iteration $\hat{k}$ (with $\beta = 0.5$).]

Combining inequalities (20) and (21) we obtain that
$$\exists k_1 \ge \hat{k} \text{ such that } u_i(k_1) \ge \mu^\star + \rho > \mu^\star \implies u_i(k) \ge \min\{\mu^\star + \rho,\ \mu^\star + c_{\min}(1 - \beta)/2\} \text{ for all } k \ge k_1. \tag{22}$$
It then follows that
$$\exists k_1 \ge \hat{k} \text{ such that } u_i(k_1) > \mu^\star \implies \lim_{k \to \infty} u_i(k) \ne \mu^\star. \tag{23}$$
By (23) and (17) it follows that there exists at least one action $i^\star$ such that $u_{i^\star}(k) \le \mu^\star$ for all $k \ge \hat{k}$. Using again (17) and the "squeeze theorem" [35, Thm. 3.3.6], we can conclude that $i^\star$ satisfies (18). Upon defining $\mathcal{E}^{\text{in}}_j(x) = \{i \in \{1, \dots, n\} \text{ s.t. } j \in \mathcal{E}^{\text{out}}_i(x)\}$ for any $j \in \{1, \dots, n\}$ and $x \in \mathcal{S}$, we note that the set $\mathcal{E}^{\text{in}}_{i^\star}(x(k))$ is empty for $k \ge \hat{k}$, due to (16) and $u_{i^\star}(k) \le \mu^\star$. In words, no other action can envy $i^\star$ after step $\hat{k}$. This implies that $u_{i^\star}(k)$ is a non-decreasing sequence, and in turn $x_{i^\star}(k)$ is a non-increasing sequence. As a consequence,
$$\lim_{k \to \infty} x_{i^\star}(k) = \bar{x}_{i^\star} \ge 0. \tag{24}$$
If $\bar{x}_{i^\star} = 0$, then clearly $\mathcal{E}^{\text{out}}_{i^\star}(\bar{x}_{i^\star}, x_{-i^\star}) = \emptyset$ by definition, for any $x_{-i^\star}$. If instead $\bar{x}_{i^\star} > 0$, since $x_{i^\star}(k+1) \le (1 - \tau)x_{i^\star}(k)$ due to (5a) whenever $\mathcal{E}^{\text{out}}_{i^\star}(x(k)) \ne \emptyset$, convergence is achieved in a finite number of steps. In other words, there exists $\tilde{k}$ such that $x_{i^\star}(k) = \bar{x}_{i^\star}$ for all $k \ge \tilde{k}$. In this case, for $k \ge \tilde{k}$ not only $\mathcal{E}^{\text{in}}_{i^\star}(x(k)) = \emptyset$, but also $\mathcal{E}^{\text{out}}_{i^\star}(x(k)) = \emptyset$, because otherwise $i^\star$ would encounter a mass decrease.

Having concluded that there exists $i^\star \in \{1, \dots, n\}$ such that its mass converges (in a finite number of steps if $\bar{x}_{i^\star} > 0$), we propose a last argument to show that there exists $j^\star \in \{1, \dots, n\} \setminus \{i^\star\}$ such that its mass converges to $\bar{x}_{j^\star}$ (in a finite number of steps if $\bar{x}_{i^\star}, \bar{x}_{j^\star} > 0$). Applying the same argument recursively to $\{1, \dots, n\} \setminus \{i^\star, j^\star\}$ concludes the proof. The last argument distinguishes two cases: $\bar{x}_{i^\star} > 0$ and $\bar{x}_{i^\star} = 0$.

In the first case, $\bar{x}_{i^\star} > 0$, we already showed that there exists $\tilde{k}$ such that $\mathcal{E}^{\text{in}}_{i^\star}(x(k)) = \mathcal{E}^{\text{out}}_{i^\star}(x(k)) = \emptyset$ for all $k > \tilde{k}$. Then action $i^\star$ has no interaction with any of the other actions, and considering $k \ge \tilde{k}$ we apply to $\{1, \dots, n\} \setminus \{i^\star\}$ the previous reasoning up to equation (24) to show that there is an action $j^\star \in \{1, \dots, n\} \setminus \{i^\star\}$ whose mass converges to $\bar{x}_{j^\star}$ (in a finite number of steps if $\bar{x}_{j^\star} > 0$).

In the second case, $\bar{x}_{i^\star} = 0$. Even though $\mathcal{E}^{\text{out}}_{i^\star}$ does not become the empty set at any finite iteration $k$, the mass $x_{i^\star}$ becomes so small that transferring mass to the other $n - 1$ actions does not influence their convergence. Proving this requires a cumbersome analysis that does not add much to the intuition already provided. Let us denote $\eta(k) = \min_{j \in \{1, \dots, n\} \setminus \{i^\star\}} u_j(k)$. Contrary to $\mu(k)$, the sequence $\eta(k)$ is not non-decreasing in general, because the analogue of (16) does not hold: action $i^\star$ could transfer some of its mass to $\{1, \dots, n\} \setminus \{i^\star\}$, thus making their utilities decrease. Nonetheless, we show that there exists $\eta^\star$ such that
$$\lim_{k \to \infty} \eta(k) = \eta^\star. \tag{25}$$
To this end, we fix $\epsilon > 0$ and show that there exists $k^\star$ such that $|\eta(k) - \eta^\star| < \epsilon$ for all $k \ge k^\star$. Since $\lim_{k \to \infty} x_{i^\star}(k) = 0$, there exists $k_\infty$ such that
$$x_{i^\star}(k) < \epsilon/(2L), \quad \forall k \ge k_\infty. \tag{26}$$
Let us now construct the sequence
$$\eta'(k) = \eta(k) + \delta(k), \quad \delta(k+1) = \delta(k) + \max\{0, \eta(k) - \eta(k+1)\}, \quad \delta(k_\infty) = 0.$$
In words, the sequence $\delta(k)$ accumulates the (absolute value of the) decreases of $\eta(k)$ due to $i^\star$, and summing it to $\eta(k)$ results in a sequence $\eta'(k)$ which is non-decreasing and bounded from above, hence it admits a limit $\eta^\star$. By definition, there exists $k'$ such that $\eta'(k) > \eta^\star - \epsilon/2$ for all $k \ge k'$. Moreover, $\delta(k+1) - \delta(k) = \max\{0, \eta(k) - \eta(k+1)\} > 0$ only if $\mathcal{E}^{\text{out}}_{i^\star}(x(k)) \ne \emptyset$, and in this case $\max\{0, \eta(k) - \eta(k+1)\} \le L \cdot \sum_{j \ne i^\star} x_{i^\star \to j}(k)$. In words, the only way $\eta(k)$ can decrease is if action $i^\star$ transfers some mass to the others, and even then we have a bound on the utility decrease that this can cause.
Summing up,
$$\lim_{k \to \infty} \delta(k) = \sum_{k = k_\infty}^{\infty} \max\{0, \eta(k) - \eta(k+1)\} \le L x_{i^\star}(k_\infty) \overset{(26)}{<} \epsilon/2;$$
hence, since $\delta(k)$ is non-decreasing, $\delta(k) < \epsilon/2$ for all $k \ge k_\infty$. Then for $k \ge \max\{k_\infty, k'\}$ it holds that
$$\eta^\star - \eta(k) = \eta^\star - \eta'(k) + \eta'(k) - \eta(k) = \underbrace{\eta^\star - \eta'(k)}_{< \epsilon/2} + \underbrace{\delta(k)}_{< \epsilon/2} < \epsilon,$$
which proves (25). Finally, we want to show that there exists $j^\star \in \{1, \dots, n\} \setminus \{i^\star\}$ such that
$$\lim_{k \to \infty} u_{j^\star}(k) = \eta^\star. \tag{27}$$
Consider an action $\ell \ne i^\star$ such that
$$\lim_{k \to \infty} u_\ell(x_\ell(k)) \ne \eta^\star. \tag{28}$$
Since $\eta(k) \to \eta^\star$, we have $\max\{0, \eta(k) - \eta(k+1)\} \to 0$ as $k \to \infty$. This, together with $\eta(k) \to \eta^\star$, implies that condition (28) is equivalent to the existence of $\theta > 0$ such that for all $k' \ge 0$ there exists $k'' \ge k'$ with
$$u_\ell(k'') > \eta^\star + \theta. \tag{29}$$
There are two possibilities in which $\ell$ can face a utility decrease after $k''$, namely through a mass transfer from some action in $\{1, \dots, n\} \setminus \{i^\star, \ell\}$ or through a mass transfer from action $i^\star$. If the mass transfer happens through some action in $\{1, \dots, n\} \setminus \{i^\star, \ell\}$, we can use the same argument of Figure 3, and in particular of implication (22), to conclude from (29) that
$$u_\ell(k) \ge \min\{\eta^\star + \theta,\ \eta^\star + c_{\min}(1 - \beta)/2\}, \quad \forall k \ge k''. \tag{30}$$
If instead the mass transfer happens through $i^\star$, by $x_{i^\star}(k) \to 0$ one can take $k'$ such that
$$x_{i^\star}(k) < \theta/(2L), \quad \forall k \ge k', \tag{31}$$
and take $k''$ such that (29) holds. Then
$$u_\ell(k) \ge u_\ell(k'') - L\frac{\theta}{2L} > \eta^\star + \theta - \frac{\theta}{2} = \eta^\star + \frac{\theta}{2} \tag{32}$$
for all $k \ge k''$, where the first inequality holds due to Lipschitz continuity and to (31), while the second inequality holds due to (29). We can conclude that if (28) holds for action $\ell$, then either (30) or (32) holds. Consequently, after $k''$ action $\ell$ does not attain the minimum $\eta(k)$. If (28) held for all $\ell \in \{1, \dots, n\} \setminus \{i^\star\}$, then the minimum $\eta(k)$ would not be attained by any action after $k''$, which is a contradiction. Then there must exist $j^\star$ such that (27) holds. With the same argument that led to (24), we can conclude that there exists $\bar{x}_{j^\star} \ge 0$ such that $\lim_{k \to \infty} x_{j^\star}(k) = \bar{x}_{j^\star}$. As done for $i^\star$, we can conclude that $\mathcal{E}^{\text{out}}_{j^\star} = \emptyset$.

REFERENCES

[1] Z. Ma, D. S. Callaway, and I. A. Hiskens, "Decentralized charging control of large populations of plug-in electric vehicles," IEEE Transactions on Control Systems Technology, vol. 21, no. 1, pp. 67–78, 2013.
[2] T. Alpcan, T. Başar, R. Srikant, and E. Altman, "CDMA uplink power control as a noncooperative game," Wireless Networks, vol. 8, no. 6, pp. 659–670, 2002.
[3] R. Johari and J. N. Tsitsiklis, "Efficiency loss in a network resource allocation game," Mathematics of Operations Research, vol. 29, no. 3, pp. 407–435, 2004.
[4] J. G. Wardrop, "Some theoretical aspects of road traffic research," Proceedings of the Institution of Civil Engineers, vol. 1, no. 3, pp. 325–362, 1952.
[5] N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, Algorithmic Game Theory. Cambridge University Press, 2007.
[6] F. Dieleman, Households and Housing: Choice and Outcomes in the Housing Market. Routledge, 2017.
[7] S. Schulmeister, "A general financial transaction tax: motives, revenues, feasibility and effects," WIFO, March 2008.
[8] L. Coch and J. R. French Jr., "Overcoming resistance to change," Human Relations, vol. 1, no. 4, pp. 512–532, 1948.
[9] S. Oreg, "Resistance to change: Developing an individual differences measure," Journal of Applied Psychology, vol. 88, no. 4, p. 680, 2003.
[10] O. Oyeniyi and A. Abiodun, "Switching cost and customers loyalty in the mobile phone market: The Nigerian experience," Business Intelligence Journal, vol. 3, no. 1, pp. 111–121, 2010.
[11] W. H. Sandholm, "Potential games with continuous player sets," Journal of Economic Theory, vol. 97, no. 1, pp. 81–108, 2001.
[12] A. Nagurney, "Migration equilibrium and variational inequalities," Economics Letters, vol. 31, no. 1, pp. 109–112, 1989.
[13] W. H. Sandholm, Population Games and Evolutionary Dynamics. The MIT Press, 2010.
[14] J. H. Nachbar, "Evolutionary selection dynamics in games: Convergence and limit properties," International Journal of Game Theory, vol. 19, no. 1, pp. 59–89, 1990.
[15] R. Cressman and Y. Tao, "The replicator equation and other game dynamics," Proceedings of the National Academy of Sciences, vol. 111, suppl. 3, pp. 10810–10817, 2014.
[16] L. Zino, G. Como, and F. Fagnani, "On imitation dynamics in potential population games," in 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 2017, pp. 757–762.
[17] A. Nagurney, J. Pan, and L. Zhao, "Human migration networks," European Journal of Operational Research, vol. 59, no. 2, pp. 262–274, 1992.
[18] ——, "Human migration networks with class transformations," in Structure and Change in the Space Economy. Springer, 1993, pp. 239–258.
[19] P. Hartman and G. Stampacchia, "On some non-linear elliptic differential-functional equations," Acta Mathematica, vol. 115, no. 1, pp. 271–310, 1966.
[20] R. W. Cottle, "Nonlinear programs with positively bounded Jacobians," SIAM Journal on Applied Mathematics, vol. 14, no. 1, pp. 147–158, 1966.
[21] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Science & Business Media, 2007.
[22] P. S. Castro, D. Zhang, and S. Li, "Urban traffic modelling and prediction using large scale taxi GPS traces," in International Conference on Pervasive Computing. Springer, 2012, pp. 57–72.
[23] B. Li, D. Zhang, L. Sun, C. Chen, S. Li, G. Qi, and Q. Yang, "Hunting or waiting? Discovering passenger-finding strategies from a large-scale real-world taxi dataset," in Pervasive Computing and Communications Workshops (PERCOM Workshops), 2011 IEEE International Conference on. IEEE, 2011, pp. 63–68.
[24] G. Pan, G. Qi, Z. Wu, D. Zhang, and S. Li, "Land-use classification using taxi GPS traces," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 1, pp. 113–123, 2013.
[25] P. S. Castro, D. Zhang, C. Chen, S. Li, and G. Pan, "From taxi GPS traces to social and community dynamics: A survey," ACM Computing Surveys (CSUR), vol. 46, no. 2, p. 17, 2013.
[26] R. C. P. Wong, W. Y. Szeto, S. Wong, and H. Yang, "Modelling multi-period customer-searching behaviour of taxi drivers," Transportmetrica B: Transport Dynamics, vol. 2, no. 1, pp. 40–59, 2014.
[27] (2017) ViaMichelin. [Online]. Available: https://www.viamichelin.com
[28] N. Buchholz, "Spatial equilibrium, search frictions and efficient regulation in the taxi industry," Working paper, Tech. Rep., 2015.
[29] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[30] D. T. Luc and S. Schaible, "Generalized monotone nonsmooth maps," Journal of Convex Analysis, vol. 3, pp. 195–206, 1996.
[31] F. H. Clarke, Optimization and Nonsmooth Analysis. SIAM, 1990.
[32] L. Ambrosio, N. Fusco, and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems. Clarendon Press, Oxford, 2000, vol. 254.
[33] G. Scutari, D. P. Palomar, F. Facchinei, and J.-S. Pang, "Convex optimization, game theory, and variational inequality theory," IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 35–49, 2010.
[34] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2010.
[35] H. H. Sohrab, Basic Real Analysis. Springer, 2003.

Basilio Gentile completed his PhD at the Automatic Control Laboratory at ETH Zürich in 2018.
He received his Bachelor's degree in Information Engineering and Master's degree in Automation Engineering from the University of Padova, as well as a Master's degree in Mathematical Modeling and Computation from the Technical University of Denmark. In 2013 he spent seven months in the Motion Lab at the University of California, Santa Barbara, working on his Master's thesis. His research focuses on aggregative games and network games, with applications to traffic networks and to smart charging of electric vehicles.

Dario Paccagnan is a postdoctoral researcher at the Center for Control, Dynamical Systems, and Computation, U.C. Santa Barbara, USA. He completed his PhD at the Automatic Control Laboratory, ETH Zurich, in December 2018. He received his B.Sc. and M.Sc. in Aerospace Engineering from the University of Padova, Italy, in 2011 and 2014, and in the same year received the M.Sc. in Mathematical Modelling from the Technical University of Denmark, all with honours. His Master's thesis was prepared while visiting Imperial College London, UK, in 2014. From March to August 2017 he was a visiting scholar at the University of California, Santa Barbara. Dario's research interests lie at the interface between distributed control and game theory, with applications including multiagent systems, smart cities, and traffic networks.

Bolutife Ogunsula is a software engineer at Bloomberg LP, who completed a master's degree in Robotics, Systems and Control Engineering at ETH Zurich. Prior to that, he received his Bachelor's degree in Electrical and Electronics Engineering from the University of Lagos, and he worked as a software engineer for Codility.

John Lygeros completed a B.Eng. degree in electrical engineering in 1990 and an M.Sc. degree in Systems Control in 1991, both at Imperial College of Science, Technology and Medicine, London, UK. In 1996 he obtained a Ph.D. degree from the Electrical Engineering and Computer Sciences Department, University of California, Berkeley. During the period 1996-2000 he held a series of research appointments. Between 2000 and 2003 he was a University Lecturer at the Department of Engineering, University of Cambridge, UK. Between 2003 and 2006 he was an Assistant Professor at the Department of Electrical and Computer Engineering, University of Patras, Greece. In July 2006 he joined the Automatic Control Laboratory at ETH Zürich, first as an Associate Professor and, since January 2010, as a Full Professor. Since 2009 he has served as the Head of the Automatic Control Laboratory, and since 2015 as the Head of the Department of Information Technology and Electrical Engineering. His research interests include modelling, analysis, and control of hierarchical, hybrid, and stochastic systems, with applications to biochemical networks, automated highway systems, air traffic management, power grids, and camera networks. John Lygeros is a Fellow of the IEEE, and a member of the IET and the Technical Chamber of Greece.
