A System Level Approach to Controller Synthesis


Authors: Yuh-Shyang Wang, Nikolai Matni, John C. Doyle

TO APPEAR IN IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. XX, NO. XX, OCTOBER 2019

Abstract—Biological and advanced cyberphysical control systems often have limited, sparse, uncertain, and distributed communication and computing in addition to sensing and actuation. Fortunately, the corresponding plants and performance requirements are also sparse and structured, and this must be exploited to make constrained controller design feasible and tractable. We introduce a new "system level" (SL) approach involving three complementary SL elements. System Level Parameterizations (SLPs) provide an alternative to the Youla parameterization of all stabilizing controllers and the responses they achieve, and combine with System Level Constraints (SLCs) to parameterize the largest known class of constrained stabilizing controllers that admit a convex characterization, generalizing quadratic invariance (QI). SLPs also lead to a generalization of detectability and stabilizability, suggesting the existence of a rich separation structure that, when combined with SLCs, is naturally applicable to structurally constrained controllers and systems. We further provide a catalog of useful SLCs, most importantly including sparsity, delay, and locality constraints on both communication and computing internal to the controller, and on external system performance. Finally, we formulate System Level Synthesis (SLS) problems, which define the broadest known class of constrained optimal control problems that can be solved using convex programming.
Index Terms—constrained & structured optimal control, decentralized control, large-scale systems

Preliminaries & Notation: We use lower and upper case Latin letters such as x and A to denote vectors and matrices, respectively, and lower and upper case boldface Latin letters such as x and G to denote signals and transfer matrices, respectively. We use calligraphic letters such as S to denote sets. In the interest of clarity, we work with discrete time linear time invariant systems, but unless stated otherwise, all results extend naturally to the continuous time setting. We use standard definitions of the Hardy spaces H_2 and H_∞, and denote their restriction to the set of real-rational proper transfer matrices by RH_2 and RH_∞. We use G[i] to denote the i-th spectral component of a transfer function G, i.e., G(z) = Σ_{i=0}^∞ (1/z^i) G[i] for |z| > 1. Finally, we use F_T to denote the space of finite impulse response (FIR) transfer matrices with horizon T, i.e., F_T := {G ∈ RH_∞ | G = Σ_{i=0}^T (1/z^i) G[i]}.

This paper was presented in part at the IEEE American Control Conference, June 4-6, 2014; in part at the 52nd Annual Allerton Conference on Communication, Control, and Computing, September 30-October 3, 2014; in part at the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, December 15-17, 2014; in part at the 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems, September 10-11, 2015; in part at the IEEE American Control Conference, July 6-8, 2016; and in part at the IEEE American Control Conference, May 24-26, 2017 [1]-[7]. This work was supported by the Air Force Office of Scientific Research and the National Science Foundation, and by gifts from Huawei and Google. Y.-S. Wang is with the Control and Optimization Group, GE Global Research Center, Niskayuna, NY 12309 USA (e-mail: yuh-shyang.wang@ge.com). N. Matni is with the Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA 94720 USA (e-mail: nmatni@berkeley.edu). J. C. Doyle is with the Department of Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125 USA (e-mail: doyle@caltech.edu).

I. INTRODUCTION

THE foundation of many optimal controller synthesis procedures is a parameterization of all internally stabilizing controllers, and the responses that they achieve, over which relevant performance measures can be easily optimized. For finite dimensional linear time-invariant (LTI) systems, the class of internally stabilizing LTI feedback controllers is characterized by the celebrated Youla parameterization [8] and the closely related factorization approach [9]. In [8], the authors showed that the Youla parameterization defines an isomorphism between a stabilizing controller and the resulting closed loop system response from sensors to actuators; therefore, rather than synthesizing the controller itself, this system response (or Youla parameter) could be directly optimized. This allowed for the incorporation of customized design specifications on the closed loop system into the controller design process via convex optimization [10] or interpolation [11]. Subsequently, analogous parameterizations of stabilizing controllers for more general classes of systems were developed: notable examples include the polynomial approach [12] for generalized Rosenbrock systems [13] and the behavioral approach [14]-[17] for linear differential systems. These results illustrate the power and generality of the Youla parameterization and factorization approaches to optimal control in the centralized setting.
Together with state-space methods, they played a major role in shifting controller synthesis from an ad hoc, loop-at-a-time tuning process to a principled one with well defined notions of optimality, and, in the LTI setting, paved the way for the foundational results of robust and optimal control that would follow [18]. However, as control engineers shifted their attention from centralized to distributed optimal control, it was observed that the parameterization approaches that were so fruitful in the centralized setting were no longer directly applicable. In contrast to centralized systems, modern cyber-physical systems (CPS) are large-scale, physically distributed, and interconnected. Rather than a logically centralized controller, these systems are composed of several sub-controllers, each equipped with their own sensors and actuators; these sub-controllers then exchange locally available information (such as sensor measurements or applied control actions) via a communication network. These information sharing constraints make the corresponding distributed optimal controller synthesis problem challenging to solve [19]-[24]. In particular, imposing such structural constraints on the controller can lead to optimal control problems that are NP-hard [25], [26]. Despite these technical and conceptual challenges, a body of work [20]-[24], [27], [28] that began in the early 2000s, and that culminated with the introduction of quadratic invariance (QI) in the seminal paper [21], showed that for a large class of practically relevant LTI systems, such internal structure could be integrated with the Youla parameterization and still preserve the convexity of the optimal controller synthesis task.
Informally, a system is quadratically invariant if sub-controllers are able to exchange information with each other faster than their control actions propagate through the CPS [29]. Even more remarkable is that this condition is tight, in the sense that QI is a necessary [30] and sufficient [21] condition for subspace constraints (defined by, for example, communication delays) on the controller to be enforceable via convex constraints on the Youla parameter. The identification of QI triggered an explosion of results in distributed optimal controller synthesis [31]-[39]; these results showed that the robust and optimal control methods that proved so powerful for centralized systems could be ported to the distributed setting. As far as we are aware, no such results exist for the more general classes of systems considered in [12], [14]-[17]. However, a fact that is not emphasized in the distributed optimal control literature is that distributed controllers are actually more complex to synthesize and implement than their centralized counterparts.¹ In particular, a major limitation of the QI framework is that, for strongly connected systems,² it cannot provide a convex characterization of localized controllers, in which local sub-controllers only access a subset of system-wide measurements (cf. Sections II-D and IV-D). This need for global exchange of information between sub-controllers is a limiting factor in the scalability of the synthesis and implementation of these distributed optimal controllers. Motivated by this issue, we propose a novel parameterization of internally stabilizing controllers and the closed loop responses that they achieve, providing an alternative to the QI framework for constrained optimal controller synthesis.
Specifically, rather than directly designing only the feedback loop between sensors and actuators, as in the Youla framework, we propose directly designing the entire closed loop response of the system, as captured by the maps from process and measurement disturbances to control actions and states. As such, we call the proposed method a System Level Approach (SLA) to controller synthesis, which is composed of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs), and System Level Synthesis (SLS) problems. Further, in contrast to the QI framework, which seeks to impose structure on the input/output map between sensor measurements and control actions, the SLA imposes structural constraints on the system response itself, and shows that this structure carries over to the internal realization of the corresponding controller. It is this conceptual shift from structure on the input/output map to the internal realization of the controller that allows us to expand the class of structured controllers that admit a convex characterization, and in doing so, vastly increase the scalability of distributed optimal control methods. We summarize our main contributions below.

¹ For example, see the solutions presented in [31]-[39] and the message passing implementation suggested in [39].
² We say that a plant is strongly connected if the state of any subsystem can eventually alter the state of all other subsystems.

A. Contributions

This paper presents novel theoretical and computational contributions to the area of constrained optimal controller synthesis. In particular, we
• define and analyze the system level approach to controller synthesis, which is built around novel SLPs of all stabilizing controllers and the closed loop responses that they achieve;
• show that SLPs allow us to constrain the closed loop response of the system to lie in arbitrary sets: we call such constraints on the system SLCs.
If these SLCs admit a convex representation, then the resulting set of constrained system responses admits a convex representation as well;
• show that such constrained system responses can be used to directly implement a controller achieving them; in particular, any SLC imposed on the system response imposes a corresponding SLC on the internal structure of the resulting controller;
• show that the set of constrained stabilizing controllers that admit a convex parameterization using SLPs and SLCs is a strict superset of those that can be parameterized using quadratic invariance; hence we provide a generalization of the QI framework, characterizing the broadest known class of constrained controllers that admit a convex parameterization;
• formulate and analyze the SLS problem, which exploits SLPs and SLCs to define the broadest known class of constrained optimal control problems that can be solved using convex programming. We show that the optimal control problems considered in the QI literature [20], as well as the recently defined localized optimal control framework [4], are all special cases of SLS problems.

B. Paper Structure

In Section II, we define the system model considered in this paper, and review relevant results from the distributed optimal control and QI literature. In Section III, we define and analyze SLPs for state and output feedback problems, and provide a novel characterization of stable closed loop system responses and the controllers that achieve them; the corresponding controller realization makes clear that SLCs imposed on the system responses carry over to the internal structure of the controller that achieves them.
In Section IV, we provide a catalog of SLCs that can be imposed on the system responses parameterized by the SLPs described in the previous section; in particular, we show that by appropriately selecting these SLCs, we can provide convex characterizations of all stabilizing controllers satisfying QI subspace constraints, convex constraints on the Youla parameter, finite impulse response (FIR) constraints, sparsity constraints, spatiotemporal constraints [1]-[4], [7], controller architecture constraints [5], [40], [41], and any combination thereof. In Section V, we define and analyze the SLS problem, which incorporates SLPs and SLCs into an optimal control problem, and show that the distributed optimal control problem ((5) in Section II-C) is a special case of SLS. We end with conclusions in Section VI.

II. PRELIMINARIES

A. System Model

We consider discrete time linear time invariant (LTI) systems of the form

x[t+1] = A x[t] + B_1 w[t] + B_2 u[t]    (1a)
z̄[t] = C_1 x[t] + D_11 w[t] + D_12 u[t]    (1b)
y[t] = C_2 x[t] + D_21 w[t] + D_22 u[t]    (1c)

where x, u, w, y, z̄ are the state vector, control action, external disturbance, measurement, and regulated output, respectively. Equation (1) can be written in state space form as

P = [ A  B_1  B_2 ; C_1  D_11  D_12 ; C_2  D_21  D_22 ] = [ P_11  P_12 ; P_21  P_22 ],

where P_ij = C_i (zI − A)^{-1} B_j + D_ij. We refer to P as the open loop plant model. Consider a dynamic output feedback control law u = K y. The controller K is assumed to have the state space realization

ξ[t+1] = A_k ξ[t] + B_k y[t]    (2a)
u[t] = C_k ξ[t] + D_k y[t],    (2b)

where ξ is the internal state of the controller. We have K = C_k (zI − A_k)^{-1} B_k + D_k. A schematic diagram of the interconnection of the plant P and the controller K is shown in Figure 1.
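As a quick numerical illustration of the transfer matrix formula P_ij = C_i (zI − A)^{-1} B_j + D_ij and of the spectral-component notation G[i], note that the components of P_22 are D_22 at i = 0 and C_2 A^{i-1} B_2 for i ≥ 1. The sketch below (with hypothetical plant data, not taken from the paper) checks that the truncated series Σ_i z^{-i} G[i] matches a direct evaluation of the transfer matrix:

```python
import numpy as np

# Illustrative plant data (hypothetical, for demonstration only)
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])
D22 = np.zeros((1, 1))

def spectral_components(A, B, C, D, T):
    """Spectral components G[i] of G(z) = C (zI - A)^{-1} B + D:
    G[0] = D and G[i] = C A^{i-1} B for i >= 1."""
    comps = [D]
    Ap = np.eye(A.shape[0])
    for _ in range(T):
        comps.append(C @ Ap @ B)
        Ap = Ap @ A
    return comps

# The truncated series sum_i z^{-i} G[i] should match the direct
# evaluation of C (zI - A)^{-1} B + D for |z| > rho(A).
z = 2.0
G = spectral_components(A, B2, C2, D22, 60)
series = sum(Gi / z**i for i, Gi in enumerate(G))
direct = C2 @ np.linalg.solve(z * np.eye(2) - A, B2) + D22
assert np.allclose(series, direct)
```

For a stable A, truncating the series at horizon T yields exactly the FIR approximation in the space F_T defined in the notation section.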
Fig. 1. Interconnection of the plant P and controller K: the plant blocks P_11, P_12, P_21, P_22 map (w, u) to (z̄, y), and K closes the loop from y to u.

The following assumptions are made throughout the paper.

Assumption 1: The interconnection in Figure 1 is well-posed, i.e., the matrix (I − D_22 D_k) is invertible.

Assumption 2: Both the plant and the controller realizations are stabilizable and detectable; i.e., (A, B_2) and (A_k, B_k) are stabilizable, and (A, C_2) and (A_k, C_k) are detectable.

The goal of the optimal control problem is to find a controller K that stabilizes the plant P and minimizes a suitably chosen norm³ of the closed loop transfer matrix from the external disturbance w to the regulated output z̄. This leads to the following centralized optimal control formulation:

minimize_K  ||P_11 + P_12 K (I − P_22 K)^{-1} P_21||
subject to  K internally stabilizes P.    (3)

³ Typical choices for the norm include H_2 and H_∞.

B. Youla Parameterization

A common technique for solving the optimal control problem (3) is via the Youla parameterization, which is based on a doubly coprime factorization of the plant, defined as follows.

Definition 1: A collection of stable transfer matrices U_r, V_r, X_r, Y_r, U_l, V_l, X_l, Y_l ∈ RH_∞ defines a doubly coprime factorization of P_22 if P_22 = V_r U_r^{-1} = U_l^{-1} V_l and

[ X_l  −Y_l ; −V_l  U_l ] [ U_r  Y_r ; V_r  X_r ] = I.

Such a doubly coprime factorization can always be computed if P_22 is stabilizable and detectable [42]. Let Q be the Youla parameter. From [42], problem (3) can be reformulated in terms of the Youla parameter as

minimize_Q  ||T_11 + T_12 Q T_21||
subject to  Q ∈ RH_∞    (4)

with T_11 = P_11 + P_12 Y_r U_l P_21, T_12 = −P_12 U_r, and T_21 = U_l P_21. The benefit of optimizing over the Youla parameter Q, rather than the controller K, is that (4) is convex with respect to the Youla parameter. One can then incorporate various convex design specifications [10] into (4) to customize the controller synthesis task.
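To make the convexity gain concrete, consider the special case of a stable scalar plant, where no nontrivial factorization is needed. The closed loop map K(1 − P_22 K)^{-1} is nonlinear in K, but it equals the free parameter Q exactly when K = Q(1 + P_22 Q)^{-1}; this sign convention is one valid instance of the parameterization for stable scalar plants and is a sketch, not the paper's general doubly coprime construction. A quick frequency-domain check with hypothetical transfer functions:

```python
import numpy as np

# Illustrative stable scalar plant and stable Youla parameter (hypothetical)
def P22(z):
    return 0.5 / (z - 0.3)   # stable: pole at 0.3

def Q(z):
    return 1.0 / (z - 0.2)   # stable: pole at 0.2

def K(z):
    # Controller recovered from Q; for a stable scalar plant this makes
    # the closed loop map K (1 - P22 K)^{-1} equal to Q identically.
    return Q(z) / (1.0 + P22(z) * Q(z))

# Verify M(K) := K (1 - P22 K)^{-1} recovers Q at sample points |z| > 1
for z in [1.5 + 0.0j, 2.0 + 1.0j, -3.0 + 0.5j]:
    M = K(z) / (1.0 - P22(z) * K(z))
    assert np.isclose(M, Q(z))
```

For unstable plants, the full doubly coprime machinery of Definition 1 is required, but the same principle holds: the closed loop is affine in Q, so norm minimization over Q is convex.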
Once the optimal Youla parameter Q, or a suitable approximation thereof, is found in (4), we reconstruct the controller by setting K = (Y_r − U_r Q)(X_r − V_r Q)^{-1}.

C. Structured Controller Synthesis and QI

We now move our discussion to the distributed optimal control problem. We follow the paradigm adopted in [21], [31]-[38], and focus on information asymmetry introduced by delays in the communication network; this is a reasonable modeling assumption when one has dedicated physical communication channels (e.g., fiber optic channels), but may not be valid in wireless settings. In the references cited above, locally acquired measurements are exchanged between sub-controllers subject to delays imposed by the communication network,⁴ which manifest as subspace constraints on the controller itself.⁵ Let C be a subspace enforcing the information sharing constraints imposed on the controller K. A distributed optimal control problem can then be formulated as [21], [30], [43], [44]:

minimize_K  ||P_11 + P_12 K (I − P_22 K)^{-1} P_21||
subject to  K internally stabilizes P, K ∈ C.    (5)

⁴ Note that this delay may range from 0, modeling instantaneous communication between sub-controllers, to infinite, modeling no communication between sub-controllers.
⁵ For continuous time systems, the delays can be encoded via subspaces that may reside within H_∞ as opposed to RH_∞.

A summary of the main results from the distributed optimal control literature [21], [31]-[38] can be given as follows: if the subspace C is quadratically invariant with respect to P_22 (K P_22 K ∈ C for all K ∈ C) [21], then the set of all stabilizing controllers lying in the subspace C can be parameterized by those stable transfer matrices Q ∈ RH_∞ satisfying M(Q) ∈ C, for M(Q) := K(I − P_22 K)^{-1} = (Y_r − U_r Q) U_l.⁶
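For sparsity subspaces, the algebraic QI test K P_22 K ∈ C reduces to boolean matrix arithmetic: represent the allowed pattern of C and the support of P_22 by 0/1 matrices and check whether the worst-case support of K P_22 K stays inside the pattern of C. A minimal sketch with a hypothetical three-subsystem chain (patterns chosen purely for illustration):

```python
import numpy as np

def qi_sparsity(K_supp, P_supp):
    """Check quadratic invariance of a sparsity subspace:
    C = {K : supp(K) subset of K_supp} is QI w.r.t. P22 when the
    worst-case support of K P22 K is contained in K_supp."""
    prod = (K_supp @ P_supp @ K_supp) > 0   # worst-case support of K P22 K
    return bool(np.all(K_supp[prod] == 1))

# Hypothetical chain of 3 subsystems: P22 support is tridiagonal
P_supp = np.array([[1, 1, 0],
                   [1, 1, 1],
                   [0, 1, 1]])

# Dense controllers are trivially QI; fully decentralized (diagonal)
# controllers are not QI for this chain.
K_dense = np.ones((3, 3), dtype=int)
K_diag = np.eye(3, dtype=int)

assert qi_sparsity(K_dense, P_supp)
assert not qi_sparsity(K_diag, P_supp)
```

This kind of check makes the limitation discussed in Section II-D easy to observe: once powers of the dynamics fill in the support of P_22, most localized patterns fail the test.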
Further, these conditions can be viewed as tight, in the sense that quadratic invariance is also a necessary condition [30], [43] for a subspace constraint C on the controller K to be enforceable via a convex constraint on the Youla parameter Q. This allows the optimal control problem (5) to be recast as the following convex model matching problem:

minimize_Q  ||T_11 + T_12 Q T_21||
subject to  Q ∈ RH_∞, M(Q) ∈ C.    (6)

D. QI imposes limitations on controller sparsity

When working with large-scale systems, it is natural to impose that sub-controllers only collect information from a local subset of all other sub-controllers. This can be enforced by setting the subspace constraint C in problem (5) to encode a suitable sparsity pattern K_ij = 0,⁷ for some i, j. However, if the plant P_22 is dense (i.e., if the underlying system is strongly connected), which may occur even if the system matrices (A, B_2, C_2) are sparse, then any such sparsity constraint is not quadratically invariant with respect to the plant P_22: this follows immediately from the algebraic definition of QI, K P_22 K ∈ C for all K ∈ C. As QI is a necessary and sufficient condition for the subspace constraint K ∈ C to be enforceable via a convex constraint on the Youla parameter Q, we conclude that for strongly connected systems, any sparsity constraint imposed on the controller K can only be enforced via a non-convex constraint on the Youla parameter. A major motivation for the SLA developed in this paper was to circumvent this limitation of the QI framework; we revisit this discussion in Section IV-C, and show, through the use of a simple example, that the SLA does indeed allow for these limitations to be overcome.
III. SYSTEM LEVEL PARAMETERIZATION

In this section, we propose a novel parameterization of internally stabilizing controllers centered around system responses, which are defined by the closed loop maps from process and measurement disturbances to state and control action. We show that for a given system, the set of stable closed loop system responses that are achievable by an internally stabilizing LTI controller is an affine subspace of RH_∞, and that the corresponding internally stabilizing controller achieving the desired system response admits a particularly simple and transparent realization. We begin by analyzing the state feedback case, as it has a simpler characterization and allows us to provide intuition about the construction of a controller that achieves a desired system response. With this intuition in hand, we present our results for the output feedback setting, which is the main focus of this paper. We conclude the section with a comparison of the pros and cons of using the SL and Youla parameterizations.

⁶ By definition, we have P_22 = V_r U_r^{-1} = U_l^{-1} V_l. This implies that the transfer matrices U_r and U_l are both invertible; therefore, M is an invertible affine map of the Youla parameter Q.
⁷ K_ij denotes the (i, j)-entry of the transfer matrix K.

A. State Feedback

Consider a state feedback model given by

P = [ A  B_1  B_2 ; C_1  D_11  D_12 ; I  0  0 ].    (7)

The z-transform of the state dynamics (1a) is given by

(zI − A) x = B_2 u + δ_x,    (8)

where we let δ_x := B_1 w denote the disturbance affecting the state. We define R to be the system response mapping the external disturbance δ_x to the state x, and M to be the system response mapping the disturbance δ_x to the control action u.
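These two maps can be checked numerically. For a static gain u = K x (an illustrative special case; the development below allows dynamic K), the closed loop satisfies x[t+1] = (A + B_2 K)x[t] + δ_x[t], so the spectral components of R and M are R[k] = (A + B_2 K)^{k-1} and M[k] = K (A + B_2 K)^{k-1} for k ≥ 1. A sketch, on hypothetical data, comparing a simulation against the convolution with these components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a 2-state, 1-input system and a stabilizing static gain
A = np.array([[1.0, 0.2], [0.0, 0.9]])
B2 = np.array([[0.0], [1.0]])
K = np.array([[-0.5, -0.8]])
Acl = A + B2 @ K
assert max(abs(np.linalg.eigvals(Acl))) < 1   # K is stabilizing

T = 30
dx = rng.standard_normal((T, 2))              # disturbance sequence delta_x

# Simulate x[t+1] = Acl x[t] + dx[t] from a zero initial condition
x = np.zeros((T + 1, 2))
for t in range(T):
    x[t + 1] = Acl @ x[t] + dx[t]

# Convolution with spectral components R[k] = Acl^{k-1}, M[k] = K Acl^{k-1}
def R_comp(k):
    return np.linalg.matrix_power(Acl, k - 1)

t = T                                         # compare at the final time
x_conv = sum(R_comp(k) @ dx[t - k] for k in range(1, t + 1))
assert np.allclose(x[t], x_conv)              # x = R delta_x

u_t = K @ x[t]
u_conv = sum(K @ R_comp(k) @ dx[t - k] for k in range(1, t + 1))
assert np.allclose(u_t, u_conv)               # u = M delta_x
```

The one-step delay visible here (R and M have no k = 0 component) is exactly the strict properness that appears below as the constraint R, M ∈ (1/z) RH_∞.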
For a given dynamic state feedback control law u = K x substituted into (8), we define the system response {R, M} achieved by the controller K to be

R = (zI − A − B_2 K)^{-1}
M = K (zI − A − B_2 K)^{-1},    (9)

from which it follows that x = R δ_x and u = M δ_x. Similarly, given transfer matrices {R, M}, we say that they define an achievable system response for the system (7) if there exists an LTI controller K such that x = R δ_x and u = M δ_x, for {R, M} as defined in equation (9). The main result of this subsection is an algebraic characterization of the set {R, M} of state feedback system responses that are achievable by an internally stabilizing controller K, as stated in the following theorem.

Theorem 1: For the state feedback system (7), the following are true:
(a) The affine subspace defined by

[ zI − A   −B_2 ] [ R ; M ] = I    (10a)
R, M ∈ (1/z) RH_∞    (10b)

parameterizes all system responses (9) achievable by an internally stabilizing state feedback controller K.
(b) For any transfer matrices {R, M} satisfying (10), the controller K = M R^{-1} achieves the desired system response (9).⁸ Further, if the controller K = M R^{-1} is implemented as in Figure 2, then it is internally stabilizing.

⁸ Note that for any transfer matrices {R, M} satisfying (10), the transfer matrix R is always invertible, because its leading spectral component (1/z)I is invertible. This is also true for the transfer matrices defined in equation (9).

The rest of this subsection is devoted to the proof of Theorem 1.

Necessity: The necessity of a stable and achievable system response {R, M} lying in the affine subspace (10) is shown in the following lemma.

Lemma 1 (Necessity of conditions (10)): Consider the state feedback system (7). Let {R, M} be the system response achieved by an internally stabilizing controller K. Then {R, M} is a solution of (10).

Proof: Equation (10a) follows directly from (8), which holds for the system response achieved by any controller. For an internally stabilizing controller, the system response {R, M} is in RH_∞ by definition of internal stability. From (9) and the properness of K, the system response is strictly proper, implying equation (10b) and completing the proof.

Remark 1: We show in Lemma 6 in Appendix A that the feasibility of (10) is equivalent to the stabilizability of the pair (A, B_2). In this sense, the conditions described in (10) provide an alternative definition of the stabilizability of a system. A dual argument is also provided to characterize the detectability of the pair (A, C_2).

Sufficiency: Here we show that for any system response {R, M} lying in the affine subspace (10), we can construct an internally stabilizing controller K that leads to the desired system response (9).

Fig. 2. The proposed state feedback controller structure, with R̃ = I − zR and M̃ = zM. The diagram interconnects the plant blocks B_2, 1/z, and A with the controller blocks R̃ and M̃, with internal signals δ̂_x and x̂.

Consider the block diagram shown in Figure 2, where R̃ = I − zR and M̃ = zM. It can be checked that zR̃, M̃ ∈ RH_∞, and hence the internal feedback loop between δ̂_x and the reference state trajectory x̂ is well defined. As is standard, we introduce external perturbations δ_x, δ_y, and δ_u into the system, and note that the perturbations entering other links of the block diagram can be expressed as a combination of (δ_x, δ_y, δ_u) being acted upon by some stable transfer matrices.⁹ Hence the standard definition of internal stability applies, and we can use a bounded-input bounded-output argument (e.g., Lemma 5.3 in [42]) to conclude that it suffices to check the stability of the nine closed loop transfer matrices from the perturbations (δ_x, δ_y, δ_u) to the internal variables (x, u, δ̂_x) to determine the internal stability of the structure as a whole. With this in mind, we can prove the sufficiency of Theorem 1 via the following lemma.

⁹ The matrix A may define an unstable system, but viewed as an element of F_0, it defines a stable (FIR) transfer matrix.

Lemma 2 (Sufficiency of conditions (10)): Consider the state feedback system (7). Given any system response {R, M} lying in the affine subspace described by (10), the state feedback controller K = M R^{-1}, with structure shown in Figure 2, internally stabilizes the plant. In addition, the desired system response, as specified by x = R δ_x and u = M δ_x, is achieved.

Proof: We first note that from Figure 2, we can express the state feedback controller K as K = M̃ (I − R̃)^{-1} = (zM)(zR)^{-1} = M R^{-1}. Now, for any system response {R, M} lying in the affine subspace described by (10), we construct a controller using the structure given in Figure 2. To show that the constructed controller internally stabilizes the plant, we list the following equations from Figure 2:

z x = A x + B_2 u + δ_x
u = M̃ δ̂_x + δ_u
δ̂_x = x + δ_y + R̃ δ̂_x.

Routine calculations show that the closed loop transfer matrices from (δ_x, δ_y, δ_u) to (x, u, δ̂_x) are given by

[ x ; u ; δ̂_x ] = [ R   −R̃ − RA   RB_2 ;
                   M   M̃ − MA   I + MB_2 ;
                   (1/z)I   I − (1/z)A   (1/z)B_2 ] [ δ_x ; δ_y ; δ_u ].    (11)

As all nine transfer matrices in (11) are stable, the implementation in Figure 2 is internally stable. Furthermore, the desired system response {R, M}, from δ_x to (x, u), is achieved.

Remark 2: The controller parameterization K = M R^{-1} can also be derived by rewriting (10) as

[ I − (1/z)A   −(1/z)B_2 ] [ zR ; zM ] = I,  zR, zM ∈ RH_∞.
Note that [ I − (1/z)A   −(1/z)B_2 ] is a left coprime factorization of the plant model. Classical methods therefore allow for the controller K = (zM)(zR)^{-1} = M R^{-1} to be obtained via the Youla parameterization. Although the controller can be implemented via the dynamic feedback gain K = M R^{-1}, we show in Section IV that the proposed realization in Figure 2 has significant advantages. Specifically, this implementation allows us to connect constraints imposed on the system response to constraints on the internal blocks of the controller implementation.

Summary: Theorem 1 provides a necessary and sufficient condition for the system response {R, M} to be stable and achievable, in that elements of the affine subspace defined by (10) parameterize all stable system responses achievable via state feedback, as well as the internally stabilizing controllers that achieve them. Further, Figure 2 provides an internally stabilizing realization for a controller achieving the desired response.

B. Output Feedback with D_22 = 0

We now extend the arguments of the previous subsection to the output feedback setting, and begin by considering the case of a strictly proper plant

P = [ A  B_1  B_2 ; C_1  D_11  D_12 ; C_2  D_21  0 ].    (12)

Letting δ_x[t] = B_1 w[t] denote the disturbance on the state, and δ_y[t] = D_21 w[t] denote the disturbance on the measurement, the dynamics defined by the plant (12) can be written as

x[t+1] = A x[t] + B_2 u[t] + δ_x[t]
y[t] = C_2 x[t] + δ_y[t].    (13)

Analogous to the state feedback case, we define a system response {R, M, N, L} from the perturbations (δ_x, δ_y) to the state and control input (x, u) via the following relation:

[ x ; u ] = [ R  N ; M  L ] [ δ_x ; δ_y ].    (14)

Substituting the output feedback control law u = K y into the z-transform of the system equation (13), we obtain

(zI − A − B_2 K C_2) x = δ_x + B_2 K δ_y.
For a proper controller K, the transfer matrix (zI − A − B_2 K C_2) is always invertible; hence we obtain the following equivalent expressions for the system response (14) in terms of an output feedback controller K:

R = (zI − A − B_2 K C_2)^{-1}
M = K C_2 R
N = R B_2 K
L = K + K C_2 R B_2 K.    (15)

We now present one of the main results of the paper: an algebraic characterization of the set {R, M, N, L} of output feedback system responses that are achievable by an internally stabilizing controller K.

Theorem 2: For the output feedback system (12), the following are true:
(a) The affine subspace described by

[ zI − A   −B_2 ] [ R  N ; M  L ] = [ I  0 ]    (16a)
[ R  N ; M  L ] [ zI − A ; −C_2 ] = [ I ; 0 ]    (16b)
R, M, N ∈ (1/z) RH_∞,  L ∈ RH_∞    (16c)

parameterizes all system responses (15) achievable by an internally stabilizing controller K.
(b) For any transfer matrices {R, M, N, L} satisfying (16), the controller K = L − M R^{-1} N achieves the desired response (15).¹⁰ Further, if the controller is implemented as in Figure 3, then it is internally stabilizing.

¹⁰ Note that for any transfer matrices {R, M, N, L} satisfying (16), the transfer matrix R is always invertible, because its leading spectral component (1/z)I is invertible. The same holds true for the transfer matrices defined in equation (15).

Necessity: The necessity of a stable and achievable system response {R, M, N, L} lying in the affine subspace (16) is shown in the following lemma.

Lemma 3 (Necessity of conditions (16)): Consider the output feedback system (12). Let {R, M, N, L}, with x = R δ_x + N δ_y and u = M δ_x + L δ_y, be the system response achieved by an internally stabilizing control law u = K y. Then {R, M, N, L} lies in the affine subspace described by (16).

Proof: Consider an internally stabilizing controller K with state space realization (2). Combining (2) with the system equation (13), we obtain the closed loop dynamics

[ z x ; z ξ ] = [ A + B_2 D_k C_2   B_2 C_k ; B_k C_2   A_k ] [ x ; ξ ] + [ I   B_2 D_k ; 0   B_k ] [ δ_x ; δ_y ].

From the assumption that K is internally stabilizing, we know that the state matrix of the above equation is a stable matrix (Lemma 5.2 in [42]). The system response achieved by u = K y is given, in packed (A, B, C, D) realization notation, by

[ R  N ; M  L ] = [ A + B_2 D_k C_2   B_2 C_k  |  I   B_2 D_k ;
                   B_k C_2   A_k  |  0   B_k ;
                   I   0  |  0   0 ;
                   D_k C_2   C_k  |  0   D_k ],    (17)

which satisfies (16c). In addition, it can be shown by routine calculation that (17) satisfies both (16a) and (16b) for arbitrary (A_k, B_k, C_k, D_k). This completes the proof.

Remark 3: We show in Lemma 7 of Appendix A that the feasibility of (16) is equivalent to the stabilizability and detectability of the triple (A, B_2, C_2). In this sense, the conditions described in (16) provide an alternative definition of stabilizability and detectability.

Fig. 3. The proposed output feedback controller structure, with R̃_+ = zR̃ = z(I − zR), M̃ = zM, and Ñ = −zN. The diagram interconnects the plant blocks B_2, 1/z, A, and C_2 with the controller blocks R̃_+, M̃, Ñ, and L, with internal state β.

Sufficiency: Here we show that for any system response {R, M, N, L} lying in the affine subspace (16), there exists an internally stabilizing controller K that leads to the desired system response (15). From the relations in (15), we notice the identity

K = L − K C_2 R B_2 K = L − M R^{-1} N.

This relation leads to the controller structure given in Figure 3, with R̃_+ = zR̃ = z(I − zR), M̃ = zM, and Ñ = −zN. As was the case in the state feedback setting, it can be verified that R̃_+, M̃, and Ñ are all in RH_∞. Therefore, the structure given in Figure 3 is well defined.
In addition, all of the blocks in Figure 3 are stable filters; thus, as long as the origin $(\mathbf{x}, \boldsymbol{\beta}) = (0, 0)$ is asymptotically stable, all signals internal to the block diagram will decay to zero. To check the internal stability of the structure, we introduce external perturbations $\boldsymbol{\delta}_x$, $\boldsymbol{\delta}_y$, $\boldsymbol{\delta}_u$, and $\boldsymbol{\delta}_\beta$ to the system. The perturbations appearing on other links of the block diagram can all be expressed as combinations of the perturbations $(\boldsymbol{\delta}_x, \boldsymbol{\delta}_y, \boldsymbol{\delta}_u, \boldsymbol{\delta}_\beta)$ acted upon by stable transfer matrices, and so it suffices to check the input-output stability of the closed loop transfer matrices from the perturbations $(\boldsymbol{\delta}_x, \boldsymbol{\delta}_y, \boldsymbol{\delta}_u, \boldsymbol{\delta}_\beta)$ to the controller signals $(\mathbf{x}, \mathbf{u}, \mathbf{y}, \boldsymbol{\beta})$ to determine the internal stability of the structure [42]. With this in mind, we can prove the sufficiency of Theorem 2 via the following lemma.

TABLE I: CLOSED LOOP MAPS FROM PERTURBATIONS TO INTERNAL VARIABLES

|  | $\boldsymbol{\delta}_x$ | $\boldsymbol{\delta}_y$ | $\boldsymbol{\delta}_u$ | $\boldsymbol{\delta}_\beta$ |
|---|---|---|---|---|
| $\mathbf{x}$ | $\mathbf{R}$ | $\mathbf{N}$ | $\mathbf{R} B_2$ | $\frac{1}{z}\mathbf{N} C_2$ |
| $\mathbf{u}$ | $\mathbf{M}$ | $\mathbf{L}$ | $I + \mathbf{M} B_2$ | $\frac{1}{z}\mathbf{L} C_2$ |
| $\mathbf{y}$ | $C_2 \mathbf{R}$ | $I + C_2 \mathbf{N}$ | $C_2 \mathbf{R} B_2$ | $\frac{1}{z}C_2 \mathbf{N} C_2$ |
| $\boldsymbol{\beta}$ | $-\frac{1}{z}B_2 \mathbf{M}$ | $-\frac{1}{z}B_2 \mathbf{L}$ | $-\frac{1}{z}B_2 \mathbf{M} B_2$ | $\frac{1}{z}I - \frac{1}{z^2}(A + B_2 \mathbf{L} C_2)$ |

Lemma 4 (Sufficiency of conditions (16)): Consider the output feedback system (12). For any system response $\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}$ lying in the affine subspace defined by (16), the controller $\mathbf{K} = \mathbf{L} - \mathbf{M} \mathbf{R}^{-1} \mathbf{N}$ (with structure shown in Figure 3) internally stabilizes the plant. In addition, the desired system response, as specified by $\mathbf{x} = \mathbf{R}\boldsymbol{\delta}_x + \mathbf{N}\boldsymbol{\delta}_y$ and $\mathbf{u} = \mathbf{M}\boldsymbol{\delta}_x + \mathbf{L}\boldsymbol{\delta}_y$, is achieved.

Proof: For any system response $\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}$ lying in the affine subspace defined by (16), we construct a controller using the structure given in Figure 3. We now check the stability of the closed loop transfer matrices from the perturbations $(\boldsymbol{\delta}_x, \boldsymbol{\delta}_y, \boldsymbol{\delta}_u, \boldsymbol{\delta}_\beta)$ to the internal variables $(\mathbf{x}, \mathbf{u}, \mathbf{y}, \boldsymbol{\beta})$.
We have the following equations from Figure 3:
$$z\mathbf{x} = A\mathbf{x} + B_2\mathbf{u} + \boldsymbol{\delta}_x, \qquad \mathbf{y} = C_2\mathbf{x} + \boldsymbol{\delta}_y,$$
$$z\boldsymbol{\beta} = \tilde{\mathbf{R}}^+\boldsymbol{\beta} + \tilde{\mathbf{N}}\mathbf{y} + \boldsymbol{\delta}_\beta, \qquad \mathbf{u} = \tilde{\mathbf{M}}\boldsymbol{\beta} + \mathbf{L}\mathbf{y} + \boldsymbol{\delta}_u.$$
Combining these equations with the relations in (16a)–(16b), we summarize the closed loop transfer matrices from $(\boldsymbol{\delta}_x, \boldsymbol{\delta}_y, \boldsymbol{\delta}_u, \boldsymbol{\delta}_\beta)$ to $(\mathbf{x}, \mathbf{u}, \mathbf{y}, \boldsymbol{\beta})$ in Table I. Equation (16c) implies that all sixteen transfer matrices in Table I are stable, so the implementation in Figure 3 is internally stable. Furthermore, the desired system response from $(\boldsymbol{\delta}_x, \boldsymbol{\delta}_y)$ to $(\mathbf{x}, \mathbf{u})$ is achieved.

The controller implementation of Figure 3 is governed by the following equations:
$$z\boldsymbol{\beta} = \tilde{\mathbf{R}}^+\boldsymbol{\beta} + \tilde{\mathbf{N}}\mathbf{y}, \qquad \mathbf{u} = \tilde{\mathbf{M}}\boldsymbol{\beta} + \mathbf{L}\mathbf{y}, \tag{18}$$
which can be informally interpreted as an extension of the state-space realization (2) of a controller $\mathbf{K}$. In particular, the realization equations (18) can be viewed as a state-space like implementation in which the constant matrices $A_K, B_K, C_K, D_K$ of the state-space realization (2) are replaced with the stable proper transfer matrices $\tilde{\mathbf{R}}^+, \tilde{\mathbf{M}}, \tilde{\mathbf{N}}, \mathbf{L}$. The benefit of this implementation is that arbitrary convex constraints imposed on the transfer matrices $\tilde{\mathbf{R}}^+, \tilde{\mathbf{M}}, \tilde{\mathbf{N}}, \mathbf{L}$ carry over directly to the controller implementation. We show in Section IV that this allows a class of structural (locality) constraints to be imposed on the system response (and hence the controller) that are crucial for extending controller synthesis methods to large-scale systems. In contrast, we recall that imposing general convex constraints on the controller $\mathbf{K}$, or directly on its state-space realization $A_K, B_K, C_K, D_K$, does not lead to convex optimal control problems.

Remark 4: The controller implementation (18) admits the following equivalent representation
$$\begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \begin{bmatrix} z\boldsymbol{\beta} \\ \mathbf{y} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{u} \end{bmatrix}, \tag{19}$$
allowing for an interesting interpretation of the controller $\mathbf{K} = \mathbf{L} - \mathbf{M}\mathbf{R}^{-1}\mathbf{N}$ in terms of Rosenbrock system matrix representations [13].
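A minimal time-domain sketch of the filter-bank realization (18), assuming each of $\tilde{\mathbf{R}}^+, \tilde{\mathbf{M}}, \tilde{\mathbf{N}}, \mathbf{L}$ is specified as a finite list of impulse-response matrices. The class name and the buffering conventions are our own illustrative choices, not from the paper:

```python
from collections import deque
import numpy as np

class FilterBankController:
    """Sketch of controller implementation (18):
        z beta = Rplus(z) beta + Ntil(z) y,    u = Mtil(z) beta + L(z) y,
    where each transfer matrix is given as a list of impulse-response
    matrices [X[0], X[1], ...] (X[k] is the coefficient of z^{-k})."""

    def __init__(self, Rplus, Mtil, Ntil, L):
        self.Rplus, self.Mtil, self.Ntil, self.L = Rplus, Mtil, Ntil, L
        T = max(len(Rplus), len(Mtil), len(Ntil), len(L))
        n_beta = Rplus[0].shape[0]
        self.betas = deque([np.zeros(n_beta)] * T, maxlen=T)   # betas[k] = beta[t-k]
        self.ys = deque([], maxlen=T)                          # ys[k]    = y[t-k]

    def step(self, y):
        """Consume y[t], return u[t], and advance the internal state beta."""
        self.ys.appendleft(np.asarray(y, dtype=float))
        # u[t] = sum_k Mtil[k] beta[t-k] + sum_k L[k] y[t-k]
        u = sum(Mk @ bk for Mk, bk in zip(self.Mtil, self.betas)) \
          + sum(Lk @ yk for Lk, yk in zip(self.L, self.ys))
        # beta[t+1] = sum_k Rplus[k] beta[t-k] + sum_k Ntil[k] y[t-k]
        beta_next = sum(Rk @ bk for Rk, bk in zip(self.Rplus, self.betas)) \
                  + sum(Nk @ yk for Nk, yk in zip(self.Ntil, self.ys))
        self.betas.appendleft(beta_next)
        return u
```

Because each sub-controller only convolves local signals with local filter coefficients, sparsity imposed on the system response carries over directly to this implementation, as discussed in Section IV.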
In particular, the system response (14) specifies a Rosenbrock system matrix representation of the controller that achieves it.

Summary: Theorem 2 provides a necessary and sufficient condition for the system response $\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}$ to be stable and achievable, in that elements of the affine subspace defined by (16) parameterize all stable achievable system responses, as well as all internally stabilizing controllers that achieve them. Further, Figure 3 provides an internally stabilizing realization for a controller achieving the desired response.

C. Specialized Implementations for Open-loop Stable Systems

In this subsection, we propose two specializations of the controller implementation in Figure 3 for open loop stable systems. From Table I, if we set $\boldsymbol{\delta}_u$ and $\boldsymbol{\delta}_\beta$ to $0$, it follows that $\boldsymbol{\beta} = -\frac{1}{z}B_2\mathbf{u}$. This leads to a simpler controller implementation given by $\mathbf{u} = \mathbf{L}\mathbf{y} - \mathbf{M}B_2\mathbf{u}$, with the corresponding controller structure shown in Figure 4(b). This implementation can also be obtained from the identity $\mathbf{K} = (I + \mathbf{M}B_2)^{-1}\mathbf{L}$, which follows from the relations in (15). Unfortunately, as shown below, this implementation is internally stable only when the open loop plant is stable.

For the controller implementation and structure shown in Figure 4(b), the closed loop transfer matrices from perturbations to the internal variables are given by
$$\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{N} & \mathbf{R}B_2 & (zI - A)^{-1}B_2 \\ \mathbf{M} & \mathbf{L} & I + \mathbf{M}B_2 & I \end{bmatrix} \begin{bmatrix} \boldsymbol{\delta}_x \\ \boldsymbol{\delta}_y \\ \boldsymbol{\delta}_u \\ \boldsymbol{\delta}_\beta \end{bmatrix}. \tag{20}$$
When $A$ defines a stable system, the implementation in Figure 4(b) is internally stable. However, when the open loop plant is unstable (and the realization $(A, B_2)$ is stabilizable), the transfer matrix $(zI - A)^{-1}B_2$ is unstable. From (20), the effect of the perturbation $\boldsymbol{\delta}_\beta$ can then lead to instability of the closed loop system. This structure thus shows the necessity of introducing and analyzing the effects of the perturbation $\boldsymbol{\delta}_\beta$ on the controller internal state.
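The role of $\boldsymbol{\delta}_\beta$ can be seen directly in simulation: the $(\mathbf{x}, \boldsymbol{\delta}_\beta)$ entry $(zI - A)^{-1}B_2$ of (20) is a purely open-loop path, so a single pulse on $\boldsymbol{\delta}_\beta$ excites any unstable plant modes without the feedback loop acting on them. A minimal sketch (the plant matrices below are illustrative, not from the paper):

```python
import numpy as np

def x_response_to_dbeta(A, B2, T):
    """Norms of x[t] when a unit pulse delta_beta drives the (x, delta_beta)
    entry (zI - A)^{-1} B2 of (20); this path bypasses the stabilizing loop."""
    x, norms = np.zeros(A.shape[0]), []
    pulse = np.ones(B2.shape[1])
    for t in range(T):
        # x[t+1] = A x[t] + B2 delta_beta[t], with delta_beta a pulse at t = 0
        x = A @ x + B2 @ (pulse if t == 0 else np.zeros_like(pulse))
        norms.append(np.linalg.norm(x))
    return norms

B2 = np.array([[0.], [1.]])
grow  = x_response_to_dbeta(np.array([[1.2, 1.0], [0.0, 1.1]]), B2, 30)  # unstable A
decay = x_response_to_dbeta(np.array([[0.5, 1.0], [0.0, 0.4]]), B2, 30)  # stable A
```

With the unstable $A$ the response to the pulse grows without bound, while with the stable $A$ it decays, matching the dichotomy described above.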
Alternatively, if we start with the identity $\mathbf{K} = \mathbf{L}(I + C_2\mathbf{N})^{-1}$, which also follows from (15), we obtain the controller structure shown in Figure 4(c). The closed loop map from perturbations to internal signals is then given by
$$\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \\ \boldsymbol{\beta} \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{N} & \mathbf{R}B_2 \\ \mathbf{M} & \mathbf{L} & I + \mathbf{M}B_2 \\ C_2(zI-A)^{-1} & I & C_2(zI-A)^{-1}B_2 \end{bmatrix} \begin{bmatrix} \boldsymbol{\delta}_x \\ \boldsymbol{\delta}_y \\ \boldsymbol{\delta}_u \end{bmatrix}.$$
As can be seen, assuming that the realization $(A, C_2)$ is detectable, this controller implementation is once again internally stable only when the open loop plant is stable. This structure thus shows the necessity of introducing and analyzing the effects of perturbations on the controller internal state $\boldsymbol{\beta}$.

Of course, when the open loop system is stable, the controller structures illustrated in Figure 4 may be appealing, as they are simpler and easier to implement. In fact, we can show that the controller structure in Figure 4(b) is an alternative realization of the internal model control (IMC) principle [45], [46] as applied to the Youla parameterization. Specifically, for open loop stable systems, the Youla parameter is given by $\mathbf{Q} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1}$. As we show in Lemma 5 of Section IV-A, the Youla parameter $\mathbf{Q}$ is equal to the system response $\mathbf{L}$ for open loop stable systems. We then have
$$\begin{aligned} \mathbf{u} &= \mathbf{L}\mathbf{y} - \mathbf{M}B_2\mathbf{u} &\text{(21a)} \\ &= \mathbf{Q}\mathbf{y} - \mathbf{L}C_2(zI-A)^{-1}B_2\mathbf{u} &\text{(21b)} \\ &= \mathbf{Q}\mathbf{y} - \mathbf{Q}P_{22}\mathbf{u} &\text{(21c)} \\ &= \mathbf{Q}(\mathbf{y} - P_{22}\mathbf{u}), &\text{(21d)} \end{aligned}$$
where (21b) is obtained by substituting $\mathbf{M} = \mathbf{L}C_2(zI-A)^{-1}$ from (16b) into (21a). Equation (21d) is exactly IMC. Thus, we see that IMC is equivalent to our proposed parameterization (and the simplified representation shown in Figure 4(b)) for open loop stable systems.

Fig. 4. Alternative controller structures for stable systems: (a) Internal Model Control; (b) Structure 1; (c) Structure 2.

D.
Output Feedback with $D_{22} \neq 0$

Finally, for a general proper plant model (1) with $D_{22} \neq 0$, we define a new measurement $\bar{\mathbf{y}}[t] = \mathbf{y}[t] - D_{22}\mathbf{u}[t]$. This leads to the controller structure shown in Figure 5. In this case, the closed loop transfer matrices from $\boldsymbol{\delta}_u$ to the internal variables become
$$\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \\ \mathbf{y} \\ \boldsymbol{\beta} \end{bmatrix} = \begin{bmatrix} \mathbf{R}B_2 + \mathbf{N}D_{22} \\ I + \mathbf{M}B_2 + \mathbf{L}D_{22} \\ C_2\mathbf{R}B_2 + D_{22} + C_2\mathbf{N}D_{22} \\ -\frac{1}{z}B_2(\mathbf{M}B_2 + \mathbf{L}D_{22}) \end{bmatrix} \boldsymbol{\delta}_u.$$
The remaining entries of Table I remain the same. Therefore, the controller structure shown in Figure 5 internally stabilizes the plant.

Fig. 5. The proposed output feedback controller structure for $D_{22} \neq 0$.

E. System Level and Youla Parameterizations

A key difference between the SL and Youla parameterizations is the manner in which they characterize the achievable closed loop responses of a system. The Youla parameterization provides an image space representation of the achievable system responses, parameterized explicitly by the free Youla parameter. This parameterization lends itself naturally to efficient computation via the standard and theoretically supported approach [10] of restricting the Youla parameter and objective function to be FIR. However, despite this ease of computation, as alluded to earlier and discussed in detail in Section IV-D, imposing sparsity constraints on the controller via the Youla parameter is in general intractable. In contrast, the proposed SL parameterization specifies a kernel space representation of achievable system responses, parameterized implicitly by the affine space (16a)–(16b). While our discussion highlights the benefits and flexibility of the SL approach, there is the important caveat that the affine constraints (16a)–(16b) are in general infinite dimensional. Hence, although the parameterization is a convex one, it does not immediately lend itself to efficient computation.
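The two representations describe the same closed loop objects. For instance, the response $\mathbf{L}$ produced by the kernel-space realization (17) coincides, coefficient by coefficient, with the Youla-style expression $\mathbf{K}(I - P_{22}\mathbf{K})^{-1}$ (cf. Lemma 5 of Section IV-A). A sketch comparing truncated impulse responses, assuming $D_{22} = 0$ and randomly drawn data; the identity holds as an equality of formal power series, so stability is not needed for the comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, nk, T = 3, 2, 2, 2, 10     # plant/controller dimensions, series truncation

A  = rng.standard_normal((n, n));   B2 = rng.standard_normal((n, m))
C2 = rng.standard_normal((p, n))
Ak = rng.standard_normal((nk, nk)); Bk = rng.standard_normal((nk, p))
Ck = rng.standard_normal((m, nk));  Dk = rng.standard_normal((m, p))

def markov(Ax, Bx, Cx, Dx, T):
    """Impulse-response coefficients [X[0], ..., X[T]] of Cx (zI - Ax)^{-1} Bx + Dx."""
    return [Dx] + [Cx @ np.linalg.matrix_power(Ax, t - 1) @ Bx for t in range(1, T + 1)]

def conv(X, Y):
    """Coefficient-wise product of two truncated power series in z^{-1}."""
    return [sum(X[s] @ Y[t - s] for s in range(t + 1)) for t in range(len(X))]

def inv_series(H):
    """Inverse of a proper series H with invertible H[0]."""
    G = [np.linalg.inv(H[0])]
    for t in range(1, len(H)):
        G.append(-G[0] @ sum(H[s] @ G[t - s] for s in range(1, t + 1)))
    return G

K   = markov(Ak, Bk, Ck, Dk, T)                 # controller transfer matrix
P22 = markov(A, B2, C2, np.zeros((p, m)), T)    # plant map from u to y

PK = conv(P22, K)
ImPK = [np.eye(p) - PK[0]] + [-PK[t] for t in range(1, T + 1)]
L_lemma = conv(K, inv_series(ImPK))             # K (I - P22 K)^{-1}

# L from the closed-loop realization (17): the (u, delta_y) block.
Acl = np.block([[A + B2 @ Dk @ C2, B2 @ Ck], [Bk @ C2, Ak]])
Bcl = np.vstack([B2 @ Dk, Bk])
Ccl = np.hstack([Dk @ C2, Ck])
L_17 = markov(Acl, Bcl, Ccl, Dk, T)
```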
In Section IV-E we show that imposing FIR constraints on the system responses leads to a finite-dimensional optimization problem, and further show that such constraints are feasible if the system is controllable and observable.

IV. SYSTEM LEVEL CONSTRAINTS

An advantage of the parameterizations described in the previous section is that they allow us to impose additional constraints on the system response and the corresponding internal structure of the controller. These constraints may be in the form of structural (subspace) constraints on the response, or may capture a suitable measure of system performance: in this section, we provide a catalog of useful SLCs that can be naturally incorporated into the SLPs described in the previous section. In addition to all of the performance specifications described in [10], we also show that QI subspace constraints are a special case of SLCs. We then provide an example of why one may wish to go beyond QI subspace constraints to localized (sparse) subspace constraints on the system response, and show that such constraints can be trivially imposed in our framework. As far as we are aware, no other parameterizations [9], [12], [15]–[17], [21] allow such constraints to be tractably enforced for general (i.e., strongly connected) systems. As such, we provide here a description of the largest known class of constrained stabilizing controllers that admit a convex parameterization. Further, as we show in our companion paper [47], it is this ability to impose locality constraints on the controller structure via convex constraints that allows us to scale the methods proposed in [10], [21] to large-scale systems.
Before proceeding, we emphasize that although the Youla parameterization and co-prime factors are needed to prove the results presented in Sections IV-A and IV-B, they are only used to establish connections between the Youla/QI parameterizations and the SLA. The SLPs presented in the previous section require neither the Youla parameterization nor co-prime factors.

A. Constraints on the Youla Parameter

We show that any constraint imposed on the Youla parameter can be translated into a SLC, and vice versa. In particular, if this constraint is convex, then so is the corresponding SLC. Consider the following modification of the standard Youla parameterization, which characterizes a set of constrained internally stabilizing controllers $\mathbf{K}$ for a plant (12):
$$\mathbf{K} = (Y_r - U_r\mathbf{Q})(X_r - V_r\mathbf{Q})^{-1}, \quad \mathbf{Q} \in \mathcal{Q} \cap \mathcal{RH}_\infty. \tag{22}$$
Here the expression for $\mathbf{K}$ is in terms of the co-prime factors defined in Section II-B, and $\mathcal{Q}$ is an arbitrary set; if we take $\mathcal{Q} = \mathcal{RH}_\infty$, we recover the standard Youla parameterization. Similarly, if we take $\mathcal{Q}$ to be a QI subspace constraint, we recover a distributed optimal control problem that admits a convex parameterization: we discuss the connection between QI and SLCs in more detail in the next subsection. Further, if the plant is open-loop stable or has special structure, it may be desirable to enforce non-QI constraints on the Youla parameter. In general, one can use this expression to characterize all possible constrained internally stabilizing controllers by suitably varying the set $\mathcal{Q}$,^11 and hence this formulation is as general as possible. We now show that an equivalent parameterization can be given in terms of a SLC.
Theorem 3: The set of constrained internally stabilizing controllers described by (22) can be equivalently expressed as $\mathbf{K} = \mathbf{L} - \mathbf{M}\mathbf{R}^{-1}\mathbf{N}$, where the system response $\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}$ lies in the set
$$\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L} \mid \text{(16a)–(16c) hold}, \ \mathbf{L} \in \mathcal{M}(\mathcal{Q})\}, \tag{23}$$
for $\mathcal{M}(\mathbf{Q}) := \mathbf{K}(I - P_{22}\mathbf{K})^{-1} = (Y_r - U_r\mathbf{Q})U_l$ the invertible affine map defined in Section II-C. Further, this parameterization is convex if and only if $\mathcal{Q}$ is convex.

In order to prove this result, we first need to understand the relationship between the controller $\mathbf{K}$, the Youla parameter $\mathbf{Q}$, and the system response $\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}$.

^11 In particular, to ensure that $\mathbf{K} \in \mathcal{C}$, it suffices to enforce that $(Y_r - U_r\mathbf{Q})(X_r - V_r\mathbf{Q})^{-1} \in \mathcal{C}$.

Lemma 5: Let $\mathbf{L}$ be defined as in (15), and the invertible affine map $\mathcal{M}$ be defined as in Section II-C. We then have that
$$\mathbf{L} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1} = \mathcal{M}(\mathbf{Q}). \tag{24}$$

Proof: From the equations $\mathbf{u} = \mathbf{K}\mathbf{y}$ and $\mathbf{y} = P_{21}\mathbf{w} + P_{22}\mathbf{u}$, we can eliminate $\mathbf{u}$ and express $\mathbf{y}$ as $\mathbf{y} = (I - P_{22}\mathbf{K})^{-1}P_{21}\mathbf{w}$. We then have that
$$\mathbf{u} = \mathbf{K}\mathbf{y} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1}P_{21}\mathbf{w}. \tag{25}$$
Recall that we define $\boldsymbol{\delta}_x = B_1\mathbf{w}$ and $\boldsymbol{\delta}_y = D_{21}\mathbf{w}$. As a result, we have $P_{21}\mathbf{w} = C_2(zI - A)^{-1}\boldsymbol{\delta}_x + \boldsymbol{\delta}_y$. Substituting this identity into (25) yields
$$\mathbf{u} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1}\left[C_2(zI - A)^{-1}\boldsymbol{\delta}_x + \boldsymbol{\delta}_y\right]. \tag{26}$$
By definition, $\mathbf{L}$ is the closed loop mapping from $\boldsymbol{\delta}_y$ to $\mathbf{u}$. Equation (26) then implies that $\mathbf{L} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1}$. From [44], [48] (cf. Section II-C), we have $\mathbf{K}(I - P_{22}\mathbf{K})^{-1} = \mathcal{M}(\mathbf{Q})$, which completes the proof.

Proof of Theorem 3: The equivalence between the parameterizations (22) and (23) is readily obtained from Lemma 5. As $\mathcal{M}$ is an invertible affine mapping between $\mathbf{L}$ and $\mathbf{Q}$, any convex constraint imposed on the Youla parameter $\mathbf{Q}$ can be equivalently translated into a convex SLC imposed on $\mathbf{L}$, and vice versa.

B.
Quadratically Invariant Subspace Constraints

Recall that for a subspace $\mathcal{C}$ that is quadratically invariant with respect to a plant $P_{22}$, the set of internally stabilizing controllers $\mathbf{K}$ that lie within the subspace $\mathcal{C}$ can be expressed as the set of stable transfer matrices $\mathbf{Q} \in \mathcal{RH}_\infty$ satisfying $\mathcal{M}(\mathbf{Q}) \in \mathcal{C}$, for $\mathcal{M}$ the invertible affine map defined in Section II-C. We therefore have the following corollary to Theorem 3.

Corollary 1: Let $\mathcal{C}$ be a subspace constraint that is quadratically invariant with respect to $P_{22}$. Then the set of internally stabilizing controllers satisfying $\mathbf{K} \in \mathcal{C}$ can be parameterized as in Theorem 3 with $\mathbf{L} = \mathcal{M}(\mathbf{Q}) \in \mathcal{C}$.

Proof: From Lemma 5, we have $\mathbf{L} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1}$. Invoking Theorem 14 of [21], we have that $\mathbf{K} \in \mathcal{C}$ if and only if $\mathbf{L} = \mathbf{K}(I - P_{22}\mathbf{K})^{-1} \in \mathcal{C}$. The claim then follows immediately from Theorem 3.

Note that Corollary 1 holds true for stable and unstable plants $P$. Therefore, in order to parameterize the set of internally stabilizing controllers lying in $\mathcal{C}$, we do not need to assume the existence of an initial strongly stabilizing controller as in [21], nor do we need to perform a doubly co-prime factorization as in [44]. Thus we see that QI subspace constraints are a special case of SLCs.

Finally, we note that in [30] and [43], the authors show that QI is necessary for a subspace constraint $\mathcal{C}$ on the controller $\mathbf{K}$ to be enforceable via a convex constraint on the Youla parameter $\mathbf{Q}$. However, when $\mathcal{C}$ is not a subspace constraint, no general methods exist to determine whether the set of internally stabilizing controllers lying in $\mathcal{C}$ admits a convex representation. In contrast, determining the convexity of a SLC is straightforward.

C.
Beyond QI

Before introducing the class of localized SLCs, we present a simple example for which the QI framework fails to capture an "obvious" controller with localized structure, but for which the SLA can. This example also serves to illustrate the importance of locality in achieving scalability of controller implementation. Our companion paper [47] shows how locality further leads to scalability of controller synthesis.

Example 1: Consider the optimal control problem
$$\begin{aligned} \underset{\mathbf{u}}{\text{minimize}} \quad & \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T} \mathbb{E}\,\|x[t]\|_2^2 \\ \text{subject to} \quad & x[t+1] = Ax[t] + u[t] + w[t], \end{aligned} \tag{27}$$
with disturbance $w[t] \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I)$. We assume full state feedback, i.e., the control action at time $t$ can be expressed as $u[t] = f(x[0:t])$ for some function $f$. An optimal control policy $u^\star$ for this LQR problem is easily seen to be given by $u^\star[t] = -Ax[t]$.

Further suppose that the state matrix $A$ is sparse, and let its support define the adjacency matrix of a graph $\mathcal{G}$, for which we identify the $i$th node with the corresponding state/control pair $(x_i, u_i)$. In this case, the optimal control policy $u^\star$ can be implemented in a localized manner. In particular, in order to implement the state feedback policy for the $i$th actuator $u_i$, only those states $x_j$ for which $A_{ij} \neq 0$ need to be collected; thus only those states corresponding to immediate neighbors of node $i$ in the graph $\mathcal{G}$, i.e., only local states, need to be collected to compute the corresponding control action, leading to a localized implementation. As we discuss in our companion paper [47], the idea of locality is essential to allowing controller synthesis and implementation to scale to arbitrarily large systems, and hence such a structured controller is desirable.
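A minimal simulation of Example 1, assuming a chain graph with a tridiagonal $A$ of illustrative values: the optimal policy $u^\star[t] = -Ax[t]$ is localized (each $u_i$ reads only neighboring states) and deadbeat (the closed loop is $x[t+1] = w[t]$, so a disturbance is rejected in one step):

```python
import numpy as np

# Chain of 5 nodes: tridiagonal A (illustrative values), B2 = I as in (27).
N = 5
A = 0.4 * np.eye(N) + 0.2 * (np.eye(N, k=1) + np.eye(N, k=-1))

# The policy u*[t] = -A x[t] is localized: u_i depends only on x_j with
# A_ij != 0, i.e., on node i and its immediate neighbors in the chain.
neighbors_only = all(A[i, j] == 0 for i in range(N) for j in range(N) if abs(i - j) > 1)

# Closed loop x[t+1] = A x + u + w = w: deadbeat disturbance rejection.
rng = np.random.default_rng(0)
x = rng.standard_normal(N)          # state excited by a disturbance at t = 0
hist = []
for t in range(3):
    u = -A @ x                      # each entry computable from local states only
    x = A @ x + u                   # no further disturbance
    hist.append(x.copy())
```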
Now suppose that we naively attempt to solve the optimal control problem (27) by converting it to its equivalent $\mathcal{H}_2$ model matching problem (5) and constraining the controller $\mathbf{K}$ to have the same support as $A$, i.e., $\mathbf{K} = \sum_{t=0}^{\infty} \frac{1}{z^t} K[t]$ with $\text{supp}(K[t]) \subseteq \text{supp}(A)$. If the graph $\mathcal{G}$ is strongly connected, then any sparsity constraint of the form $\mathbf{K}_{ij} = 0$ is not QI with respect to the plant $P_{22} = (zI - A)^{-1}$. To see this, note that if the graph $\mathcal{G}$ is strongly connected, then $P_{22}$ is a dense transfer matrix: it then follows immediately that any subspace $\mathcal{C}$ enforcing sparsity constraints on $\mathbf{K}$ fails to satisfy $\mathbf{K}P_{22}\mathbf{K} \in \mathcal{C}$ for all $\mathbf{K} \in \mathcal{C}$, and hence is not QI with respect to $P_{22}$. The results of [30] further allow us to conclude that computing such a structured controller can never be done using convex programming when using the Youla parameterization.

In contrast, in the case of a full control ($B_2 = I$) problem, the condition (10) simplifies to
$$(zI - A)\mathbf{R} - \mathbf{M} = I, \quad \mathbf{R}, \mathbf{M} \in \tfrac{1}{z}\mathcal{RH}_\infty.$$
Again, suppose that we wish to synthesize an optimal controller that has a communication topology given by the support of $A$; from the above implementation, it suffices to constrain the supports of the transfer matrices $\mathbf{R}$ and $\mathbf{M}$ to be subsets of that of $A$. It can be checked that $\mathbf{R} = \frac{1}{z}I$ and $\mathbf{M} = -\frac{1}{z}A$ satisfy the above constraints, and recover the globally optimal controller $\mathbf{K} = -A$.

D. Subspace and Sparsity Constraints

Motivated by the previous example, we consider here subspace SLCs, with a particular emphasis on those that encode sparse structure in the system response and corresponding controller implementation. Let $\mathcal{L}$ be a subspace of $\mathcal{RH}_\infty$. We can parameterize all stable achievable system responses that lie in this subspace by adding the following SLC to the parameterization of Theorem 2:
$$\begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \in \mathcal{L}. \tag{28}$$
Of particular interest are subspaces $\mathcal{L}$ that define transfer matrices of sparse support.
An immediate benefit of enforcing such sparsity constraints on the system response is that implementing the resulting controller (18) can be done in a localized way, i.e., each controller state $\beta_i$ and control action $u_i$ can be computed using a local subset (as defined by the support of the system response) of the global controller state $\boldsymbol{\beta}$ and sensor measurements $\mathbf{y}$. For this reason, we refer to the constraint (28) as a localized SLC when it defines a subspace with sparse support. As we show in our companion paper [47], such localized constraints further allow the resulting system response to be computed in a localized way, i.e., the global computation decomposes naturally into decoupled subproblems that depend only on local sub-matrices of the state-space representation (1). Clearly, both of these features are extremely desirable when computing controllers for large-scale systems. To the best of our knowledge, such constraints cannot be enforced via convex constraints using existing controller parameterizations [9], [12], [15]–[17], [21] for general systems.

A caveat of our approach is that although arbitrary subspace structure can be enforced on the system response, it is possible that the intersection of the affine space described in Theorem 2 with the specified subspace is empty. Indeed, selecting an appropriate (feasible) localized SLC, as defined by the subspace $\mathcal{L}$, is a subtle task: it depends on an interplay between actuator and sensor density, information exchange delay, and disturbance propagation delay. Formally defining and analyzing a procedure for designing a localized SLC is beyond the scope of this paper: as such, we refer the reader to our recent paper [5], in which we present a method that allows for the joint design of an actuator architecture and a corresponding feasible localized SLC.

E.
FIR Constraints

Given the parameterization of stabilizing controllers of Theorem 2, it is straightforward to enforce that a system response be FIR with horizon $T$ via the following SLC:
$$\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L} \in \mathcal{F}_T. \tag{29}$$
Whereas the pros and cons of deadbeat control in the centralized setting are well studied [49]–[51], we argue here that imposing an appropriately tuned FIR SLC has benefits that are specific to the distributed large-scale setting:

(a) The controller achieving the desired system response can be implemented using the FIR filter banks $\tilde{\mathbf{R}}^+, \tilde{\mathbf{M}}, \tilde{\mathbf{N}}, \mathbf{L} \in \mathcal{F}_T$, as illustrated in Figure 3. This simplicity of implementation is extremely helpful when applying these methods in practice.

(b) When a FIR SLC is imposed, the resulting set of stable achievable system responses and corresponding controllers admits a finite dimensional representation: specifically, the constraints specified in Theorem 2 need only be applied to the impulse response elements $\{R[t], M[t], N[t], L[t]\}_{t=0}^{T}$.

Remark 5: It should be noted that the computational benefits claimed above hold only for discrete time systems. For continuous time systems, a FIR transfer matrix is still an infinite dimensional object, and hence the resulting parameterizations and constraints are in general infinite dimensional as well.

Remark 6: The complexity of local implementations using FIR filter banks scales linearly with the horizon $T$; an interesting direction for future work is to determine whether infinite impulse response (IIR) system responses lead to simpler controller implementations via state-space realizations.

We conclude this subsection by showing that such FIR constraints are always feasible, for suitably chosen horizons $T$, if the system is controllable and observable.

Theorem 4: The SLP (16) admits a FIR solution if the triple $(A, B_2, C_2)$ is controllable and observable.
Proof: By definition, if $(A, B_2)$ is controllable, then there exist FIR transfer matrices $(\mathbf{R}_1, \mathbf{M}_1) \in \mathcal{F}_{T_1}$ satisfying (10) for some finite $T_1$. Similarly, if $(A, C_2)$ is observable, then there exist FIR transfer matrices $(\mathbf{R}_2, \mathbf{N}_2) \in \mathcal{F}_{T_2}$ satisfying (37) for some finite $T_2$. When $(A, B_2, C_2)$ is controllable and observable, the following FIR transfer matrices can be verified to lie in the affine space (16):
$$\begin{aligned} \mathbf{R} &= \mathbf{R}_1 + \mathbf{R}_2 - \mathbf{R}_1(zI - A)\mathbf{R}_2 &\text{(30a)} \\ \mathbf{M} &= \mathbf{M}_1 - \mathbf{M}_1(zI - A)\mathbf{R}_2 &\text{(30b)} \\ \mathbf{N} &= \mathbf{N}_2 - \mathbf{R}_1(zI - A)\mathbf{N}_2 &\text{(30c)} \\ \mathbf{L} &= -\mathbf{M}_1(zI - A)\mathbf{N}_2. &\text{(30d)} \end{aligned}$$

Finally, we note that recently developed relaxations [52] of the SLP can be used when such FIR constraints cannot be satisfied. This may occur, for instance, when the underlying system is only stabilizable and/or detectable.

F. Intersections of SLCs and Spatiotemporal Constraints

Another major benefit of SLCs is that several such constraints can be imposed on the system response at once. Further, as convex sets are closed under intersection, convex SLCs are also closed under intersection. To illustrate the usefulness of this property, consider the intersection of a QI subspace SLC (enforcing information exchange constraints between sub-controllers), a FIR SLC, and a localized SLC. The resulting SLC can be interpreted as enforcing a spatiotemporal constraint on the system response and its corresponding controller, as we explain using the chain example shown below. Figure 6 shows a diagram of the system response to a particular disturbance $(\boldsymbol{\delta}_x)_i$. In this figure, the vertical axis denotes the spatial coordinate of a state in the chain, and the horizontal axis denotes time: hence we refer to this figure as a space-time diagram.
Depicted are the three components of the spatiotemporal constraint, namely the communication delay imposed on the controller via the QI subspace SLC, the deadbeat response of the system to the disturbance imposed by the FIR SLC, and the localized region affected by the disturbance $(\boldsymbol{\delta}_x)_i$ imposed by the localized SLC. When the effect of each disturbance $(\boldsymbol{\delta}_x)_i$ can be localized within such a spatiotemporal SLC, the system is said to be localizable (cf. [2], [4]). It follows that the feasibility of a spatiotemporal constraint implies a more general notion of controllability (observability), wherein the system impulse response is constrained to be finite in both space and time, and the controller is subject to communication delays. Thus, rather than the traditional computational test of verifying the rank of a suitable controllability (observability) matrix, localizability is verified by checking the feasibility of a set of affine constraints.

Fig. 6. Space-time diagram for a single disturbance striking the chain described in Example 1, showing the sparsity (locality) constraint, the FIR horizon $T$, the communication delay, and the region affected by the disturbance $(\boldsymbol{\delta}_x)_i$.

G. Closed Loop Specifications

As in [10], our parameterization allows arbitrary performance constraints to be imposed on the closed loop response. In contrast to the method proposed in [10], these performance constraints can be combined with structural (i.e., localized spatiotemporal) constraints on the controller realization, naturally extending their applicability to the large-scale distributed setting. In the interest of completeness, we highlight some particularly useful SLCs here.

1) System Performance Constraints: Let $g(\cdot)$ be a functional of the system response. It then follows that all internally stabilizing controllers satisfying a performance level, as specified by a scalar $\gamma$, are given by transfer matrices $\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}$ satisfying the conditions of Theorem 2 and the SLC
$$g(\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}) \leq \gamma.$$
(31)

Further, recall that the sublevel set of a convex functional is a convex set; hence if $g$ is convex, then so is the SLC (31). A particularly useful choice of convex functional is
$$g(\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}) = \left\| \begin{bmatrix} C_1 & D_{12} \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} + D_{11} \right\|, \tag{32}$$
for a system norm $\|\cdot\|$, which is equivalent to the objective function of the decentralized optimal control problem (5). Thus, by imposing several performance SLCs (32) with different choices of norm, one can naturally formulate multi-objective optimal control problems.

2) Controller Robustness Constraints: Suppose that the controller is to be implemented using limited hardware, thus introducing non-negligible quantization (or other) errors to the internally computed signals: this can be modeled via an internal additive noise $\boldsymbol{\delta}_\beta$ in the controller structure (cf. Figure 3). In this case, we may wish to design a controller that further limits the effects of these perturbations on the system: to do so, we can impose a performance SLC on the closed loop transfer matrices specified in the rightmost column of Table I.

3) Controller Architecture Constraints: The controller implementation (18) also allows us to naturally control the number of actuators and sensors used by a controller; this can be useful when designing controllers for large-scale systems that use a limited number of hardware resources (cf. Section V-B3). In particular, recall that implementation (18) parameterizes stabilizing controllers that use all possible actuators and sensors. It then suffices to constrain the number of non-zero rows of the transfer matrix $[\tilde{\mathbf{M}}, \mathbf{L}]$ to limit the number of actuators used by the controller, and similarly, the number of non-zero columns of the transfer matrix $[\tilde{\mathbf{N}}^\top, \mathbf{L}^\top]^\top$ to limit the number of sensors used by the controller.
As stated, these constraints are non-convex, but recently proposed convex relaxations [40], [41] can be used in their stead to impose convex SLCs on the controller architecture.

4) Positivity Constraints: It has recently been observed that (internally) positive systems are amenable to efficient analysis and synthesis techniques (cf. [53] and the references therein). Therefore, it may be desirable to synthesize a controller that either preserves or enforces positivity of the resulting closed loop system. We can enforce this condition via the SLC that the elements
$$\left\{ \begin{bmatrix} C_1 & D_{12} \end{bmatrix} \begin{bmatrix} R[t] & N[t] \\ M[t] & L[t] \end{bmatrix} \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} \right\}_{t=1}^{\infty}$$
and the matrix $(D_{12} L[0] D_{21} + D_{11})$ are all element-wise nonnegative matrices. This SLC is easily seen to be convex.

V. SYSTEM LEVEL SYNTHESIS

We build on the results of the previous sections to formulate the SLS problem. We show that by combining appropriate SLPs and SLCs, the largest known class of convex structured optimal control problems can be formulated. As a special case, we show that we recover all possible structured optimal control problems of the form (5) that admit a convex representation in the Youla domain.

A. General Formulation

Let $g(\cdot)$ be a functional capturing a desired measure of the performance of the system (as described in Section IV-G1), and let $\mathcal{S}$ be a SLC. We then pose the SLS problem as
$$\begin{aligned} \underset{\{\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}\}}{\text{minimize}} \quad & g(\mathbf{R}, \mathbf{M}, \mathbf{N}, \mathbf{L}) \\ \text{subject to} \quad & \text{(16a)–(16c)}, \quad \begin{bmatrix} \mathbf{R} & \mathbf{N} \\ \mathbf{M} & \mathbf{L} \end{bmatrix} \in \mathcal{S}. \end{aligned} \tag{33}$$
For $g$ a convex functional and $\mathcal{S}$ a convex set,^12 the resulting SLS problem is a convex optimization problem.

Remark 7: For a state feedback problem, the SLS problem can be simplified to
$$\begin{aligned} \underset{\{\mathbf{R}, \mathbf{M}\}}{\text{minimize}} \quad & g(\mathbf{R}, \mathbf{M}) \\ \text{subject to} \quad & \text{(10a)–(10b)}, \quad \begin{bmatrix} \mathbf{R} \\ \mathbf{M} \end{bmatrix} \in \mathcal{S}. \end{aligned} \tag{34}$$

B. Examples of Convex SLS

Here we highlight some convex SLS problems. A more extensive list can be found in [54], [55].
1) Distributed Optimal Control: The distributed optimal control problem (5) with a QI subspace constraint C can be formulated as the SLS problem

minimize (32) subject to (16a)-(16c), L ∈ C.   (35)

Thus all distributed optimal control problems that can be formulated as convex optimization problems in the Youla domain are special cases of the convex SLS problem (33).

2) Localized LQG Control: In [2], [4] we posed and solved a localized LQG optimal control problem. In the case of a state-feedback problem [2], the resulting SLS problem is of the form

\begin{aligned} \underset{R, M}{\text{minimize}} \quad & \| C_1 R + D_{12} M \|_{H_2}^2 \\ \text{subject to} \quad & \text{(10a)-(10b)}, \quad \begin{bmatrix} R \\ M \end{bmatrix} \in C \cap L \cap F_T, \end{aligned}   (36)

for C a QI subspace SLC, L a sparsity SLC, and F_T a FIR SLC. The observation that we make in [2] (and extend to the output feedback setting in [4]) is that the localized SLS problem (36) can be decomposed into a set of independent sub-problems solving for the columns R_i and M_i of the transfer matrices R and M; as these problems are independent, they can be solved in parallel. Further, the sparsity constraint L restricts each sub-problem to a local subset of the system model and states, as specified by the nonzero components of the corresponding column of the transfer matrices R and M (e.g., as was described in Example 1), allowing each of these sub-problems to be expressed in terms of optimization variables (and corresponding sub-matrices of the state-space realization (16)) that are of significantly smaller dimension than the global system response {R, M}. Thus for a given feasible spatiotemporal SLC, the localized SLS problem (36) can be solved for arbitrarily large-scale systems, assuming that each sub-controller can solve its corresponding sub-problem in parallel.

¹² More generally, we only need the intersection of the set S and the restriction of the functional g to the affine subspace described in (16) to be convex.
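The column-wise separability underlying this decomposition can be sketched as follows (a toy system with illustrative values; each column sub-problem is trivial here because a deadbeat gain is used, but the decoupling pattern is the same in general):

```python
import numpy as np

# Sketch of the column-wise decoupling exploited by (36); the system and
# gain below are illustrative, and the deadbeat gain keeps each column
# response FIR with horizon 1. Constraint (10a) acts column by column:
#   R_i[1] = e_i,   R_i[t+1] = A R_i[t] + B2 M_i[t],
# so each (R_i, M_i) can be computed by an independent sub-problem.
A = np.array([[0.5, 0.2], [0.0, 0.5]])
B2 = np.eye(2)
F = -A                                  # deadbeat: A + B2 F = 0
n = A.shape[0]

def solve_column(i):
    e = np.eye(n)[:, [i]]               # i-th standard basis column
    R_i = [e]                           # R_i[1] = e_i
    M_i = [F @ e]                       # M_i[t] = F R_i[t]
    assert np.allclose(A @ R_i[-1] + B2 @ M_i[-1], 0)   # FIR termination
    return R_i, M_i

cols = [solve_column(i) for i in range(n)]   # independent, parallelizable
R1 = np.hstack([Ri[0] for Ri, _ in cols])    # reassemble R[1]
M1 = np.hstack([Mi[0] for _, Mi in cols])    # reassemble M[1]
assert np.allclose(R1, np.eye(n)) and np.allclose(M1, F)
print("column sub-problems reassemble to the global response")
```

In the localized setting, the sparsity SLC would additionally restrict each column sub-problem to a local neighborhood of the network, so that each worker only needs the corresponding sub-matrices of (A, B2).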
As far as we are aware, such constrained optimal control problems cannot be solved via convex programming using existing controller parameterizations in the literature. In our companion paper [47], we generalize all of these concepts to the system level approach to controller synthesis, and show that appropriate notions of separability for SLCs can be defined which allow for optimal controllers to be synthesized and implemented with order constant complexity (assuming parallel computation is available for each subproblem) relative to the global system size.

3) Regularization for Design: The regularization for design (RFD) framework [40], [41], [56], [57] explores tradeoffs between closed loop performance and architectural cost using convex programming by augmenting the objective function with a suitable convex regularizer that penalizes the use of actuators, sensors and communication links. To integrate RFD into the SLA, it suffices to add a suitable convex regularizer, as mentioned in Section IV-G3 and described in [5], [40], to the objective function of the SLS problem (33). We demonstrate the usefulness of combining RFD, locality and SLS in our companion paper [47].

C. Computational Complexity and Non-convex Optimization

A final advantage of the SLS problem (33) is that the computational complexity of the optimization problem is transparent: the complexity of solving (33) is determined by the type of the objective function g(·) and by the characterization of the intersection of the set S with the affine space (16a)-(16c). Further, when the SLS problem is non-convex, the direct nature of the formulation makes it straightforward to determine suitable convex relaxations or non-convex optimization techniques for the problem. In contrast, as discussed in [30], no general method exists to determine the computational complexity of the decentralized optimal control problem (5) for a general constraint set C.
VI. CONCLUSION

In this paper, we defined and analyzed the system level approach to controller synthesis, which consists of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs), and System Level Synthesis (SLS) problems. We showed that all achievable and stable system responses can be characterized via the SLPs given in Theorems 1 and 2. We further showed that these system responses could be used to parameterize the internally stabilizing controllers that achieve them, and proposed a novel controller implementation (18). We then argued that this novel controller implementation has the important benefit of allowing SLCs to be naturally imposed on it, and showed in Section IV that using this controller structure and SLCs, we can characterize the broadest known class of constrained internally stabilizing controllers that admit a convex representation. Finally, we combined SLPs and SLCs to formulate the SLS problem, and showed that it recovers as special cases many well studied constrained optimal controller synthesis problems from the literature. In our companion paper [47], we show how to use the system level approach to controller synthesis to co-design controllers, system responses and actuation, sensing and communication architectures for large-scale networked systems.

¹³ We also show how to co-design an actuation architecture and a feasible corresponding spatiotemporal constraint in [5], and so the assumption of a feasible spatiotemporal constraint is a reasonable one.

APPENDIX A
STABILIZABILITY AND DETECTABILITY

Lemma 6: The pair (A, B_2) is stabilizable if and only if the affine subspace defined by (10) is non-empty.

Proof: We first show that the stabilizability of (A, B_2) implies that there exist transfer matrices R, M ∈ \frac{1}{z} RH_\infty satisfying equation (10a).
From the definition of stabilizability, there exists a matrix F such that A + B_2 F is a stable matrix. Substituting the state feedback control law u = F x into (8), we have x = (zI - A - B_2 F)^{-1} δ_x and u = F (zI - A - B_2 F)^{-1} δ_x. The system response is therefore given by R = (zI - A - B_2 F)^{-1} and M = F (zI - A - B_2 F)^{-1}, which lie in \frac{1}{z} RH_\infty and are a solution to (10a).

For the opposite direction, we note that R, M ∈ RH_\infty implies that these transfer matrices have no poles in the region |z| ≥ 1. From (10a), we further observe that [zI - A, -B_2] is right invertible in the region where R and M have no poles, with [R^\top, M^\top]^\top being its right inverse. This implies that [zI - A, -B_2] has full row rank for all |z| ≥ 1, which is equivalent to the PBH test [58] for stabilizability, proving the claim.

We note that the analysis for the state feedback problem in Section III-A can be applied to the state estimation problem by considering the dual to a full control system (cf. §16.5 in [42]). For instance, the following corollary to Lemma 6 gives an alternative definition of the detectability of the pair (A, C_2) [6].

Corollary 2: The pair (A, C_2) is detectable if and only if the following conditions are feasible:

\begin{bmatrix} R & N \end{bmatrix} \begin{bmatrix} zI - A \\ -C_2 \end{bmatrix} = I   (37a)
R, N ∈ \frac{1}{z} RH_\infty.   (37b)

A parameterization of all detectable observers can be constructed using the affine subspace (37) in a manner analogous to that described above.

Lemma 7: The triple (A, B_2, C_2) is stabilizable and detectable if and only if the affine subspace described by (16) is non-empty.

Proof: This follows from a construction identical to that presented in the proof of Theorem 4, but now using stable transfer matrices with possibly infinite impulse responses.

REFERENCES

[1] Y.-S. Wang, N. Matni, S. You, and J. C.
Doyle, "Localized distributed state feedback control with communication delays," in Proc. 2014 IEEE Amer. Control Conf., June 2014, pp. 5748–5755.
[2] Y.-S. Wang, N. Matni, and J. C. Doyle, "Localized LQR optimal control," in Proc. 2014 53rd IEEE Conf. Decision Control, 2014, pp. 1661–1668.
[3] Y.-S. Wang and N. Matni, "Localized distributed optimal control with output feedback and communication delays," in IEEE 52nd Annual Allerton Conference on Communication, Control, and Computing, 2014, pp. 605–612.
[4] ——, "Localized LQG optimal control for large-scale systems," in Proc. 2016 IEEE Amer. Control Conf., 2016, pp. 1954–1961.
[5] Y.-S. Wang, N. Matni, and J. C. Doyle, "Localized LQR control with actuator regularization," in Proc. 2016 IEEE Amer. Control Conf., 2016, pp. 5205–5212.
[6] Y.-S. Wang, S. You, and N. Matni, "Localized distributed Kalman filters for large-scale systems," in 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems, vol. 48, no. 22, 2015, pp. 52–57.
[7] Y.-S. Wang, N. Matni, and J. C. Doyle, "System level parameterizations, constraints and synthesis," in Proc. 2017 Amer. Control Conf., May 2017, pp. 1308–1315.
[8] D. C. Youla, H. A. Jabr, and J. J. Bongiorno, Jr., "Modern Wiener-Hopf design of optimal controllers, Part II: The multivariable case," IEEE Trans. Autom. Control, vol. 21, no. 3, pp. 319–338, 1976.
[9] M. Vidyasagar, Control System Synthesis: A Factorization Approach, Part II. Morgan & Claypool, 2011.
[10] S. Boyd and C. Barratt, Linear Controller Design: Limits of Performance. Prentice-Hall, 1991.
[11] M. A. Dahleh and I. J. Diaz-Bobillo, Control of Uncertain Systems: A Linear Programming Approach. Prentice-Hall, Inc., 1994.
[12] J. Y. Ishihara and R. M. Sales, "Parametrization of admissible controllers for generalized Rosenbrock systems," in Proc. 2000 39th IEEE Conf. Decision Control, vol. 5, 2000, pp. 5014–5019.
[13] H. H.
Rosenbrock, Computer Aided Control System Design. Academic Press, 1974.
[14] J. C. Willems and J. W. Polderman, Introduction to Mathematical Systems Theory: A Behavioral Approach. Springer Science & Business Media, 2013, vol. 26.
[15] J. C. Willems and H. L. Trentelman, "Synthesis of dissipative systems using quadratic differential forms: Part I," IEEE Trans. Autom. Control, vol. 47, no. 1, pp. 53–69, 2002.
[16] H. L. Trentelman and J. C. Willems, "Synthesis of dissipative systems using quadratic differential forms: Part II," IEEE Trans. Autom. Control, vol. 47, no. 1, pp. 70–86, 2002.
[17] C. Praagman, H. L. Trentelman, and R. Z. Yoe, "On the parametrization of all regularly implementing and stabilizing controllers," SIAM Journal on Control and Optimization, vol. 45, no. 6, pp. 2035–2053, 2007.
[18] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, "State-space solutions to standard H_2 and H_∞ control problems," IEEE Trans. Autom. Control, vol. 34, no. 8, pp. 831–847, Aug. 1989.
[19] Y.-C. Ho and K.-C. Chu, "Team decision theory and information structures in optimal control problems, Part I," IEEE Trans. Autom. Control, vol. 17, no. 1, pp. 15–22, 1972.
[20] A. Mahajan, N. Martins, M. Rotkowitz, and S. Yuksel, "Information structures in optimal decentralized control," in Proc. 2012 51st IEEE Conf. Decision Control, 2012, pp. 1291–1306.
[21] M. Rotkowitz and S. Lall, "A characterization of convex problems in decentralized control," IEEE Trans. Autom. Control, vol. 51, no. 2, pp. 274–286, 2006.
[22] B. Bamieh, F. Paganini, and M. A. Dahleh, "Distributed control of spatially invariant systems," IEEE Trans. Autom. Control, vol. 47, no. 7, pp. 1091–1107, 2002.
[23] B. Bamieh and P. G. Voulgaris, "A convex characterization of distributed control problems in spatially invariant systems with communication constraints," Systems & Control Letters, vol. 54, no. 6, pp. 575–583, 2005.
[24] A. Nayyar, A.
Mahajan, and D. Teneketzis, "Decentralized stochastic control with partial history sharing: A common information approach," IEEE Trans. Autom. Control, vol. 58, no. 7, pp. 1644–1658, July 2013.
[25] H. S. Witsenhausen, "A counterexample in stochastic optimum control," SIAM Journal of Control, vol. 6, no. 1, pp. 131–147, 1968.
[26] J. N. Tsitsiklis and M. Athans, "On the complexity of decentralized decision making and detection problems," in Proc. 1984 23rd IEEE Conf. Decision Control, 1984, pp. 1638–1641.
[27] X. Qi, M. V. Salapaka, P. G. Voulgaris, and M. Khammash, "Structured optimal and robust control with multiple criteria: A convex solution," IEEE Trans. Autom. Control, vol. 49, no. 10, pp. 1623–1640, 2004.
[28] G. E. Dullerud and R. D'Andrea, "Distributed control of heterogeneous systems," IEEE Trans. Autom. Control, vol. 49, no. 12, pp. 2113–2128, 2004.
[29] M. Rotkowitz, R. Cogill, and S. Lall, "Convexity of optimal control over networks with delays and arbitrary topology," Int. J. Syst., Control Commun., vol. 2, no. 1/2/3, pp. 30–54, Jan. 2010.
[30] L. Lessard and S. Lall, "Convexity of decentralized controller synthesis," IEEE Trans. Autom. Control, vol. 61, no. 10, pp. 3122–3127, 2016.
[31] ——, "Optimal controller synthesis for the decentralized two-player problem with output feedback," in Proc. 2012 IEEE Amer. Control Conf., June 2012, pp. 6314–6321.
[32] P. Shah and P. A. Parrilo, "H_2-optimal decentralized control over posets: A state space solution for state-feedback," in Proc. 2010 49th IEEE Conf. Decision Control, 2010, pp. 6722–6727.
[33] A. Lamperski and J. C. Doyle, "Output feedback H_2 model matching for decentralized systems with delays," in Proc. 2013 IEEE Amer. Control Conf., June 2013, pp. 5778–5783.
[34] L. Lessard, M. Kristalny, and A. Rantzer, "On structured realizability and stabilizability of linear systems," in Proc. 2013 IEEE Amer. Control Conf., June 2013, pp. 5784–5790.
[35] C.
W. Scherer, "Structured H_∞-optimal control for nested interconnections: A state-space solution," Systems and Control Letters, vol. 62, pp. 1105–1113, 2013.
[36] L. Lessard, "State-space solution to a minimum-entropy H_∞-optimal control problem with a nested information constraint," in Proc. 2014 53rd IEEE Conf. Decision Control, 2014, pp. 4026–4031.
[37] N. Matni, "Distributed control subject to delays satisfying an H_∞ norm bound," in Proc. 2014 53rd IEEE Conf. Decision Control, 2014, pp. 4006–4013.
[38] T. Tanaka and P. A. Parrilo, "Optimal output feedback architecture for triangular LQG problems," in Proc. 2014 IEEE Amer. Control Conf., June 2014, pp. 5730–5735.
[39] A. Lamperski and L. Lessard, "Optimal decentralized state-feedback control with sparsity and delays," Automatica, vol. 58, pp. 143–151, 2015.
[40] N. Matni and V. Chandrasekaran, "Regularization for design," IEEE Trans. Autom. Control, vol. 61, no. 12, pp. 3991–4006, 2016.
[41] ——, "Regularization for design," in Proc. 53rd IEEE Conf. Decision Control, Dec. 2014, pp. 1111–1118.
[42] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control. Prentice Hall, New Jersey, 1996.
[43] L. Lessard and S. Lall, "Quadratic invariance is necessary and sufficient for convexity," in Proc. 2011 IEEE Amer. Control Conf., 2011, pp. 5360–5362.
[44] Ş. Sabău and N. C. Martins, "Youla-like parametrizations subject to QI subspace constraints," IEEE Trans. Autom. Control, vol. 59, no. 6, pp. 1411–1422, 2014.
[45] D. E. Rivera, M. Morari, and S. Skogestad, "Internal model control: PID controller design," Industrial & Engineering Chemistry Process Design and Development, vol. 25, no. 1, pp. 252–265, 1986.
[46] C. E. Garcia and M. Morari, "Internal model control. A unifying review and some new results," Industrial & Engineering Chemistry Process Design and Development, vol. 21, no. 2, pp. 308–323, 1982.
[47] Y.-S. Wang, N. Matni, and J. C.
Doyle, "Separable and localized system level synthesis for large-scale systems," IEEE Trans. Autom. Control, vol. 63, no. 12, pp. 4234–4249, 2018.
[48] A. Lamperski and J. C. Doyle, "The H_2 control problem for quadratically invariant systems with delays," IEEE Trans. Autom. Control, vol. 60, no. 7, pp. 1945–1950, 2015.
[49] B. Leden, "Multivariable dead-beat control," Automatica, vol. 13, no. 2, pp. 185–188, 1977.
[50] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems. Wiley-Interscience, New York, 1972, vol. 1.
[51] J. O'Reilly, "The discrete linear time invariant time-optimal control problem: an overview," Automatica, vol. 17, no. 2, pp. 363–370, 1981.
[52] N. Matni, Y.-S. Wang, and J. Anderson, "Scalable system level synthesis for virtually localizable systems," in Proc. 2017 56th IEEE Conf. Decision Control, 2017, pp. 3473–3480.
[53] A. Rantzer, "Scalable control of positive systems," European Journal of Control, vol. 24, pp. 72–80, 2015.
[54] Y.-S. Wang, "A system level approach to optimal controller design for large-scale distributed systems," Ph.D. dissertation, California Institute of Technology, 2016.
[55] J. C. Doyle, N. Matni, Y.-S. Wang, J. Anderson, and S. Low, "System level synthesis: A tutorial," in Proc. 2017 56th IEEE Conf. Decision Control, 2017, pp. 2856–2867.
[56] N. Matni, "Communication delay co-design in H_2 distributed control using atomic norm minimization," IEEE Trans. Control Netw. Syst., vol. 4, no. 2, pp. 267–278, 2017.
[57] ——, "Communication delay co-design in H_2 decentralized control using atomic norm minimization," in Proc. 2013 52nd IEEE Conf. Decision Control, Dec. 2013, pp. 6522–6529.
[58] G. E. Dullerud and F. Paganini, A Course in Robust Control Theory: A Convex Approach. Springer-Verlag, 2000.

Yuh-Shyang Wang (M'10) received the B.S.
degree in electrical engineering from National Taiwan University, Taipei, Taiwan, in 2011, and the Ph.D. degree in control and dynamical systems from Caltech, Pasadena, CA, USA, in 2016 under the advisement of John C. Doyle. He is currently a Research Engineer at GE Global Research Center, Niskayuna, NY, USA. His research interests include optimization, control, and machine learning for industrial cyber-physical systems and renewable energy systems. Dr. Wang was the recipient of the 2017 ACC Best Student Paper Award.

Nikolai Matni (M'08) received the B.A.Sc. and M.A.Sc. degrees in electrical engineering from the University of British Columbia, Vancouver, BC, Canada, in 2008 and 2010, respectively, and the Ph.D. degree in control and dynamical systems from the California Institute of Technology, Pasadena, CA, USA, in June 2016 under the advisement of John C. Doyle. He is currently a Postdoctoral Scholar in Electrical Engineering & Computer Sciences, UC Berkeley, Berkeley, CA, USA. His research interests include the use of learning, layering, dynamics, control and optimization in the design and analysis of complex cyber-physical systems. Dr. Matni received the IEEE CDC 2013 Best Student Paper Award, the IEEE ACC 2017 Best Student Paper Award (as co-advisor), and was an Everhart Lecture Series speaker at Caltech.

John C. Doyle received the B.S. and M.S. degrees in electrical engineering from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 1977, and the Ph.D. degree in mathematics from UC Berkeley, Berkeley, CA, USA, in 1984. He is currently the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bio-Engineering, Caltech, Pasadena, CA, USA.
His research interests include mathematical foundations for complex networks with applications in biology, technology, medicine, ecology, neuroscience, and multiscale physics, integrating theory from control, computation, communication, optimization, and statistics (e.g., machine learning). Dr. Doyle received the 1990 IEEE Baker Prize (for all IEEE publications), was listed among the world's top 10 most important papers in mathematics 1981–1993, and received the IEEE Automatic Control Transactions Award (twice, 1998 and 1999), the 1994 AACC American Control Conference Schuck Award, the 2004 ACM SIGCOMM Paper Prize and its 2016 Test of Time Award, and inclusion in Best Writing on Mathematics 2010. His individual awards include the 1977 IEEE Power Hickernell Award, the 1983 AACC Eckman Award, the 1984 UC Berkeley Friedman Award, the 1984 IEEE Centennial Outstanding Young Engineer Award (a one-time award for the IEEE's 100th anniversary), and the 2004 IEEE Control Systems Field Award.
