A Formal Approach to Physics-Based Attacks in Cyber-Physical Systems (Extended Version)



Ruggero Lanotte¹, Massimo Merro², Andrei Munteanu², and Luca Viganò³

¹ DISUIT, Università dell'Insubria, Italy, ruggero.lanotte@uninsubria.it
² Dipartimento di Informatica, Università degli Studi di Verona, Italy, {massimo.merro,andrei.munteanu}@univr.it
³ Department of Informatics, King's College London, UK, luca.vigano@kcl.ac.uk

May 25, 2021

Abstract. We apply formal methods to lay and streamline theoretical foundations to reason about Cyber-Physical Systems (CPSs) and physics-based attacks, i.e., attacks targeting physical devices. We focus on a formal treatment of both integrity and denial of service attacks to sensors and actuators of CPSs, and on the timing aspects of these attacks. Our contributions are fourfold. (1) We define a hybrid process calculus to model both CPSs and physics-based attacks. (2) We formalise a threat model that specifies MITM attacks that can manipulate sensor readings or control commands in order to drive a CPS into an undesired state; we group these attacks into classes, and provide the means to assess attack tolerance/vulnerability with respect to a given class of attacks, based on a proper notion of most powerful physics-based attack. (3) We formalise how to estimate the impact of a successful attack on a CPS and investigate possible quantifications of the success chances of an attack. (4) We illustrate our definitions and results by formalising a non-trivial running example in Uppaal SMC, the statistical extension of the Uppaal model checker; we use Uppaal SMC as an automatic tool for carrying out a static security analysis of our running example in isolation and when exposed to three different physics-based attacks with different impacts.
1 Introduction

1.1 Context and motivation

Cyber-Physical Systems (CPSs) are integrations of networking and distributed computing systems with physical processes that monitor and control entities in a physical environment, with feedback loops where physical processes affect computations and vice versa. For example, in real-time control systems, a hierarchy of sensors, actuators and control components are connected to control stations. In recent years there has been a dramatic increase in the number of attacks to the security of CPSs, e.g., by manipulating sensor readings and, in general, influencing physical processes to bring the system into a state desired by the attacker. Some notorious examples are: (i) the STUXnet worm, which reprogrammed PLCs of nuclear centrifuges in Iran [35]; (ii) the attack on a sewage treatment facility in Queensland, Australia, which manipulated the SCADA system to release raw sewage into local rivers and parks [53]; (iii) the BlackEnergy cyber-attack on the Ukrainian power grid, again compromising the SCADA system [31]. A common aspect of these attacks is that they all compromised safety-critical systems, i.e., systems whose failures may cause catastrophic consequences. Thus, as stated in [24, 25], the concern for consequences at the physical level sets CPS security apart from standard information security, and demands ad hoc solutions to properly address such novel research challenges. These ad hoc solutions must explicitly take into consideration a number of issues specific to attacks tailored for CPSs.

(*) This document extends the paper "A Formal Approach to Physics-Based Attacks in Cyber-Physical Systems" that will appear in ACM Transactions on Privacy and Security by providing proofs that are worked out in full detail.
One main critical issue is the timing of the attack: the physical state of a system changes continuously over time and, as the system evolves, some states might be more vulnerable to attacks than others [33]. For example, an attack launched when the target state variable reaches a local maximum (or minimum) may have a great impact on the whole system behaviour, whereas the system might be able to tolerate the same attack if launched when that variable is far from its local maximum or minimum [34]. Furthermore, not only the timing but also the duration of the attack is an important parameter to be taken into consideration in order to achieve a successful attack. For example, it may take minutes for a chemical reactor to rupture [56], hours to heat a tank of water or burn out a motor, and days to destroy centrifuges [35].

Much progress has been made in recent years in developing formal approaches to aid the safety verification of CPSs (e.g., [28, 18, 19, 49, 48, 4], to name a few). However, there is still a relatively small number of works that use formal methods in the context of the security analysis of CPSs (e.g., [11, 10, 60, 50, 46, 1, 8, 58]). In this respect, to the best of our knowledge, a systematic formal approach to study physics-based attacks, that is, attacks targeting the physical devices (sensors and actuators) of CPSs, is still to be fully developed. Our paper moves in this direction by relying on a process calculus approach.

1.2 Background

The dynamic behaviour of the physical plant of a CPS is often represented by means of a discrete-time state-space model¹ consisting of two equations of the form

  x_{k+1} = A x_k + B u_k + w_k
  y_k = C x_k + e_k

where x_k ∈ R^n is the current (physical) state, u_k ∈ R^m is the input (i.e., the control actions implemented through actuators) and y_k ∈ R^p is the output (i.e., the measurements from the sensors).
The uncertainty w_k ∈ R^n and the measurement error e_k ∈ R^p represent perturbation and sensor noise, respectively, and A, B, and C are matrices modelling the dynamics of the physical system. Here, the next state x_{k+1} depends on the current state x_k and the corresponding control actions u_k, at the sampling instant k ∈ N. The state x_k cannot be directly observed: only its measurements y_k can be observed. The physical plant is supported by a communication network through which the sensor measurements and actuator data are exchanged with controller(s) and supervisor(s) (e.g., IDSs), which are the cyber components (also called logics) of a CPS.

¹ See [62, 63] for a taxonomy of the time-scale models used to represent CPSs.

[Figure 1: MITM attacks to sensor readings and control commands. The plant and the logics exchange the measurements y_k and the commands u_k, which the attacker may replace with y_k^a and u_k^a; w_k and e_k denote the uncertainty and the sensor error.]

1.3 Contributions

In this paper, we focus on a formal treatment of both integrity and Denial of Service (DoS) attacks to physical devices (sensors and actuators) of CPSs, paying particular attention to the timing aspects of these attacks. The overall goal of the paper is to apply formal methodologies to lay theoretical foundations to reason about and formally detect attacks to physical devices of CPSs. A straightforward utilisation of these methodologies is for model-checking (as, e.g., in [19]) or monitoring (as, e.g., in [4]) in order to be able to verify security properties of CPSs either before system deployment or, when static analysis is not feasible, at runtime to promptly detect undesired behaviours. In other words, we aim at providing an essential stepping stone for formal and automated analysis techniques for checking the security of CPSs (rather than for providing defence techniques, i.e., mitigation [45]). Our contribution is fourfold.
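The two background equations above can be made concrete with a minimal, stand-alone sketch. The function and variable names, and the scalar example below, are ours and not from the paper; the uncertainty w_k and the measurement error e_k are passed in as given noise samples.

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v (list of reals)."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def vadd(*vs):
    """Component-wise sum of equally sized vectors."""
    return [sum(xs) for xs in zip(*vs)]

def step(A, B, C, x_k, u_k, w_k, e_k):
    """One sampling instant of the discrete-time state-space model:
       x_{k+1} = A x_k + B u_k + w_k   and   y_k = C x_k + e_k."""
    y_k = vadd(matvec(C, x_k), e_k)                       # noisy measurements
    x_next = vadd(matvec(A, x_k), matvec(B, u_k), w_k)    # next physical state
    return x_next, y_k
```

For instance, with A = [[1.0]], B = [[0.5]], C = [[1.0]], state [2.0], command [1.0], and noise samples [0.1] and [-0.05], `step` returns the next state [2.6] and the measurement [1.95].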
The first contribution is the definition of a hybrid process calculus, called CCPSA, to formally specify both CPSs and physics-based attacks. In CCPSA, CPSs have two components:

• a physical component denoting the physical plant (also called environment) of the system, and containing information on state variables, actuators, sensors, evolution law, etc., and

• a cyber component that governs access to sensors and actuators, and channel-based communication with other cyber components.

Thus, channels are used for logical interactions between cyber components, whereas sensors and actuators make possible the interaction between cyber and physical components. CCPSA adopts a discrete notion of time [27] and is equipped with a labelled transition semantics (LTS) that allows us to observe both physical events (system deadlock and violations of safety conditions) and cyber events (channel communications). Based on our LTS, we define two compositional trace-based system preorders: a deadlock-sensitive trace preorder, ⊑, and a timed variant, ⊑_{m..n}, for m ∈ N+ and n ∈ N+ ∪ {∞}, which takes into account discrepancies of execution traces within the discrete time interval m..n. Intuitively, given two CPSs Sys1 and Sys2, we write Sys1 ⊑_{m..n} Sys2 if Sys2 simulates the execution traces of Sys1, except for the time interval m..n; if n = ∞ then the simulation only holds for the first m − 1 time slots.

As a second contribution, we formalise a threat model that specifies man-in-the-middle (MITM) attacks that can manipulate sensor readings or control commands in order to drive a CPS into an undesired state [55].² Without loss of generality, MITM attacks targeting physical devices (sensors or actuators) can be assimilated to physical attacks, i.e., those attacks that directly compromise physical devices (e.g., electromagnetic attacks).
As depicted in Figure 1, our attacks may directly affect the sensor measurements or the controller commands:

• Attacks on sensors consist of reading and eventually replacing y_k (the sensor measurements) with y_k^a.

• Attacks on actuators consist of reading, dropping and eventually replacing the controller commands u_k with u_k^a, directly affecting the actions the actuators may execute.

We group attacks into classes. A class of attacks takes into account both the potential malicious activities I on physical devices and the timing parameters m and n of the attack: the begin and the end of the attack. We represent a class C as a total function C ∈ [I → P(m..n)]. Intuitively, for ι ∈ I, C(ι) ⊆ m..n denotes the set of time instants when an attack of class C may tamper with the device ι. In order to make security assessments on our CPSs, we adopt a well-known approach called Generalized Non Deducibility on Composition (GNDC) [17]. Thus, in our calculus CCPSA, we say that a CPS Sys tolerates an attack A if

  Sys ‖ A ⊑ Sys.

In this case, the presence of the attack A does not change the (physical and logical) observable behaviour of the system Sys, and the attack can be considered harmless. On the other hand, we say that a CPS Sys is vulnerable to an attack A of class C ∈ [I → P(m..n)] if there is a time interval m′..n′ in which the attack becomes observable (obviously, m′ ≥ m). Formally, we write:

  Sys ‖ A ⊑_{m′..n′} Sys.

We provide sufficient criteria to prove attack tolerance/vulnerability to attacks of an arbitrary class C.

² Note that we focus on attackers who have already entered the CPS, and we do not consider how they gained access to the system, e.g., by attacking an Internet-accessible controller or one of the communication protocols as a Dolev-Yao-style attacker [16] would do.
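Since a class C ∈ [I → P(m..n)] is just a total function from malicious activities on devices to sets of time instants, it can be sketched directly as data. The encoding below is ours, not the paper's: devices are strings, and `weaker` gives a pointwise-subset reading of "weaker class" (footnote 3), which is only our assumption about that informal notion.

```python
def attack_class(window, spec):
    """Build a class of attacks as a mapping from (attacked) devices to the
    time instants at which tampering may occur, checking that every instant
    lies within the attack window m..n."""
    m, n = window
    allowed = set(range(m, n + 1))
    assert all(set(ts) <= allowed for ts in spec.values()), "instant outside m..n"
    return {dev: frozenset(ts) for dev, ts in spec.items()}

def weaker(c1, c2):
    """c1 is weaker than c2 if every tampering that c1 allows is also allowed
    by c2 (a pointwise-subset reading of 'weaker class'; our assumption)."""
    return all(dev in c2 and c1[dev] <= c2[dev] for dev in c1)
```

For example, a class that may tamper with one sensor only at instant 4 is weaker than a class that may tamper with that sensor and an actuator at any instant in 3..5.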
We define a notion of most powerful physics-based attack of a given class C, Top(C), and prove that if a CPS tolerates Top(C) then it tolerates all attacks A of class C (and of any weaker class³). Similarly, if a CPS is vulnerable to Top(C) in the time interval m′..n′, then no attacks of class C (or weaker) can affect the system outside of that time interval. This is very useful when checking for attack tolerance/vulnerability with respect to all attacks of a given class C.

As a third contribution, we formalise how to estimate the impact of a successful attack on a CPS. As expected, risk assessment in industrial CPSs is a crucial phase preceding any defence strategy implementation [54]. The objective of this phase is to prioritise among vulnerabilities; this is done based on the likelihood that vulnerabilities are exploited, and the impact on the system under attack if exploitation occurs. In this manner, resources can then be focused on preventing the most critical vulnerabilities [44]. We provide a metric to estimate the maximum perturbation introduced in the system under attack with respect to its genuine behaviour, according to its evolution law and the uncertainty of the model. Then, we prove that the impact of the most powerful attack Top(C) represents an upper bound for the impact of any attack A of class C (or weaker).

Finally, as a fourth contribution, we formalise a running example in Uppaal SMC [15], the statistical extension of the Uppaal model checker [5] supporting the analysis of systems expressed as compositions of timed and/or probabilistic automata. Our goal is to test Uppaal SMC as an automatic tool for the static security analysis of a simple but significant CPS exposed to a number of different physics-based attacks with different impacts on the system under attack.
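The paper's impact metric is defined on the calculus itself; purely as an illustration of the underlying idea (maximum perturbation of the attacked run with respect to the genuine one), the following computes the maximum pointwise deviation between two trajectories. Function and argument names are ours, not the paper's.

```python
def impact(genuine, attacked):
    """Illustrative impact estimate: the maximum deviation, over time and
    over state variables, of the attacked run from the genuine one.
    Trajectories are given as lists of state vectors (lists of reals)."""
    return max(
        max(abs(a - g) for a, g in zip(sa, sg))   # worst variable at this instant
        for sa, sg in zip(attacked, genuine)      # worst instant overall
    )
```

On this reading, an upper bound on the impact of Top(C) is also an upper bound on the impact of any attack of class C or weaker, since weaker attacks can only deviate less.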
Here, we wish to remark that while we have kept our running example simple, it is actually non-trivial and designed to describe a wide number of attacks, as will become clear below. This paper extends and supersedes a preliminary conference version that appeared in [39]. The Uppaal SMC models of our system and the attacks that we have found are available at the repository https://bitbucket.org/AndreiMunteanu/cps_smc/src/.

1.4 Organisation

In Section 2, we give the syntax and semantics of CCPSA. In Section 3, we provide our running example and its formalisation in Uppaal SMC. In Section 4, we first define our threat model for physics-based attacks, then we use Uppaal SMC to carry out a security analysis of our running example when exposed to three different attacks, and, finally, we provide sufficient criteria for attack tolerance/vulnerability, based on a proper notion of most powerful attack. In Section 5, we estimate the impact of attacks on CPSs and prove that the most powerful attack of a given class has the maximum impact with respect to all attacks of the same class (or of a weaker one). In Section 6, we draw conclusions and discuss related and future work.

2 The Calculus

In this section, we introduce our Calculus of Cyber-Physical Systems and Attacks, CCPSA, which extends the Calculus of Cyber-Physical Systems, defined in our companion papers [37, 41], with specific features to formalise and study attacks to physical devices. Let us start with some preliminary notation.

³ Intuitively, attacks of classes weaker than C can do less with respect to attacks of class C.

2.1 Syntax of CCPSA

Notation 1. We use x, x_k for state variables (associated to physical states of systems), c, d for communication channels, a, a_k for actuator devices, and s, s_k for sensor devices. Actuator names are metavariables for actuator devices like valve, light, etc.
Similarly, sensor names are metavariables for sensor devices, e.g., a sensor thermometer that measures a state variable called temperature, with a given precision. Values, ranged over by v, v′, w, are built from basic values, such as Booleans, integers and real numbers; they also include names. Given a generic set of names N, we write R^N to denote the set of functions assigning a real value to each name in N. For ξ ∈ R^N, n ∈ N and v ∈ R, we write ξ[n ↦ v] to denote the function ψ ∈ R^N such that ψ(m) = ξ(m), for any m ≠ n, and ψ(n) = v. Given two generic functions ξ1 and ξ2 with disjoint domains N1 and N2, respectively, we denote with ξ1 ∪ ξ2 the function such that (ξ1 ∪ ξ2)(n) = ξ1(n), if n ∈ N1, and (ξ1 ∪ ξ2)(n) = ξ2(n), if n ∈ N2.

In general, a cyber-physical system consists of: (i) a physical component (defining state variables, physical devices, physical evolution, etc.) and (ii) a cyber (or logical) component that interacts with the physical devices (sensors and actuators) and communicates with other cyber components of the same or of other CPSs. Physical components in CCPSA are given by two sub-components: (i) the physical state, which is supposed to change at runtime, and (ii) the physical environment, which contains static information.⁴

Definition 1 (Physical state). Let X be a set of state variables, S be a set of sensors, and A be a set of actuators. A physical state S is a triple ⟨ξ_x, ξ_s, ξ_a⟩, where:

• ξ_x ∈ R^X is the state function,
• ξ_s ∈ R^S is the sensor function,
• ξ_a ∈ R^A is the actuator function.

All functions defining a physical state are total.
The state function ξ_x returns the current value associated to each variable in X, the sensor function ξ_s returns the current value associated to each sensor in S, and the actuator function ξ_a returns the current value associated to each actuator in A.

Definition 2 (Physical environment). Let X be a set of state variables, S be a set of sensors, and A be a set of actuators. A physical environment E is a 6-tuple ⟨evol, meas, inv, safe, ξ_w, ξ_e⟩, where:

• evol : R^X × R^A × R^X → 2^{R^X} is the evolution map,
• meas : R^X × R^S → 2^{R^S} is the measurement map,
• inv ∈ 2^{R^X} is the invariant set,
• safe ∈ 2^{R^X} is the safety set,
• ξ_w ∈ R^X is the uncertainty function,
• ξ_e ∈ R^S is the sensor-error function.

All functions defining a physical environment are total.

⁴ Actually, this information is periodically updated (say, every six months) to take into account possible drifts of the system.

The evolution map evol models the evolution law of the physical system, where changes made on actuators may reflect on state variables. Given a state function, an actuator function, and an uncertainty function, the evolution map evol returns the set of next admissible state functions. Since we assume an uncertainty in our models, evol does not return a single state function but a set of possible state functions. The measurement map meas returns the set of next admissible sensor functions based on the current state function. Since we assume error-prone sensors, meas does not return a single sensor function but a set of possible sensor functions. The invariant set inv represents the set of state functions that satisfy the invariant of the system. A CPS that gets into a physical state with a state function that does not satisfy the invariant is in deadlock. Similarly, the safety set safe represents the set of state functions that satisfy the safety conditions of the system.
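Definitions 1 and 2 can be transliterated into data structures. The sketch below is our illustrative encoding only: functions in R^N become dicts from names to reals, the set-valued maps evol and meas are approximated by interval bounds per name (the paper allows arbitrary sets), and the sets inv and safe become membership predicates.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]        # a function in R^N, as name -> real value

@dataclass
class PhysicalState:            # the triple <xi_x, xi_s, xi_a> of Definition 1
    xi_x: State                 # state function
    xi_s: State                 # sensor function
    xi_a: State                 # actuator function

@dataclass
class PhysicalEnv:              # the 6-tuple of Definition 2 (interval encoding)
    evol: Callable[[State, State, State], Dict[str, tuple]]  # next-state bounds
    meas: Callable[[State, State], Dict[str, tuple]]         # measurement bounds
    inv: Callable[[State], bool]     # invariant set, as a predicate
    safe: Callable[[State], bool]    # safety set, as a predicate
    xi_w: State                      # uncertainty function
    xi_e: State                      # sensor-error function
```

A toy water-tank instance, for example, would map the variable "temp" to the interval between temp − ξ_w(temp) and temp + heat + ξ_w(temp) in evol.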
Intuitively, if a CPS gets into an unsafe state, then its functionality may get compromised. The uncertainty function ξ_w returns the uncertainty (or accuracy) associated to each state variable. Thus, given a state variable x ∈ X, ξ_w(x) returns the maximum distance between the real value of x, in an arbitrary moment in time, and its representation in the model. For ξ_w, ξ′_w ∈ R^X, we will write ξ_w ≤ ξ′_w if ξ_w(x) ≤ ξ′_w(x), for any x ∈ X. The evolution map evol is obviously monotone with respect to uncertainty: if ξ_w ≤ ξ′_w then evol(ξ_x, ξ_a, ξ_w) ⊆ evol(ξ_x, ξ_a, ξ′_w). Finally, the sensor-error function ξ_e returns the maximum error associated to each sensor in S.

Let us now define formally the cyber component of a CPS in CCPSA. Our (logical) processes build on Hennessy and Regan's Timed Process Language TPL [27], basically CCS enriched with a discrete notion of time. We extend TPL with two main ingredients:

• two constructs to read values detected at sensors and write values on actuators, respectively;
• special constructs to represent malicious activities on physical devices.

The remaining constructs are the same as those of TPL.

Definition 3 (Processes). Processes are defined as follows:

  P, Q ::= nil | tick.P | P ‖ Q | π.P | φ.P | ⌊µ.P⌋Q | if (b) {P} else {Q} | P \ c | H⟨w̃⟩
  π ::= rcv c(x) | snd c⟨v⟩
  φ ::= read s(x) | write a⟨v⟩
  µ ::= sniff s(x) | drop a(x) | forge p⟨v⟩

We write nil for the terminated process. The process tick.P sleeps for one time unit and then continues as P. We write P ‖ Q to denote the parallel composition of concurrent threads P and Q. The process π.P denotes channel transmission. The construct φ.P denotes activities on physical devices, i.e., sensor reading and actuator writing.
The process ⌊µ.P⌋Q denotes MITM malicious activities under timeout targeting physical devices (sensors and actuators). More precisely, we support sensor sniffing, drop of actuator commands, and integrity attacks on data coming from sensors and addressed to actuators. Thus, for instance, ⌊drop a(x).P⌋Q drops a command on the actuator a supplied by the controller in the current time slot; otherwise, if there are no commands on a, it moves to the next time slot and evolves into Q. The process P \ c is the channel restriction operator of CCS. We sometimes write P \ {c1, c2, ..., cn} to mean P \ c1 \ c2 · · · \ cn. The process if (b) {P} else {Q} is the standard conditional, where b is a decidable guard. In processes of the form tick.Q and ⌊µ.P⌋Q, the occurrence of Q is said to be time-guarded. The process H⟨w̃⟩ denotes (guarded) recursion. We assume a set of process identifiers ranged over by H, H1, H2. We write H⟨w1, ..., wk⟩ to denote a recursive process H defined via an equation H(x1, ..., xk) = P, where (i) the tuple x1, ..., xk contains all the variables that appear free in P, and (ii) P contains only guarded occurrences of the process identifiers, such as H itself. We say that recursion is time-guarded if P contains only time-guarded occurrences of the process identifiers. Unless explicitly stated, our recursive processes are always time-guarded.

In the constructs rcv c(x).P, read s(x).P, ⌊sniff s(x).P⌋Q and ⌊drop a(x).P⌋Q the variable x is said to be bound. This gives rise to the standard notions of free/bound (process) variables and α-conversion. A term is closed if it does not contain free variables, and we assume to always work with closed processes: the absence of free variables is preserved at run-time.
As further notation, we write T{v/x} for the substitution of all occurrences of the free variable x in T with the value v.

Everything is in place to provide the definition of cyber-physical systems expressed in CCPSA.

Definition 4 (Cyber-physical system). Fixed a set of state variables X, a set of sensors S, and a set of actuators A, a cyber-physical system in CCPSA is given by two main components:

• a physical component consisting of
  – a physical environment E defined on X, S, and A, and
  – a physical state S recording the current values associated to the state variables in X, the sensors in S, and the actuators in A;

• a cyber component P that interacts with the sensors in S and the actuators in A, and can communicate, via channels, with other cyber components of the same or of other CPSs.

We write E; S ⋈ P to denote the resulting CPS, and use M and N to range over CPSs. Sometimes, when the physical environment E is clearly identified, we write S ⋈ P instead of E; S ⋈ P. CPSs of the form S ⋈ P are called environment-free CPSs.

The syntax of our CPSs is slightly too permissive, as a process might use sensors and/or actuators that are not defined in the physical state. To rule out ill-formed CPSs, we use the following definition.

Definition 5 (Well-formedness). Let E = ⟨evol, meas, inv, safe, ξ_w, ξ_e⟩ be a physical environment, let S = ⟨ξ_x, ξ_s, ξ_a⟩ be a physical state defined on a set of physical variables X, a set of sensors S, and a set of actuators A, and let P be a process. The CPS E; S ⋈ P is said to be well-formed if: (i) any sensor mentioned in P is in the domain of the function ξ_s; (ii) any actuator mentioned in P is in the domain of the function ξ_a.

In the rest of the paper, we will always work with well-formed CPSs and use the following abbreviations.

Notation 2.
We write µ.P for the process defined via the equation Q = ⌊µ.P⌋Q, where Q does not occur in P. Further, we write

• ⌊µ⌋Q as an abbreviation for ⌊µ.nil⌋Q,
• ⌊µ.P⌋ as an abbreviation for ⌊µ.P⌋nil,
• snd c and rcv c, when channel c is used for pure synchronisation,
• tick^k.P as a shorthand for tick. ... .tick.P, where the prefix tick appears k ≥ 0 consecutive times.

Finally, for M = E; S ⋈ P, we write M ‖ Q for E; S ⋈ (P ‖ Q), and M \ c for E; S ⋈ (P \ c).

2.2 Labelled transition semantics

In this subsection, we provide the dynamics of CCPSA in terms of a labelled transition system (LTS) in the SOS style of Plotkin. First, we give in Table 1 an LTS for logical processes; then, in Table 2, we lift the transition rules from processes to environment-free CPSs. In Table 1, the meta-variable λ ranges over labels in the set {tick, τ, cv, c̄v, a!v, s?v, †p!v, †p?v}, where the †-labelled actions denote the attacker's malicious accesses to a physical device p. Rules (Outp), (Inpp) and (Com) serve to model channel communication on some channel c. Rules (Read) and (Write) denote sensor reading and actuator writing, respectively. The rules (Sniff), (Drop) and (Forge) model three different MITM malicious activities: sensor sniffing, dropping of actuator commands, and integrity attacks on data coming from sensors or addressed to actuators. In particular, rule (ActDrop) models a DoS attack to the actuator a, where the update request of the controller is dropped by the attacker and never reaches the actuator, whereas rule (SensIntegr) models an integrity attack on sensor s, as the controller of s is supplied with a fake value v forged by the attack. Rule (Par) propagates untimed actions over parallel components. Rules (Res), (Rec), (Then) and (Else) are standard.
Table 1: LTS for processes

  (Inpp)       rcv c(x).P --cv--> P{v/x}
  (Outp)       snd c⟨v⟩.P --c̄v--> P
  (Com)        P --c̄v--> P′,  Q --cv--> Q′   ⟹   P ‖ Q --τ--> P′ ‖ Q′
  (Par)        P --λ--> P′,  λ ≠ tick   ⟹   P ‖ Q --λ--> P′ ‖ Q
  (Read)       read s(x).P --s?v--> P{v/x}
  (Write)      write a⟨v⟩.P --a!v--> P
  (Sniff)      ⌊sniff s(x).P⌋Q --†s?v--> P{v/x}
  (Drop)       ⌊drop a(x).P⌋Q --†a?v--> P{v/x}
  (Forge)      p ∈ {s, a}   ⟹   ⌊forge p⟨v⟩.P⌋Q --†p!v--> P
  (ActDrop)    P --a!v--> P′,  Q --†a?v--> Q′   ⟹   P ‖ Q --τ--> P′ ‖ Q′
  (SensIntegr) P --†s!v--> P′,  Q --s?v--> Q′   ⟹   P ‖ Q --τ--> P′ ‖ Q′
  (Res)        P --λ--> P′,  λ ∉ {cv, c̄v}   ⟹   P \ c --λ--> P′ \ c
  (Rec)        P{w̃/x̃} --λ--> Q,  H(x̃) = P   ⟹   H⟨w̃⟩ --λ--> Q
  (Then)       ⟦b⟧ = true,  P --λ--> P′   ⟹   if (b) {P} else {Q} --λ--> P′
  (Else)       ⟦b⟧ = false,  Q --λ--> Q′   ⟹   if (b) {P} else {Q} --λ--> Q′
  (TimeNil)    nil --tick--> nil
  (Sleep)      tick.P --tick--> P
  (Timeout)    ⌊µ.P⌋Q --tick--> Q
  (TimePar)    P --tick--> P′,  Q --tick--> Q′   ⟹   P ‖ Q --tick--> P′ ‖ Q′

The four rules (TimeNil), (Sleep), (Timeout) and (TimePar) model the passage of time. For simplicity, we omit the symmetric counterparts of the rules (Com), (ActDrop), (SensIntegr), and (Par).

In Table 2, we lift the transition rules from processes to environment-free CPSs of the form S ⋈ P, for S = ⟨ξ_x, ξ_s, ξ_a⟩. The transition rules are parametric on a physical environment E. Except for rule (Deadlock), all rules have a common premise ξ_x ∈ inv: a system can evolve only if the invariant is satisfied by the current physical state. Here, actions, ranged over by α, are in the set {τ, cv, c̄v, tick, deadlock, unsafe}.
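The passage-of-time rules can be prototyped on a small process AST. The sketch below is our illustrative encoding, covering only a fragment of the syntax; it assumes the timeout's prefix µ does not fire in the current slot, so that a tick is always possible for the operators shown.

```python
from dataclasses import dataclass

@dataclass
class Nil:                      # terminated process
    pass

@dataclass
class Tick:                     # tick.P: sleep one time unit, continue as P
    cont: object

@dataclass
class Par:                      # P || Q
    left: object
    right: object

@dataclass
class Timeout:                  # |mu.P|Q: try the malicious prefix mu now,
    prefix: str                 # otherwise move to Q in the next time slot
    cont: object
    alt: object

def advance(p):
    """One discrete time step, in the spirit of the rules (TimeNil), (Sleep),
    (Timeout) and (TimePar): nil ticks to nil, tick.P to P, |mu.P|Q to Q,
    and a parallel composition ticks when both components do."""
    if isinstance(p, Nil):
        return Nil()
    if isinstance(p, Tick):
        return p.cont
    if isinstance(p, Timeout):
        return p.alt
    if isinstance(p, Par):
        return Par(advance(p.left), advance(p.right))
    raise ValueError("process cannot let time pass")
```

For instance, advancing tick.nil ‖ ⌊drop a(x).nil⌋tick.nil yields nil ‖ tick.nil.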
These actions denote: internal activities (τ); channel transmission (cv and c̄v); the passage of time (tick); and two specific physical events: system deadlock (deadlock) and the violation of the safety conditions (unsafe).

Table 2: LTS for CPSs S ⋈ P, parametric on an environment E = ⟨evol, meas, inv, safe, ξ_w, ξ_e⟩

  (Out)       S = ⟨ξ_x, ξ_s, ξ_a⟩,  P --c̄v--> P′,  ξ_x ∈ inv   ⟹   S ⋈ P --c̄v--> S ⋈ P′
  (Inp)       S = ⟨ξ_x, ξ_s, ξ_a⟩,  P --cv--> P′,  ξ_x ∈ inv   ⟹   S ⋈ P --cv--> S ⋈ P′
  (SensRead)  P --s?v--> P′,  ξ_s(s) = v,  P --†s!v-/->,  ξ_x ∈ inv   ⟹   ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P --τ--> ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P′
  (SensSniff) P --†s?v--> P′,  ξ_s(s) = v,  ξ_x ∈ inv   ⟹   ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P --τ--> ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P′
  (ActWrite)  P --a!v--> P′,  ξ′_a = ξ_a[a ↦ v],  P --†a?v-/->,  ξ_x ∈ inv   ⟹   ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P --τ--> ⟨ξ_x, ξ_s, ξ′_a⟩ ⋈ P′
  (ActIntegr) P --†a!v--> P′,  ξ′_a = ξ_a[a ↦ v],  ξ_x ∈ inv   ⟹   ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P --τ--> ⟨ξ_x, ξ_s, ξ′_a⟩ ⋈ P′
  (Tau)       P --τ--> P′,  ξ_x ∈ inv   ⟹   ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P --τ--> ⟨ξ_x, ξ_s, ξ_a⟩ ⋈ P′
  (Deadlock)  S = ⟨ξ_x, ξ_s, ξ_a⟩,  ξ_x ∉ inv   ⟹   S ⋈ P --deadlock--> S ⋈ P
  (Time)      P --tick--> P′,  S = ⟨ξ_x, ξ_s, ξ_a⟩,  S′ ∈ next(E;S),  ξ_x ∈ inv   ⟹   S ⋈ P --tick--> S′ ⋈ P′
  (Safety)    S = ⟨ξ_x, ξ_s, ξ_a⟩,  ξ_x ∉ safe,  ξ_x ∈ inv   ⟹   S ⋈ P --unsafe--> S ⋈ P

Rules (Out) and (Inp) model transmission and reception, with an external system, on a channel c. Rule (SensRead) models the reading of the current data detected at a sensor s; here, the presence of a malicious action †s!v would prevent the reading of the sensor. We already said that rule (SensIntegr) of Table 1 models integrity attacks on a sensor s. However, together with rule (SensRead), it also serves to implicitly model DoS attacks on a sensor s, as the controller of s cannot read its correct value if the attacker is currently supplying a fake value for it. Rule (SensSniff) allows the attacker to read the confidential value detected at a sensor s. Rule (ActWrite) models the writing of a value v on an actuator a; here, the presence of an attack capable of performing a drop action †a?v prevents the access to the actuator by the controller. Rule (ActIntegr) models a MITM integrity attack to an actuator a, as the actuator is provided with a value forged by the attack. Rule (Tau) lifts non-observable actions from processes to systems. This includes communications on channels and attacks' accesses to physical devices. A similar lifting occurs in rule (Time) for timed actions, where next(E;S) returns the set of possible physical states for the next time slot. Formally, for E = ⟨evol, meas, inv, safe, ξ_w, ξ_e⟩ and S = ⟨ξ_x, ξ_s, ξ_a⟩, we define:

  next(E;S) ≝ { ⟨ξ′_x, ξ′_s, ξ′_a⟩ : ξ′_x ∈ evol(ξ_x, ξ_a, ξ_w) ∧ ξ′_s ∈ meas(ξ′_x, ξ_e) ∧ ξ′_a = ξ_a }.

Thus, by an application of rule (Time) a CPS moves to the next physical state, in the next time slot. Rule (Deadlock) is introduced to signal the violation of the invariant. When the invariant is violated, a system deadlock occurs and then, in CCPSA, the system emits the special action deadlock, forever. Similarly, rule (Safety) is introduced to detect the violation of safety conditions. In this case, the system may emit a special action unsafe and then continue its evolution.

Summarising, in the LTS of Table 2 we define transition rules of the form S ⋈ P --α--> S′ ⋈ P′, parametric on some physical environment E. As physical environments do not change at runtime, S ⋈ P --α--> S′ ⋈ P′ entails E; S ⋈ P --α--> E; S′ ⋈ P′, thus providing the LTS for all CPSs in CCPSA.

Remark 1.
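Under the assumption that evol and meas return finite lists of candidate functions (the paper allows arbitrary, possibly infinite sets), next(E;S) can be sketched directly. The encoding below is ours: functions in R^N are dicts, and a physical state is a triple of dicts.

```python
def next_states(evol, meas, xi_x, xi_a, xi_w, xi_e):
    """Sketch of next(E;S): all physical states admissible in the next time
    slot. evol and meas are assumed to return finite lists of candidate
    state/sensor functions (dicts); the actuator function is unchanged and
    the old sensor function is discarded, as in the definition of next."""
    states = []
    for xi_x2 in evol(xi_x, xi_a, xi_w):        # next admissible state functions
        for xi_s2 in meas(xi_x2, xi_e):         # admissible measurements of them
            states.append((xi_x2, xi_s2, xi_a)) # xi'_a = xi_a
    return states
```

With an evolution law that, say, offers two candidate next states (uncertainty below or above the nominal value) and one admissible measurement each, next_states returns two triples sharing the same actuator function.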
Note that our operational semantics ensures that malicious actions of the form E_s!v (integrity/DoS attack on sensor s) or E_a?v (DoS attack on actuator a) have a pre-emptive power: these attacks can always prevent the regular access to a physical device by its controller.

2.3 Behavioural semantics

Having defined the actions that can be performed by a CPS of the form E; S ⋈ P, we can easily concatenate these actions to define the possible execution traces of the system. Formally, given a trace t = α1 ... αn, we will write −t→ as an abbreviation for −α1→ ... −αn→, and we will use the function #tick(t) to get the number of occurrences of the action tick in t.

The notion of trace allows us to provide a formal definition of system soundness: a CPS is said to be sound if it never deadlocks and never violates the safety conditions.

Definition 6 (System soundness). Let M be a well-formed CPS. We say that M is sound if whenever M −t→ M′, for some t, the actions deadlock and unsafe never occur in t.

In our security analysis, we will always focus on sound CPSs. We recall that the observable activities in CCPSA are: time passing, system deadlock, violation of the safety conditions, and channel communication. Having defined a labelled transition semantics, we are ready to formalise our behavioural semantics, based on execution traces. We adopt a standard notation for weak transitions: we write ⇒ for (−τ→)*, whereas =α⇒ means ⇒ −α→ ⇒, and finally =α̂⇒ denotes ⇒ if α = τ, and =α⇒ otherwise. Given a trace t = α1 ... αn, we write =t̂⇒ as an abbreviation for =α̂1⇒ ... =α̂n⇒.

Definition 7 (Trace preorder). We write M ⊑ N if whenever M −t→ M′, for some t, there is N′ such that N =t̂⇒ N′.

Remark 2.
Unlike other process calculi, in CCPSA our trace preorder is able to observe (physical) deadlock, due to the presence of the rule (Deadlock) and the special action deadlock: whenever M ⊑ N, then M eventually deadlocks if and only if N eventually deadlocks (see Lemma 1 in the appendix).

Our trace preorder can be used for compositional reasoning in those contexts that do not interfere on physical devices (sensors and actuators), while they may interfere on logical components (via channel communication). In particular, the trace preorder is preserved by parallel composition of physically-disjoint CPSs, by parallel composition of pure-logical processes, and by channel restriction. Intuitively, two CPSs are physically-disjoint if they have different plants but may share logical channels for communication purposes. More precisely, physically-disjoint CPSs have disjoint state variables and disjoint physical devices (sensors and actuators). As we consider only well-formed CPSs (Definition 5), this ensures that a CPS cannot physically interfere with a parallel CPS by acting on its physical devices.

Formally, let S_i = ⟨ξ^i_x, ξ^i_s, ξ^i_a⟩ and E_i = ⟨evol_i, meas_i, inv_i, safe_i, ξ^i_w, ξ^i_e⟩ be physical states and physical environments, respectively, associated to sets of state variables X_i, sets of sensors S_i, and sets of actuators A_i, for i ∈ {1, 2}. For X1 ∩ X2 = ∅, S1 ∩ S2 = ∅ and A1 ∩ A2 = ∅, we define:

• the disjoint union of the physical states S1 and S2, written S1 ⊎ S2, to be the physical state ⟨ξx, ξs, ξa⟩ such that: ξx = ξ¹x ∪ ξ²x, ξs = ξ¹s ∪ ξ²s, and ξa = ξ¹a ∪ ξ²a;

• the disjoint union of the physical environments E1 and E2, written E1 ⊎ E2, to be the physical environment ⟨evol, meas, inv, safe, ξw, ξe⟩ such that:
  1. evol = evol1 ∪ evol2
  2. meas = meas1 ∪ meas2
  3.
S1 ⊎ S2 ∈ inv iff S1 ∈ inv1 and S2 ∈ inv2
  4. S1 ⊎ S2 ∈ safe iff S1 ∈ safe1 and S2 ∈ safe2
  5. ξw = ξ¹w ∪ ξ²w
  6. ξe = ξ¹e ∪ ξ²e.

Definition 8 (Physically-disjoint CPSs). Let M_i = E_i; S_i ⋈ P_i, for i ∈ {1, 2}. We say that M1 and M2 are physically-disjoint if S1 and S2 have disjoint sets of state variables, sensors and actuators. In this case, we write M1 ⊎ M2 to denote the CPS defined as (E1 ⊎ E2); (S1 ⊎ S2) ⋈ (P1 ∥ P2).

A pure-logical process is a process that may interfere on communication channels but never interferes on physical devices, as it never accesses sensors and/or actuators. Basically, a pure-logical process is a TPL process [27]. Thus, in a system M ∥ Q, where M is an arbitrary CPS, a pure-logical process Q cannot interfere with the physical evolution of M. A process Q can, however, definitely interact with M via communication channels, and hence affect its observable behaviour.

Definition 9 (Pure-logical processes). A process P is called pure-logical if it never acts on sensors and/or actuators.

Now, we can finally state the compositionality of our trace preorder ⊑ (the proof can be found in the appendix).

Theorem 1 (Compositionality of ⊑). Let M and N be two arbitrary CPSs in CCPSA.
1. M ⊑ N implies M ⊎ O ⊑ N ⊎ O, for any physically-disjoint CPS O;
2. M ⊑ N implies M ∥ P ⊑ N ∥ P, for any pure-logical process P;
3. M ⊑ N implies M \ c ⊑ N \ c, for any channel c.

The reader may wonder whether our trace preorder ⊑ is preserved by more permissive contexts. The answer is no. Suppose that in the second item of Theorem 1 we allowed a process P that can also read on sensors. In this case, even if M ⊑ N, the parallel process P might read a different value in the two systems at the very same sensor s (due to the sensor error) and transmit these different values on a free channel, breaking the congruence.
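The disjoint union S1 ⊎ S2 of two physical states amounts to unioning the three component maps, guarded by the side condition that the domains are disjoint. A minimal sketch (ours, with hypothetical names; the paper gives no code):

```python
def disjoint_union(state1, state2):
    """Disjoint union of two physical states (xi_x, xi_s, xi_a).

    The guard mirrors the side condition X1 ∩ X2 = S1 ∩ S2 = A1 ∩ A2 = ∅:
    state variables, sensors and actuators must not overlap."""
    xi_x1, xi_s1, xi_a1 = state1
    xi_x2, xi_s2, xi_a2 = state2
    for m1, m2 in ((xi_x1, xi_x2), (xi_s1, xi_s2), (xi_a1, xi_a2)):
        if m1.keys() & m2.keys():
            raise ValueError("CPSs are not physically-disjoint")
    return ({**xi_x1, **xi_x2}, {**xi_s1, **xi_s2}, {**xi_a1, **xi_a2})
```

The guard is what makes Theorem 1(1) work: a physically-disjoint context can never overwrite the other plant's variables or devices.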
Activities on actuators may also lead to different behaviours of the compound systems: M and N may have physical components that are not exactly aligned. A similar reasoning applies when composing CPSs with non physically-disjoint ones: noise on physical devices may break the compositionality result.

As we are interested in formalising timing aspects of attacks, such as beginning and duration, we propose a timed variant of ⊑ up to a (possibly infinite) discrete time interval m..n, with m ∈ ℕ⁺ and n ∈ ℕ⁺ ∪ {∞}. Intuitively, we write M ⊑_{m..n} N if the CPS N simulates the execution traces of M in all time slots, except for those contained in the discrete time interval m..n.

Definition 10 (Trace preorder up to a time interval). We write M ⊑_{m..n} N, for m ∈ ℕ⁺ and n ∈ ℕ⁺ ∪ {∞}, with m ≤ n, if the following conditions hold:
• m is the minimum integer for which there is a trace t, with #tick(t) = m − 1, such that M −t→ but N =t̂⇒ does not hold;
• n is the infimum element of ℕ⁺ ∪ {∞}, n ≥ m, such that whenever M −t1→ M′, with #tick(t1) = n − 1, there is t2, with #tick(t1) = #tick(t2), such that N −t2→ N′, for some N′, and M′ ⊑ N′.

In Definition 10, the first item says that N can simulate the traces of M for at most m − 1 time slots, whereas the second item says two things: (i) in the time interval m..n the simulation does not hold; (ii) starting from the time slot n + 1 the CPS N can simulate again the traces of M. Note that inf(∅) = ∞. Thus, if M ⊑_{m..∞} N, then N simulates M only in the first m − 1 time slots.

Theorem 2 (Compositionality of ⊑_{m..n}). Let M and N be two arbitrary CPSs in CCPSA.
1. M ⊑_{m..n} N implies that for any physically-disjoint CPS O there are m′, n′ ∈ ℕ⁺ ∪ {∞}, with m′..n′ ⊆ m..n, such that M ⊎ O ⊑_{m′..n′} N ⊎ O;
2. M ⊑_{m..n} N implies that for any pure-logical process P there are m′, n′ ∈ ℕ⁺ ∪ {∞}, with m′..n′ ⊆ m..n, such that M ∥ P ⊑_{m′..n′} N ∥ P;
3. M ⊑_{m..n} N implies that for any channel c there are m′, n′ ∈ ℕ⁺ ∪ {∞}, with m′..n′ ⊆ m..n, such that M \ c ⊑_{m′..n′} N \ c.

The proof can be found in the appendix.

[Figure 2: The main structure of the CPS Sys (Engine with sensor s_t and actuator cool, Ctrl, IDS, channel sync).]

3 A Running Example

In this section, we introduce a running example to illustrate how we can precisely represent CPSs and a variety of different physics-based attacks. In practice, we formalise a relatively simple CPS Sys in which the temperature of an engine is maintained within a specific range by means of a cooling system. We wish to remark here that while we have kept the example simple, it is actually far from trivial and designed to describe a wide number of attacks. The main structure of the CPS Sys is shown in Figure 2.

3.1 The CPS Sys

The physical state State of the engine is characterised by: (i) a state variable temp containing the current temperature of the engine, and an integer state variable stress keeping track of the level of stress of the mechanical parts of the engine due to high temperatures (exceeding 9.9 degrees); this integer variable ranges from 0, meaning no stress, to 5, for high stress; (ii) a sensor s_t (such as a thermometer or a thermocouple) measuring the temperature of the engine; (iii) an actuator cool to turn on/off the cooling system.

The physical environment of the engine, Env, is constituted by: (i) a simple evolution law evol that increases (respectively, decreases) the value of temp by one degree per time unit, when the cooling system is inactive (respectively, active), up to the uncertainty of the system; the variable stress is increased each time the current temperature is above 9.
9 degrees, and dropped to 0 otherwise; (ii) a measurement map meas returning the value detected by the sensor s_t, up to the error associated to the sensor; (iii) an invariant set saying that the system gets faulty when the temperature of the engine gets out of the range [0, 50]; (iv) a safety set to express that the system moves to an unsafe state when the level of stress reaches the threshold 5; (v) an uncertainty function in which each state variable may evolve with an uncertainty δ = 0.4 degrees; (vi) a sensor-error function saying that the sensor s_t has an accuracy ε = 0.1 degrees.

Formally, State = ⟨ξx, ξs, ξa⟩ where:
• ξx ∈ ℝ^{temp, stress}, ξx(temp) = 0 and ξx(stress) = 0;
• ξs ∈ ℝ^{s_t} and ξs(s_t) = 0;
• ξa ∈ ℝ^{cool} and ξa(cool) = off; for the sake of simplicity, we can assume ξa to be a mapping {cool} → {on, off} such that ξa(cool) = off if ξa(cool) ≥ 0, and ξa(cool) = on if ξa(cool) < 0;

and Env = ⟨evol, meas, inv, safe, ξw, ξe⟩ with:
• evol(ξ^i_x, ξ^i_a, ξw) is the set of functions ξ ∈ ℝ^{temp, stress} such that:
  – ξ(temp) = ξ^i_x(temp) + heat(ξ^i_a, cool) + γ, with γ ∈ [−δ, +δ], and heat(ξ^i_a, cool) = −1 if ξ^i_a(cool) = on (active cooling), and heat(ξ^i_a, cool) = +1 if ξ^i_a(cool) = off (inactive cooling);
  – ξ(stress) = min(5, ξ^i_x(stress) + 1) if ξ^i_x(temp) > 9.9; ξ(stress) = 0, otherwise;
• meas(ξ^i_x, ξe) = { ξ : ξ(s_t) ∈ [ξ^i_x(temp) − ε, ξ^i_x(temp) + ε] };
• inv = { ξ^i_x : 0 ≤ ξ^i_x(temp) ≤ 50 };

[Figure 3: Three possible evolutions in time of the state variable temp of the CPS Sys (x-axis: time; y-axis: actual temperature in degrees).]

• safe = { ξ^i_x : ξ^i_x(stress) < 5 } (we recall that the stress threshold is 5);
• ξw ∈ ℝ^{temp, stress}, ξw(temp) = 0.4 = δ and ξw(stress) = 0;
• ξe ∈ ℝ^{s_t} and ξe(s_t) = 0.
1 = ε.

For the cyber component of the CPS Sys, we define two parallel processes: Ctrl and IDS. The former models the controller activity, consisting in reading the temperature sensor and in governing the cooling system via its actuator, whereas the latter models a simple intrusion detection system that attempts to detect and signal anomalies in the behaviour of the system [23]. Intuitively, Ctrl senses the temperature of the engine at each time slot. When the sensed temperature is above 10 degrees, the controller activates the coolant. The cooling activity is maintained for 5 consecutive time units. After that time, the controller synchronises with the IDS component via a private channel sync, and then waits for instructions, via a channel ins. The IDS component checks whether the sensed temperature is still above 10. If this is the case, it sends an alarm of "high temperature", via a specific channel, and then tells Ctrl to keep cooling for 5 more time units; otherwise, if the temperature is not above 10, the IDS component requires Ctrl to stop the cooling activity.

  Ctrl    = read s_t(x). if (x > 10) {Cooling} else {tick. Ctrl}
  Cooling = write cool⟨on⟩. tick⁵. Check
  Check   = snd sync. rcv ins(y). if (y = keep_cooling) {tick⁵. Check}
            else {write cool⟨off⟩. tick. Ctrl}
  IDS     = rcv sync. read s_t(x).
            if (x > 10) {snd alarm⟨high_temp⟩. snd ins⟨keep_cooling⟩. tick. IDS}
            else {snd ins⟨stop⟩. tick. IDS}.

Thus, the whole CPS is defined as:

  Sys = Env; State ⋈ (Ctrl ∥ IDS) \ {sync, ins}

For the sake of simplicity, our IDS component is quite basic: for instance, it does not check whether the temperature is too low. However, it is straightforward to replace it with a more sophisticated one, containing more informative tests on sensor values and/or on actuator commands.
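The closed loop of Sys can be approximated by a short discrete-time simulation (a rough sketch of ours, not the calculus semantics: randomness replaces the nondeterministic uncertainty γ ∈ [−0.4, 0.4] and the sensor error in [−0.1, 0.1], and the 5-unit cooling blocks of Ctrl/IDS are modelled by a simple countdown, here called "budget"):

```python
import random

DELTA, EPS = 0.4, 0.1   # physical uncertainty and sensor accuracy

def simulate(steps=1000, seed=1):
    temp, stress, cooling, budget = 0.0, 0, False, 0
    rng = random.Random(seed)
    for _ in range(steps):
        sensed = temp + rng.uniform(-EPS, EPS)        # read s_t(x)
        if not cooling and sensed > 10:
            cooling, budget = True, 5                 # Ctrl: write cool<on>
        elif cooling and budget == 0:
            if sensed > 10:
                budget = 5                            # IDS: keep_cooling (+ alarm)
            else:
                cooling = False                       # IDS: stop
        heat = -1.0 if cooling else +1.0              # evolution law evol
        temp += heat + rng.uniform(-DELTA, DELTA)
        stress = min(5, stress + 1) if temp > 9.9 else 0
        if cooling:
            budget -= 1
        assert 0.0 <= temp <= 50.0, "invariant violated (deadlock)"
        assert stress < 5, "safety violated (unsafe)"
    return temp
```

In line with Propositions 1 and 2 below, no run should trip the assertions; note, however, that such a simulation explores only sampled traces (cf. Remark 3), so it supports, but does not prove, soundness.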
Figure 3 shows three possible evolutions in time of the state variable temp of Sys: (i) a first one (in red), in which the temperature of the engine always grows of 1 − δ = 0.6 degrees per time unit, when the cooling is off, and always decreases of 1 + δ = 1.4 degrees per time unit, when the cooling is on; (ii) a second one (in blue), in which the temperature always grows of 1 + δ = 1.4 degrees per time unit, when the cooling is off, and always decreases of 1 − δ = 0.6 degrees per time unit, when the cooling is on; (iii) and a third one (in yellow), in which, depending on whether the cooling is off or on, at each time step the temperature grows or decreases of an arbitrary offset lying in the interval [1 − δ, 1 + δ].

Our operational semantics allows us to formally prove a number of properties of our running example. For instance, Proposition 1 says that Sys is sound and never fires the alarm.

Proposition 1. If Sys −t→ for some trace t = α1 ... αn, then αi ∈ {τ, tick}, for any i ∈ {1, ..., n}.

Actually, we can be quite precise on the temperature reached by Sys before and after the cooling: in each of the 5 rounds of cooling, the temperature will drop of a value lying in the real interval [1 − δ, 1 + δ], where δ is the uncertainty.

Proposition 2. For any execution trace of Sys, we have:
• when Sys turns on the cooling, the value of the state variable temp ranges over (9.9, 11.5];
• when Sys turns off the cooling, the value of the variable temp ranges over (2.9, 8.5].

The proofs of Propositions 1 and 2 can be found in the appendix. In the following section, we will verify the safety properties stated in these two propositions relying on the statistical model checker Uppaal SMC [15].

[Figure 4: Uppaal SMC model for the physical component of Sys.]
3.2 A formalisation of Sys in Uppaal SMC

In this section, we formalise our running example in Uppaal SMC [15], the statistical extension of the Uppaal model checker [5] supporting the analysis of systems expressed as compositions of timed and/or probabilistic automata. In Uppaal SMC, the user must specify two main statistical parameters α and ε, ranging in the interval [0, 1], and representing the probability of false negatives and the probabilistic uncertainty, respectively. Thus, given a CTL property of the system under investigation, the tool returns a probability estimate for that property, lying in a confidence interval [p − ε, p + ε], for some probability p ∈ [0, 1], with an accuracy 1 − α. The number of runs necessary to ensure the required accuracy is then computed by the tool relying on the Chernoff-Hoeffding theory [12].

[Figure 5: Uppaal SMC model for the network component of Sys.]
[Figure 6: Uppaal SMC model for the logical component of Sys.]

3.2.1 Model

The Uppaal SMC model of our use case Sys is given by three main components represented in terms of parallel timed automata: the physical component, the network, and the logical component. The physical component, whose model is shown in Figure 4, consists of four automata: (i) the _Engine_ automaton that governs the evolution of the variable temp by means of the heat and cool functions; (ii) the _Sensor_ automaton that updates the global variable sens at each measurement request; (iii) the _Actuator_ automaton that activates/deactivates the cooling system; (iv) the _Safety_ automaton that handles the integer variable stress, via the update_stress function, and the Boolean variables safe and deadlocks, associated to the safety set safe and the invariant set inv of Sys, respectively.
We also have a small automaton to model a discrete notion of time (via a synchronisation channel tick), as the evolution of the state variables is represented via difference equations. The network, whose model is given in Figure 5, consists of two proxies: a proxy to relay actuator commands between the actuator device and the controller, and a second proxy to relay measurement requests between the sensor device and the logical components (controller and IDS). The logical component, whose model is given in Figure 6, consists of two automata, _Ctrl_ and _IDS_, to model the controller and the Intrusion Detection System, respectively; both of them synchronise with their associated proxy, copying a fresh value of sens into their local variables (sens_ctrl and sens_ids, respectively). Under proper conditions, the _IDS_ automaton fires alarms by setting a Boolean variable alarm.

⁵ In Section 6.2, we explain why we need to implement an automaton to check for safety conditions rather than verifying a safety property.

3.2.2 Verification

We conduct our safety verification using a notebook with the following set-up: (i) 2.8 GHz Intel i7 7700HQ, with 16 GB memory, and Linux Ubuntu 16.04 operating system; (ii) Uppaal SMC model-checker 64-bit, version 4.1.19. The statistical parameters of false negatives (α) and probabilistic uncertainty (ε) are both set to 0.01, leading to a confidence level of 99%. As a consequence, having fixed these parameters, for each of our experiments, Uppaal SMC runs a number of runs that may vary from a few hundred to 26492 (cf. Chernoff-Hoeffding bounds). We basically use Uppaal SMC to verify properties expressed in terms of time-bounded CTL formulae of the form □_{[t1,t2]} e_prop and ◇_{[0,t2]} e_prop,⁶ where t1 and t2 are time instants according to the discrete representation of time in Uppaal SMC.
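The maximum number of runs quoted above can be recovered from the standard Chernoff-Hoeffding bound Pr(|p̂ − p| ≥ ε) ≤ 2·e^(−2Nε²): requiring this to be at most α gives N ≥ ln(2/α)/(2ε²). This is a sketch of the textbook bound; Uppaal SMC's exact sequential estimation strategy may differ in detail.

```python
import math

def hoeffding_runs(alpha: float, eps: float) -> int:
    """Smallest N with 2*exp(-2*N*eps^2) <= alpha, i.e. N >= ln(2/alpha)/(2*eps^2)."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * eps * eps))

print(hoeffding_runs(0.01, 0.01))  # → 26492
```

With α = ε = 0.01 this yields exactly the 26492 runs mentioned in the text.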
In practice, we use formulae of the form □_{[t1,t2]} e_prop to compute the probability that a property e_prop⁷ holds in all time slots of the time interval t1..t2, whereas with formulae of the form ◇_{[0,t2]} e_prop we calculate the probability that the property e_prop holds in at least one time slot of the time interval 0..t2. Thus, instead of proving Proposition 1, we verify, with a 99% accuracy, that in all possible executions that are at most 1000 time slots long, the system Sys results to be sound and alarm free, with probability 0.99. Formally, we verify the following three properties:
• □_{[1,1000]} (¬deadlocks), expressing that the system does not deadlock;
• □_{[1,1000]} (safe), expressing that the system does not violate the safety conditions;
• □_{[1,1000]} (¬alarm), expressing that the IDS does not fire any alarm.

Furthermore, instead of Proposition 2, we verify, with the same accuracy and for runs of the same length (up to a short initial transitory phase lasting 5 time instants), that if the cooling system is off, then the temperature of the engine lies in the real interval (2.9, 8.5], otherwise it ranges over the interval (9.9, 11.5]. Formally, we verify the following two properties:
• □_{[5,1000]} (Cooling_off ⟹ (temp > 2.9 ∧ temp ≤ 8.5))
• □_{[5,1000]} (Cooling_on ⟹ (temp > 9.9 ∧ temp ≤ 11.5)).

The verification of each of the five properties above requires around 15 minutes. The Uppaal SMC models of our system and of the attacks discussed in the next section are available at the repository https://bitbucket.org/AndreiMunteanu/cps_smc/src/.

Remark 3. In our Uppaal SMC model we decided to represent both the uncertainty of the physical evolution (in the functions heat and cool of _Engine_) and the measurement noise (in _Sensor_) in a probabilistic manner, via random extractions.
Here, the reader may wonder whether it would have been enough to restrict our SMC analysis by considering only upper and lower bounds on these two quantities. Actually, this is not the case, because such a restricted analysis might miss admissible execution traces. To see this, suppose to work with a physical uncertainty that is always either 0.4 or −0.4. Then, the temperature reached by the system would always be of the form n.k, for n, k ∈ ℕ and k even. As a consequence, our analysis would miss all execution traces in which the system reaches the maximum admissible temperature of 11.5 degrees.

4 Physics-based Attacks

In this section, we use CCPSA to formalise a threat model of physics-based attacks, i.e., attacks that can manipulate sensor and/or actuator signals in order to drive a sound CPS into an undesired state [55]. An attack may have different levels of access to physical devices; for example, it might be able to get read access to the sensors but not write access, or it might get write-only access to the actuators but not read access. This level of granularity is very important to model precisely how physics-based attacks can affect a CPS [13]. In CCPSA, we have a syntactic way to distinguish malicious processes from honest ones.

Definition 11 (Honest system). A CPS E; S ⋈ P is honest if P is honest, where P is honest if it does not contain constructs of the form ⌊µ.P1⌋P2.

We group physics-based attacks in classes that describe both the malicious activities and the timing aspects of the attack.

⁶ The 0 in the left-hand side of the time interval is imposed by the syntax of Uppaal SMC.
⁷ e_prop is a side-effect free expression over variables (e.g., clock variables, location names and primitive variables) [5].
Intuitively, a class of attacks provides information about which physical devices are accessed by the attacks of that class, how they are accessed (read and/or write), when the attack begins and when the attack ends. Thus, let I be the set of all possible malicious activities on the physical devices of a system, m ∈ ℕ⁺ be the time slot when an attack starts, and n ∈ ℕ⁺ ∪ {∞} be the time slot when the attack ends. We then say that an attack A is of class C ∈ [I → ℘(m..n)] if:
1. all possible malicious activities of A coincide with those contained in I;
2. the first of those activities may occur in the m-th time slot but not before;
3. the last of those activities may occur in the n-th time slot but not after;
4. for ι ∈ I, C(ι) returns a (possibly empty) set of time slots when A may read/tamper with the device ι (this set is contained in m..n);
5. C is a total function, i.e., if no attacks of class C can achieve the malicious activity ι ∈ I, then C(ι) = ∅.

Definition 12 (Class of attacks). Let I = {E_p? : p ∈ S ∪ A} ∪ {E_p! : p ∈ S ∪ A} be the set of all possible malicious activities on physical devices. Let m ∈ ℕ⁺ and n ∈ ℕ⁺ ∪ {∞}, with m ≤ n. A class of attacks C ∈ [I → ℘(m..n)] is a total function such that for any attack A of class C we have: (i) C(ι) = {k : A −t→ −ιv→ A′ ∧ k = #tick(t) + 1}, for ι ∈ I; (ii) m = inf {k : k ∈ C(ι) ∧ ι ∈ I}; (iii) n = sup {k : k ∈ C(ι) ∧ ι ∈ I}.

Along the lines of [17], we can say that an attack A affects a sound CPS M if the execution of the compound system M ∥ A differs from that of the original system M, in an observable manner. Basically, a physics-based attack can influence the system under attack in at least two different ways:
• The system M ∥ A might deadlock when M may not; this means that the attack A affects the availability of the system.
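Definition 12 can be pictured concretely by representing a class of attacks C as a finite map from malicious activities to sets of time slots, from which the attack window m..n is derived as the inf/sup of all slots. A small sketch of ours (finite windows only; the n = ∞ case of the definition is not representable this way):

```python
def attack_window(C):
    """Given C : activities -> sets of time slots, return (m, n) with
    m = inf and n = sup over all occupied slots; C(i) == set() encodes
    an activity that attacks of this class never perform."""
    slots = set().union(*C.values()) if C else set()
    if not slots:
        return None          # no malicious activity at all
    return (min(slots), max(slots))

# Class of the attack of Example 1 below (DoS/integrity on actuator cool,
# operating exclusively in the m-th time slot), instantiated for m = 9:
C_example1 = {"drop cool": {9}, "forge cool": {9},
              "sniff s_t": set(), "forge s_t": set()}
```

Here `attack_window(C_example1)` returns (9, 9), matching a class in [I → ℘(m..m)] with m = 9.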
We recall that in the context of CPSs, deadlock is a particularly severe physical event.
• The system M ∥ A might have non-genuine execution traces containing observables (violations of safety conditions or communications on channels) that cannot be reproduced by M; here the attack affects the integrity of the system behaviour.

Definition 13 (Attack tolerance/vulnerability). Let M be an honest and sound CPS. We say that M is tolerant to an attack A if M ∥ A ⊑ M. We say that M is vulnerable to an attack A if there is a time interval m..n, with m ∈ ℕ⁺ and n ∈ ℕ⁺ ∪ {∞}, such that M ∥ A ⊑_{m..n} M.

[Figure 7: Uppaal SMC model for the attacker A_m of Example 1.]

Thus, if a system M is vulnerable to an attack A of class C ∈ [I → ℘(m..n)], during the time interval m′..n′, then the attack operates during the interval m..n but it influences the system under attack in the time interval m′..n′ (obviously, m′ ≥ m). If n′ is finite, then we have a temporary attack, otherwise we have a permanent attack. Furthermore, if m′ − n is big enough and n − m is small, then we have a quick nasty attack that affects the system late enough to allow attack camouflages [24]. On the other hand, if m′ is significantly smaller than n, then the attack affects the observable behaviour of the system well before its termination, and the CPS has good chances of undertaking countermeasures to stop the attack. Finally, if M ∥ A −t→ −deadlock→, for some trace t, then we say that the attack A is lethal, as it is capable to halt (deadlock) the CPS M; this is obviously a permanent attack. Note that, according to Definition 13, the tolerance (or vulnerability) of a CPS also depends on the capability of the IDS component to detect and signal undesired physical behaviours.
In fact, the IDS component might be designed to detect abnormal physical behaviours going well further than deadlocks and violations of the safety conditions. According to the literature, we say that an attack is stealthy if it is able to drive the CPS under attack into an incorrect physical state (either deadlock or violation of the safety conditions) without being noticed by the IDS component.

4.1 Three different attacks on the physical devices of the CPS Sys

In this subsection, we present three different attacks to the CPS Sys described in Section 3. Here, we use Uppaal SMC to verify the models associated to the system under attack in order to detect deadlocks, violations of the safety conditions, and IDS failures.

Example 1. Consider the following DoS/integrity attack on the actuator cool, of class C ∈ [I → ℘(m..m)], with C(E_cool?) = C(E_cool!) = {m} and C(ι) = ∅, for ι ∉ {E_cool?, E_cool!}:

  A_m = tick^{m−1}. ⌊drop cool(x). if (x = off) {forge cool⟨off⟩} else {nil}⌋.

Here, the attack A_m operates exclusively in the m-th time slot, when it tries to drop an eventual cooling command (on or off) coming from the controller, and fabricates a fake command to turn off the cooling system. Thus, if the controller sends in the m-th time slot a command to turn off the coolant, then nothing bad happens, as the attack will put the same message back. On the other hand, if the controller sends a command to turn the cooling on, then the attack will drop the command. We recall that the controller will turn on the cooling only if the sensed temperature is greater than 10 (and hence temp > 9.9); this may happen only if m > 8.
Since the command to turn the cooling on is never re-sent by Ctrl, the temperature will continue to rise, and after only 4 time units the system may violate the safety conditions, emitting an action unsafe, while the IDS component will start sending alarms every 5 time units, until the whole system deadlocks because the temperature reaches the threshold of 50 degrees. Here, the IDS component of Sys is able to detect the attack with only one time unit delay.

Proposition 3. Let Sys be our running example and A_m be the attack defined in Example 1. Then,
• Sys ∥ A_m ⊑ Sys, for 1 ≤ m ≤ 8;
• Sys ∥ A_m ⊑_{m+4..∞} Sys, for m > 8.

In order to support the statement of Proposition 3 (see also the proof in the appendix), we verify our Uppaal SMC model of Sys in which the communication network used by the controller to access the actuator is compromised. More precisely, we replace the _Proxy_Actuator_ automaton of Figure 5 with a compromised one, provided in Figure 7, that implements the malicious activities of the MITM attacker A_m of Example 1.

[Figure 8: Probability results of ◇_{[0,m]} (Cooling_on ∧ global_clock ≥ m) by varying m in 1..300 (x-axis: attack time; y-axis: probability).]

We have done our analysis, with a 99% accuracy, for execution traces that are at most 1000 time units long and restricting the attack time m to the time interval 1..300. The results of our analysis are:
• when m ∈ 1..8, the attack is harmless, as the system results to be safe, deadlock free and alarm free, with probability 0.99;
• when m ∈ 9..
300, we have the following situation:
  – the probability that at the attack time m the controller sends a command to activate the cooling system (thus triggering the attacker, which will drop the command) can be obtained by verifying the property ◇_{[0,m]} (Cooling_on ∧ global_clock ≥ m); as shown in Figure 8, when m grows in the time interval 1..300, the resulting probability stabilises around the value 0.096;
  – up to the (m+3)-th time slot the system under attack remains safe, i.e., both properties □_{[1,m+3]} (safe) and □_{[1,m+3]} (¬deadlock) hold with probability 0.99;
  – up to the (m+4)-th time slot no alarms are fired, i.e., the property □_{[1,m+4]} (¬alarm) holds with probability 0.99 (no false positives);
  – in the (m+4)-th time slot the system under attack might become unsafe, as the probability, for m ∈ 9..300, that the property ◇_{[0,m+4]} (¬safe) is satisfied stabilises around the value 0.095;⁸
  – in the (m+5)-th time slot the IDS may fire an alarm, as the probability, for m ∈ 9..300, that the property ◇_{[0,m+5]} (alarm) is satisfied stabilises around the value 0.094;⁹
  – the system under attack may deadlock, as the property ◇_{[0,1000]} (deadlocks) is satisfied with probability 0.096.¹⁰

⁸ Since this probability coincides with that of ◇_{[0,m]} (Cooling_on ∧ global_clock ≥ m), it appears very likely that the activation of the cooling system in the m-th time slot triggers the attacker, whose activity drags the system into an unsafe state with a delay of 4 time slots.
⁹ As the two probabilities are pretty much the same, and □_{[1,m+3]} (safe) and □_{[1,m+4]} (¬alarm) hold, the IDS seems to be quite effective in detecting the violations of the safety conditions in the (m+4)-th time slot, with only one time slot delay.
¹⁰ Since the probabilities are still the same, we argue that when the system reaches an unsafe state it is not able to recover and is doomed to deadlock.

Figure 9: Uppaal SMC model for the attacker A_m of Example 2

Example 2. Consider the following DoS/integrity attack to the sensor s_t, of class C ∈ [I → 𝒫(2..∞)] such that C(E_{s_t}?) = {2}, C(E_{s_t}!) = 2..∞, and C(ι) = ∅, for ι ∉ {E_{s_t}!, E_{s_t}?}. The attack begins its activity in the time slot m, with m > 8, and then never stops:

A_m = tick^{m−1}.A
A   = ⌊sniff s_t(x).if (x ≤ 10) {B⟨x⟩} else {tick.A}⌋
B(y) = ⌊forge s_t⟨y⟩.tick.B⟨y⟩⌋B⟨y⟩.

Here, the attack A_m behaves as follows. It sleeps for m−1 time slots and then, in the following time slot, it sniffs the current temperature at sensor s_t. If the sensed temperature v is greater than 10, then it moves to the next time slot and restarts sniffing; otherwise, from that time on, it will keep sending the same temperature v to the logical components (controller and IDS). Actually, once the forgery activity starts, the process Ctrl will always receive a temperature below 10 and will never activate the cooling system (and consequently the IDS). As a consequence, the system under attack Sys ∥ A will first move to an unsafe state, until the invariant is violated and the system deadlocks. Indeed, in the worst execution scenario, already in the (m+1)-th time slot the temperature may exceed 10 degrees, and after 4 tick-actions, in the (m+5)-th time slot, the system may violate the safety conditions, emitting an unsafe action. Since the temperature will keep growing without any cooling activity, the deadlock of the CPS cannot be avoided. This is a lethal attack, as it causes a shutdown of the system; it is also a stealthy attack, as it remains undetected because the IDS never gets into action.
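To make the observable effect of this attack concrete, the following is a minimal Python sketch, our own illustration and not part of the paper's calculus or of its Uppaal SMC models; the function name `attacked_readings` and the plain-list encoding of per-slot sensor values are our assumptions.

```python
def attacked_readings(true_temps, m):
    """Readings seen by the logical components (controller and IDS)
    under the Example-2 attack.

    The attacker sleeps for m-1 time slots, then sniffs each slot;
    once it observes a temperature <= 10 it replays that value forever.
    `true_temps` lists the genuine per-slot sensor values.
    """
    forged = None                      # value replayed once forgery starts
    out = []
    for slot, v in enumerate(true_temps, start=1):
        if forged is not None:
            out.append(forged)         # forgery phase: replay the stale value
        elif slot >= m and v <= 10:
            forged = v                 # first sniffed value <= 10: start forging
            out.append(forged)
        else:
            out.append(v)              # attacker asleep or still sniffing
    return out
```

Once the stale value is replayed, Ctrl never sees a temperature above 10, which is exactly why the cooling system is never activated and the deadlock becomes unavoidable.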
Proposition 4. Let Sys be our running example and A_m, for m > 8, be the attack defined in Example 2. Then Sys ∥ A_m ⊑_{m+5..∞} Sys.

Here, we verify the Uppaal SMC model of Sys in which we assume that its sensor device is compromised (we recall that our MITM forgery attack on sensors or actuators can be assimilated to device compromise). The interested reader may find the proof in the appendix. In particular, we replace the _Sensor_ automaton of Figure 4 with a compromised one, provided in Figure 9, which implements the malicious activities of the MITM attacker A_m of Example 2.

We have carried out our analysis, with a 99% accuracy, for execution traces that are at most 1000 time units long, restricting the attack time m to the integer interval 9..300. The results of our analysis are:
• up to the (m+4)-th time slot the system under attack remains safe, deadlock free, and alarm free, i.e., all three properties □[1,m+4](safe), □[1,m+4](¬deadlock), and □[1,m+4](¬alarm) hold with probability 0.99;
• in the (m+5)-th time slot the system under attack might become unsafe, as the probability, for m ∈ 9..300, that the property ◊[0,m+5](¬safe) is satisfied stabilises around 0.104;
• the system under attack will eventually deadlock not later than 80 time slots after the attack time m, as the property □[m+80,1000](deadlock) is satisfied with probability 0.99;
• finally, the attack is stealthy, as the property □[1,1000](¬alarm) holds with probability 0.99.

Figure 10: Uppaal SMC model for the attacker A_n of Example 3

Now, let us examine a similar but less severe attack.

Example 3. Consider the following DoS/integrity attack to sensor s_t, of class C ∈ [I → 𝒫(1..n)], with C(E_{s_t}!) = C(E_{s_t}?) = 1..n and C(ι) = ∅, for ι ∉ {E_{s_t}!, E_{s_t}?}:

A_n = ⌊sniff s_t(x).⌊forge s_t⟨x−4⟩.tick.A_{n−1}⌋A_{n−1}⌋A_{n−1}, for n > 0
A_0 = nil.
In this attack, for n consecutive time slots, A_n sends to the logical components (controller and IDS) the current sensed temperature decreased by an offset of 4. The effect of this attack on the system depends on the duration n of the attack itself: (i) for n ≤ 8, the attack is harmless, as the variable temp may not reach a (critical) temperature above 9.9; (ii) for n = 9, the variable temp might reach a temperature above 9.9 in the 9-th time slot, and the attack would delay the activation of the cooling system by one time slot; as a consequence, the system might get into an unsafe state in the time interval 14..15, but no alarm will be fired; (iii) for n ≥ 10, the system may get into an unsafe state in the time slot 14 and in the following n+11 time slots; in this case, this would not be a stealthy attack, as the IDS will fire the alarm with a delay of at most two time slots; rather, it is a temporary attack that ends in the time slot n+11.

Proposition 5. Let Sys be our use case and A_n be the attack defined in Example 3. Then:
• Sys ∥ A_n ⊑ Sys, for n ≤ 8,
• Sys ∥ A_n ⊑_{14..15} Sys, for n = 9,
• Sys ∥ A_n ⊑_{14..n+11} Sys, for n ≥ 10.

Here, we verify the Uppaal SMC model of Sys in which we replace the _Proxy_Sensor_ automaton of Figure 5 with a compromised one, provided in Figure 10, which implements the MITM activities of the attacker A_n of Example 3. The interested reader may find the proof in the appendix.

We have carried out our analysis, with a 99% accuracy, for execution traces that are at most 1000 time units long, assuming that the duration n of the attack may vary in the integer interval 1..300. The results of our analysis are:
• when n ∈ 1..8, the system under attack remains safe, deadlock free, and alarm free, i.e., all three properties □[1,1000](safe), □[1,1000](¬deadlock), and □[1,1000](¬alarm) hold with probability 0.99;
• when n = 9, we have the following situation:
  – the system under attack is deadlock free, i.e., the property □[1,1000](¬deadlock) holds with probability 0.99;
  – the system remains safe and alarm free, except for the time interval 14..15, i.e., all the following properties □[1,13](safe), □[1,13](¬alarm), □[16,1000](safe), and □[16,1000](¬alarm) hold with probability 0.99;
  – in the time interval 14..15, we may have violations of the safety conditions, as the property ◊[0,14](¬safe ∧ global_clock ≥ 14) is satisfied with probability 0.62, while the property ◊[0,15](¬safe ∧ global_clock ≥ 15) is satisfied with probability 0.21; both violations are stealthy, as the property □[14,15](¬alarm) holds with probability 0.99;
• when n ≥ 10, we have the following situation:
  – the system is deadlock free, i.e., the property □[1,1000](¬deadlock) holds with probability 0.99;
  – the system remains safe except for the time interval 14..n+11, i.e., the two properties □[1,13](safe) and □[n+12,1000](safe) hold with probability 0.99;
  – the system is alarm free except for the time interval n+1..n+11, i.e., the two properties □[0,n](¬alarm) and □[n+12,1000](¬alarm) hold with probability 0.99;
  – in the 14-th time slot the system under attack may reach an unsafe state, as the probability, for n ∈ 10..300, that the property ◊[0,14](¬safe ∧ global_clock ≥ 14) is satisfied stabilises around 0.548;
  – once the attack has terminated, in the time interval n+1..n+11, the system under attack has good chances to reach an unsafe state, as the probability, for n ∈ 10..300, that the property ◊[0,n+11](¬safe ∧ n+1 ≤ global_clock ≤ n+11) is satisfied stabilises around 0.672;
  – the violations of the safety conditions remain completely stealthy only up to the duration n of the attack (we recall that □[0,n](¬alarm) is satisfied with probability 0.99); the probability, for n ∈ 10..300, that the property ◊[0,n+11](alarm) is satisfied stabilises around 0.13; thus, in the time interval n+1..n+11, only a small portion of the violations of the safety conditions is detected by the IDS, while the great majority of them remains stealthy.

4.2 A technique for proving attack tolerance/vulnerability

In this subsection, we provide sufficient criteria to prove attack tolerance/vulnerability to attacks of an arbitrary class C. Actually, we do more than that: we provide sufficient criteria to prove attack tolerance/vulnerability to all attacks of any class C′ that is somehow "weaker" than a given class C.

Definition 14. Let C_i ∈ [I → 𝒫(m_i..n_i)], for i ∈ {1, 2}, be two classes of attacks, with m_1..n_1 ⊆ m_2..n_2. We say that C_1 is weaker than C_2, written C_1 ⪯ C_2, if C_1(ι) ⊆ C_2(ι) for any ι ∈ I.

Intuitively, if C_1 ⪯ C_2 then: (i) the attacks of class C_1 might achieve fewer malicious activities than any attack of class C_2 (formally, there may be ι ∈ I such that C_1(ι) = ∅ and C_2(ι) ≠ ∅); (ii) for those malicious activities ι ∈ I achieved by the attacks of both classes C_1 and C_2 (i.e., C_1(ι) ≠ ∅ and C_2(ι) ≠ ∅), if they may be perpetrated by the attacks of class C_1 at some time slot k ∈ m_1..n_1 (i.e., k ∈ C_1(ι)), then all attacks of class C_2 may do the same activity ι at the same time k (i.e., k ∈ C_2(ι)).

The next objective is to define a notion of most powerful attack (also called top attacker) of a given class C, such that, if a CPS M tolerates the most powerful attack of class C, then it also tolerates any attack of class C′, with C′ ⪯ C.
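The weaker-than relation of Definition 14 is a pointwise set inclusion, and is straightforward to check mechanically. Below is a small Python sketch, our own encoding rather than anything from the paper; representing a class C as a dictionary from activity labels to sets of time slots is an assumption.

```python
def weaker_than(c1, c2):
    """Definition 14: C1 is weaker than C2 iff C1(iota) is a subset of
    C2(iota) for every malicious activity iota.

    A class is encoded as a dict mapping activity labels (e.g. "E_st?")
    to sets of time slots; an absent label stands for the empty set.
    """
    labels = set(c1) | set(c2)
    return all(c1.get(l, set()) <= c2.get(l, set()) for l in labels)
```

For instance, a class that may only sniff s_t in slot 2 is weaker than one that may sniff in slots 1..3 and also forge in slots 4..5, but not vice versa.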
We will provide a similar condition for attack vulnerability: let M be a CPS vulnerable to Top(C) in the time interval m_1..n_1; then, for any attack A of class C′, with C′ ⪯ C, if M is vulnerable to A then it is so for a smaller time interval m_2..n_2 ⊆ m_1..n_1.

Our notion of top attacker has two extra ingredients with respect to the physics-based attacks seen up to now: (i) nondeterminism, and (ii) time-unguarded recursive processes. This extra power of the top attacker is not a problem, as we are looking for sufficient criteria. With respect to nondeterminism, we assume a generic procedure rnd() that, given an arbitrary set Z, returns an element of Z chosen in a nondeterministic manner. This procedure allows us to express nondeterministic choice, P ⊕ Q, as an abbreviation for the process if (rnd({true, false})) {P} else {Q}. Thus, for ι ∈ {E_p? : p ∈ S ∪ A} ∪ {E_p! : p ∈ S ∪ A}, m ∈ ℕ⁺, n ∈ ℕ⁺ ∪ {∞}, with m ≤ n, and T ⊆ m..n, we define the attack process Att(ι, k, T)¹¹ as the attack which may achieve the malicious activity ι at the time slot k, and which tries to do the same in all subsequent time slots of T. Formally,

Att(E_a?, k, T) = if (k ∈ T) {(⌊drop a(x).Att(E_a?, k, T)⌋Att(E_a?, k+1, T)) ⊕ tick.Att(E_a?, k+1, T)}
                  else {if (k < sup(T)) {tick.Att(E_a?, k+1, T)} else {nil}}

Att(E_s?, k, T) = if (k ∈ T) {(⌊sniff s(x).Att(E_s?, k, T)⌋Att(E_s?, k+1, T)) ⊕ tick.Att(E_s?, k+1, T)}
                  else {if (k < sup(T)) {tick.Att(E_s?, k+1, T)} else {nil}}

Att(E_p!, k, T) = if (k ∈ T) {(⌊forge p⟨rnd(R)⟩.Att(E_p!, k, T)⌋Att(E_p!, k+1, T)) ⊕ tick.Att(E_p!, k+1, T)}
                  else {if (k < sup(T)) {tick.Att(E_p!, k+1, T)} else {nil}}.

Note that, for T = ∅, we assume sup(T) = −∞.
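Operationally, in every time slot k ∈ T the process Att(ι, k, T) nondeterministically either attempts the activity ι or ticks to the next slot. The following Python sketch is our own abstraction of this behaviour: process recursion becomes a loop, and the rnd() procedure becomes a pluggable choice function, so one call generates one possible resolution of the nondeterminism.

```python
def top_attack_schedule(T, horizon, choose):
    """One resolution of the nondeterminism in Att(iota, 1, T): for each
    time slot k up to `horizon`, the attacker may attempt the activity
    iota only when k is in T; `choose(k)` returns True to attempt iota
    in slot k, and False to tick to the next slot.

    Returns the set of slots in which iota is attempted.
    """
    return {k for k in range(1, horizon + 1) if k in T and choose(k)}
```

Enumerating all such schedules recovers the intuition behind the top attacker: every attack of a weaker class can only act in a subset of the slots allowed to Att.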
We can now use the definition above to formalise the notion of most powerful attack of a given class C.

Definition 15 (Top attacker). Let C ∈ [I → 𝒫(m..n)] be a class of attacks. We define

Top(C) = ∏_{ι ∈ I} Att(ι, 1, C(ι))

as the most powerful attack, or top attacker, of class C.

The following theorem provides soundness criteria for attack tolerance and attack vulnerability.

Theorem 3 (Soundness criteria). Let M be an honest and sound CPS, C an arbitrary class of attacks, and A an attack of a class C′, with C′ ⪯ C.
• If M ∥ Top(C) ⊑ M then M ∥ A ⊑ M.
• If M ∥ Top(C) ⊑_{m_1..n_1} M then either M ∥ A ⊑ M or M ∥ A ⊑_{m_2..n_2} M, with m_2..n_2 ⊆ m_1..n_1.

Corollary 1. Let M be an honest and sound CPS, and C a class of attacks. If Top(C) is not lethal for M, then any attack A of class C′, with C′ ⪯ C, is not lethal for M. If Top(C) is not a permanent attack for M, then any attack A of class C′, with C′ ⪯ C, is not a permanent attack for M.

The following example illustrates how Theorem 3 can be used to infer attack tolerance/vulnerability with respect to an entire class of attacks.

Example 4. Consider our running example Sys and a class of attacks C_m, for m ∈ ℕ, such that C_m(E_cool?) = C_m(E_cool!) = {m} and C_m(ι) = ∅, for ι ∉ {E_cool?, E_cool!}. Attacks of class C_m may tamper with the actuator cool only in the time slot m (i.e., in the time interval m..m). The attack A_m of Example 1 is of class C_m. In the following analysis in Uppaal SMC of the top attacker Top(C_m), we will show that both the vulnerability window and the probability of successfully attacking the system represent an upper bound for the attack A_m of Example 1 of class C_m.
Technically, we verify the Uppaal SMC model of Sys in which we replace the _Proxy_Actuator_ automaton of Figure 5 with a compromised one, provided in Figure 11, which implements the activities of the top attacker Top(C_m). We carry out our analysis with a 99% accuracy, for execution traces that are at most 1000 time slots long, limiting the attack time m to the integer interval 1..300. To explain our analysis further, let us provide details on how Top(C_m) affects Sys when compared to the attacker A_m of class C_m seen in Example 1.

• In the time interval 1..m, the attacked system remains safe, deadlock free, and alarm free. Formally, the three properties □[1,m](safe), □[1,m](¬deadlock) and □[1,m](¬alarm) hold with probability 0.99. Thus, in this time interval, the top attacker is harmless, as is A_m.

¹¹ In the case of sensor sniffing, we might avoid adding this specific attack process, as our top attacker process can forge any possible value without the need to read sensors.

Figure 11: Uppaal SMC model for the top attacker Top(C_m) of Example 4

Figure 12: Results of ◊[0,m+3](deadlock ∧ global_clock ≥ m+1) by varying the attack time m

• In the time interval m+1..m+3, the system exposed to the top attacker may deadlock when m ∈ 1..8; for m > 8 the system under attack is deadlock free (see Figure 12). This is because the top attacker, unlike the attacker A_m, can forge in the first 8 time slots cool-on commands, turning on the cooling and dropping the temperature below zero in the time interval m+1..m+3. Note that no alarms or unsafe behaviours occur in this case, as neither the safety process nor the IDS checks whether the temperature drops below a certain threshold. Formally, the properties □[m+1,m+3](safe) and □[m+1,m+3](¬alarm) hold with probability 0.99, as already seen for the attacker A_m.
• In the time interval m+4..1000, the top attacker has better chances to deadlock the system when compared with the attacker A_m (see Figure 13). With respect to safety and alarms, the top attacker and the attacker A_m have the same probability of success (the properties □[m+4,1000](safe) and □[m+4,1000](¬alarm) return the same probability results).

Figure 13: Results of ◊[0,1000](deadlock ∧ global_clock ≥ m+4) by varying the attack time m

This example shows how the verification of a top attacker Top(C) provides an upper bound on the effectiveness of the entire class of attacks C, in terms of both the vulnerability window and the probability of successfully attacking the system. Of course, the accuracy of such an approximation cannot be estimated a priori.

5 Impact of a physics-based attack

In the previous section, we have grouped physics-based attacks by focussing on the physical devices under attack and the timing aspects of the attack (Definition 12). Then, we have provided a formalisation of when a CPS should be considered tolerant/vulnerable to an attack (Definition 13). In this section, we show that these two formalisations are important not only to demonstrate the tolerance (or vulnerability) of a CPS with respect to certain attacks, but also to evaluate the disruptive impact of those attacks on the target CPS [21, 44].

The goal of this section is to provide a formal metric to estimate the impact of a successful attack on the physical behaviour of a CPS. In particular, we focus on the ability that an attack may have to drag a CPS out of the correct behaviour modelled by its evolution map, with the given uncertainty. Recall that evol is monotone with respect to the uncertainty. Thus, as stated in Proposition 6, an increase of the uncertainty may translate into a widening of the range of the possible behaviours of the CPS.
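The monotonicity just recalled can be illustrated on a toy one-step evolution. The sketch below is our own simplification, not the paper's model: a scalar state, an interval abstraction of the admissible next states, and the function names are all assumptions. Enlarging the uncertainty can only widen the set of reachable values.

```python
def reachable_interval(x, evol, xi):
    """Admissible next values of a scalar state x: the nominal
    evolution evol(x) blurred by the uncertainty xi >= 0."""
    nominal = evol(x)
    return (nominal - xi, nominal + xi)

def contains(outer, inner):
    """True if interval `outer` includes interval `inner`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]
```

With any fixed evolution map, the interval obtained for a larger xi always contains the one obtained for a smaller xi, which is the interval-level reading of the monotonicity property.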
In the following, given the physical environment E = ⟨evol, meas, inv, safe, ξ_w, ξ_e⟩, we write E[ξ_w ← ξ′_w] as an abbreviation for ⟨evol, meas, inv, safe, ξ′_w, ξ_e⟩; similarly, for M = E; S ⋈ P we write M[ξ_w ← ξ′_w] for E[ξ_w ← ξ′_w]; S ⋈ P.

Proposition 6 (Monotonicity). Let M be an honest and sound CPS with uncertainty ξ_w. If ξ_w ≤ ξ′_w and M −t→ M′, then M[ξ_w ← ξ′_w] −t→ M′[ξ_w ← ξ′_w].

However, a wider uncertainty in the model does not always correspond to a widening of the possible behaviours of the CPS. In fact, this depends on the intrinsic tolerance of a CPS with respect to changes in the uncertainty function. In the following, we will write ξ_w + ξ′_w to denote the function ξ″_w ∈ ℝ^X such that ξ″_w(x) = ξ_w(x) + ξ′_w(x), for any x ∈ X.

Definition 16 (System ξ-tolerance). An honest and sound CPS M with uncertainty ξ_w is said to be ξ-tolerant, for ξ ∈ ℝ^X and ξ ≥ 0, if

ξ = sup{ξ′ : M[ξ_w ← ξ_w + η] ⊑ M, for any 0 ≤ η ≤ ξ′}.

Intuitively, if a CPS M has been designed with a given uncertainty ξ_w, but M is actually ξ-tolerant, with ξ > 0, then the uncertainty ξ_w is somehow underestimated: the real uncertainty of M is given by ξ_w + ξ. This information is quite important when trying to estimate the impact of an attack on a CPS. In fact, if a system M has been designed with a given uncertainty ξ_w, but M is actually ξ-tolerant, with ξ > 0, then an attack has (at least) a "room for manoeuvre" ξ to degrade the whole CPS without being observed (and hence detected).

Let Sys be our running example. In the rest of the section, with an abuse of notation, we will write Sys[δ ← γ] to denote Sys where the uncertainty δ of the variable temp has been replaced with γ.

Example 5. The CPS Sys is 1/20-tolerant, as sup{ξ′ : Sys[δ ← δ + η] ⊑ Sys, for 0 ≤ η ≤ ξ′} is equal to 1/20.
Since δ + ξ = 8/20 + 1/20 = 9/20, this statement relies on the following proposition, whose proof can be found in the appendix.

Proposition 7. We have
• Sys[δ ← γ] ⊑ Sys, for γ ∈ (8/20, 9/20),
• Sys[δ ← γ] ⋢ Sys, for γ > 9/20.

Now everything is in place to define our metric to estimate the impact of an attack.

Definition 17 (Impact). Let M be an honest and sound CPS with uncertainty ξ_w. We say that an attack A has definitive impact ξ on the system M if

ξ = inf{ξ′ : ξ′ ∈ ℝ^X ∧ ξ′ > 0 ∧ M ∥ A ⊑ M[ξ_w ← ξ_w + ξ′]}.

It has pointwise impact ξ on the system M at time m if

ξ = inf{ξ′ : ξ′ ∈ ℝ^X ∧ ξ′ > 0 ∧ M ∥ A ⊑_{m..n} M[ξ_w ← ξ_w + ξ′], n ∈ ℕ⁺ ∪ {∞}}.

Intuitively, the impact of an attacker A on a system M measures the perturbation introduced by the presence of the attacker in the compound system M ∥ A with respect to the original system M. With this definition, we can establish either the definitive (and hence maximum) impact of the attack A on the system M, or the impact at a specific time m. In the latter case, by definition of ⊑_{m..n}, there are two possibilities: either the impact of the attack keeps growing after time m, or in the time slot m+1 the system under attack deadlocks.

The impact of Top(C) provides an upper bound for the impact of all attacks of class C′, with C′ ⪯ C, as shown in the following theorem (proved in the appendix).

Theorem 4 (Top attacker's impact). Let M be an honest and sound CPS, and C an arbitrary class of attacks. Let A be an arbitrary attack of class C′, with C′ ⪯ C.
• The definitive impact of Top(C) on M is greater than or equal to the definitive impact of A on M.
• If Top(C) has pointwise impact ξ on M at time m, and A has pointwise impact ξ′ on M at time m′, with m′ ≤ m, then ξ′ ≤ ξ.

To help the intuition on the impact metric defined in Definition 17, we give a couple of examples.
Here, we focus on the role played by the size of the vulnerability window.

Example 6. Let us consider the attack A_n of Example 3, for n ∈ {8, 9, 10}. Then,
• A_8 has definitive impact 0 on Sys,
• A_9 has definitive impact 0.23 on Sys,
• A_10 has definitive impact 0.4 on Sys.

Formally, the impacts of these three attacks are obtained by calculating inf{ξ′ : ξ′ > 0 ∧ Sys ∥ A_n ⊑ Sys[δ ← δ + ξ′]}, for n ∈ {8, 9, 10}. Attack A_9 has a very low impact on Sys, as it may drag the system into a temporary unsafe state in the time interval 14..15, whereas A_10 has a slightly stronger impact, as it may induce a temporary unsafe state during the larger time interval 14..21. Technically, since δ + ξ = 0.4 + 0.4 = 0.8, the calculation of the impact of A_10 relies on the following proposition, whose proof can be found in the appendix.

Proposition 8. Let A_10 be the attack defined in Example 3. Then:
• Sys ∥ A_10 ⋢ Sys[δ ← γ], for γ ∈ (0.4, 0.8),
• Sys ∥ A_10 ⊑ Sys[δ ← γ], for γ > 0.8.

On the other hand, the attack provided in Example 2, driving the system to a (permanent) deadlock state, has a much stronger impact on the CPS Sys than the attack of Example 3.

Example 7. Let us consider the attack A_m of Example 2, for m > 8. As already discussed, this is a stealthy lethal attack that has a very severe and high impact. In fact, it has a definitive impact of 8.5 on the CPS Sys. Formally, 8.5 = inf{ξ′ : ξ′ > 0 ∧ Sys ∥ A_m ⊑ Sys[δ ← δ + ξ′]}. Technically, since δ + ξ = 0.4 + 8.5 = 8.9, what is stated in this example relies on the following proposition, whose proof can be found in the appendix.

Proposition 9. Let A_m be the attack defined in Example 2. Then:
• Sys ∥ A_m ⋢ Sys[δ ← γ], for γ ∈ (0.4, 8.9),
• Sys ∥ A_m ⊑ Sys[δ ← γ], for γ > 8.9.
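Since enlarging ξ′ can only make M[ξ_w ← ξ_w + ξ′] more permissive, the infimum of Definition 17 can be approximated numerically once a tolerance check is available. The sketch below is our own illustration for a scalar uncertainty; the black-box predicate `tolerated`, standing for the check M ∥ A ⊑ M[ξ_w ← ξ_w + ξ′], is an assumption, not something the paper provides as executable code.

```python
def definitive_impact(tolerated, hi=100.0, eps=1e-6):
    """Approximate inf{xi' > 0 : tolerated(xi')} by bisection,
    assuming `tolerated` is monotone: once it holds for some xi',
    it holds for every larger xi'."""
    if not tolerated(hi):
        return float('inf')  # no finite enlargement absorbs the attack
    lo = 0.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if tolerated(mid):
            hi = mid         # tolerated: the infimum is at or below mid
        else:
            lo = mid         # not tolerated: the infimum is above mid
    return hi
```

For instance, with a predicate that holds exactly for ξ′ > 0.4 (the situation of Proposition 8, where δ = 0.4 and Sys ∥ A_10 ⊑ Sys[δ ← γ] for γ > 0.8), the bisection converges to the definitive impact 0.4 of A_10.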
Thus, Definition 17 provides an instrument to estimate the impact of a successful attack on a CPS in terms of the perturbation introduced both on its physical and on its logical processes. However, there is at least one other question that a CPS designer could ask: "Is there a way to estimate the chances that an attack will be successful during the execution of my CPS?" To paraphrase in a more operational manner: how many execution traces of my CPS are prone to be attacked by a specific attack? As argued in the future work, we believe that probabilistic metrics might turn out to be very useful in this respect [40, 43].

6 Conclusions, related and future work

6.1 Summary

We have provided theoretical foundations to reason about and formally detect attacks to physical devices of CPSs. A straightforward utilisation of these methodologies is for model-checking or monitoring, in order to formally analyse security properties of CPSs either before system deployment or, when static analysis is not feasible, at runtime, to promptly detect undesired behaviours. To that end, we have proposed a hybrid process calculus, called CCPSA, as a formal specification language to model physical and cyber components of CPSs as well as MITM physics-based attacks. Note that our calculus is general enough to represent Supervisory Control And Data Acquisition (SCADA) systems as cyber components which can easily interact with controllers and IDSs via channel communications. SCADA systems are the main technology used by system engineers to supervise the activities of complex CPSs.

Based on CCPSA and its labelled transition semantics, we have formalised a threat model for CPSs by grouping physics-based attacks in classes, according to the target physical devices and two timing parameters: the beginning and the duration of the attacks.
Then, we have developed two different compositional trace semantics for CCPSA to assess attack tolerance/vulnerability with respect to a given attack. Such a tolerance may hold ad infinitum or for a limited amount of time. In the latter case, the CPS under attack is vulnerable and the attack affects the observable behaviour of the system only after a certain point in time, when the attack itself may already be achieved or still working. Along the lines of GNDC [17], we have defined a notion of top attacker, Top(C), of a given class of attacks C, which has been used to provide sufficient criteria to prove attack tolerance/vulnerability to all attacks of class C (and weaker ones). Then, we have provided a metric to estimate the maximum impact introduced in the system under attack with respect to its genuine behaviour, according to its evolution law and the uncertainty of the model. We have proved that the impact of the most powerful attack Top(C) represents an upper bound for the impact of any attack A of class C (and weaker ones).

Finally, we have formalised a running example in Uppaal SMC [15], the statistical extension of the Uppaal model checker [5]. Our goal was to test Uppaal SMC as an automatic tool for the static security analysis of a simple but significant CPS exposed to a number of different physics-based attacks with different impacts on the system under attack. Here, it is important to note that, although we have verified most of the properties stated in the paper, we have not been able to capture time properties on the responsiveness of the IDS to violations of the safety conditions.
Examples of such properties are: (i) there are time slots m and k such that the system may have an unsafe state at some time n > m, and the IDS detects this violation with a delay of at least k time slots (k being a lower bound of the reaction time of the IDS), or (ii) there is a time slot n in which the IDS fires an alarm but neither an unsafe state nor a deadlock occurs in the time interval n−k..n+k: this would provide a tolerance on the occurrence of false positives. Furthermore, Uppaal SMC does not support the verification of nested formulae. Thus, although from a designer's point of view it would have been much more practical to verify a logic formula of the form ∃◊(□[t,t+5] temp > 9.9) to check safety and invariant conditions, in Uppaal SMC we had to implement a _Safety_ automaton that is not really part of our CPS (for more details see the discussion of related work).

6.2 Related work

A number of approaches have been proposed for modelling CPSs using hybrid process algebras [14, 7, 57, 52, 20]. Among these approaches, our calculus CCPSA shares some similarities with the φ-calculus [52]. However, unlike CCPSA, in the φ-calculus, given a hybrid system (E, P), the process P can dynamically change the evolution law in E. Furthermore, the φ-calculus does not have a representation of physical devices and measurement law, which are instead crucial for us to model physics-based attacks that operate in a timely fashion on sensors and actuators. More recently, Galpin et al. [20] have proposed a process algebra in which the continuous part of the system is represented by appropriate variables whose changes are determined by active influences (i.e., commands on actuators). Many good surveys on the security of cyber-physical systems have been published recently (see, e.g., [23, 62, 2, 63]), including a survey of surveys [22].
In particular, the surveys [63, 62] provide a systematic categorisation of 138 selected papers on CPS security. Among those 138 papers, 65 adopt a discrete notion of time similar to ours, 26 a continuous one, 55 a quasi-static time model, and the rest use a hybrid time model. This study encouraged us in adopting a discrete time model for physical processes rather than a continuous one. Still, one might wonder what is actually lost when one adopts a discrete rather than a continuous time model, in particular when the attacker has the possibility to move in a continuous-time setting. A continuous time model is, of course, more expressive. For instance, Kanovich et al. [32] identified a novel vulnerability in the context of cryptographic protocols for CPSs in which the attacker works in a continuous-time setting to fool discrete-time verifiers. However, we believe that, for physics-based attacks, little is lost by adopting a discrete time model. In fact, sensor measurements and actuator commands are elaborated within controllers, which are digital devices with an intrinsic discrete notion of time. In particular, with respect to the dropping of actuator commands and the forging of sensor measurements, there are no differences between discrete-time and continuous-time attackers, given that to achieve those malicious activities the attacker has to synchronise with the controller. Thus, there remain only two potential malicious activities: sensor sniffing and the forging of actuator commands. Can a continuous-time attacker, able to carry out these two malicious activities, be more disruptive than a similar attacker adopting a discrete-time model? This would only be the case when dealing with very rare physical processes changing their physical state in an extremely fast way, faster than the controller, which is the one dictating the discrete time of the CPS.
However, we believe that CPSs of this kind would be hardly controllable, as they would pose serious safety issues even in the absence of any attacker.

The survey [23] provides an exhaustive review of papers on physics-based anomaly detection, proposing a unified taxonomy, whereas the survey [2] presents the main solutions in the estimation of the consequences of cyber-attacks, attack modelling and detection, and the development of security architectures (the main types of attacks and threats against CPSs are analysed and grouped in a tree structure). Huang et al. [30] were among the first to propose threat models for CPSs. Along with [33, 34], they stressed the role played by timing parameters in integrity and DoS attacks. Gollmann et al. [24] discussed possible goals (equipment damage, production damage, compliance violation) and stages (access, discovery, control, damage, cleanup) of physics-based attacks. In this article, we focused on the "damage" stage, where the attacker already has a rough idea of the plant and the control architecture of the target CPS. As we remarked in Section 1, here we focus on an attacker who has already entered the CPS, without considering how the attacker gained access to the system, which could have happened in several ways, for instance by attacking an Internet-accessible controller or one of the communication protocols.

Almost all papers discussed in the surveys mentioned above [63, 23, 2] investigate attacks on CPSs and their protection by relying on simulation test systems to validate the results, rather than formal methodologies. We are aware of a number of works applying formal methods to CPS security, although they apply methods, and most of the time have goals, that are quite different from ours. We discuss the most significant ones in the following.

Burmester et al.
[11] employed hybrid timed automata to give a threat framework based on the traditional Byzantine faults model for crypto-security. However, as remarked in [55], physics-based attacks and faults have inherently distinct characteristics: faults are physical events that affect the system behaviour, where simultaneous events do not act in a coordinated way, whereas cyber-attacks may be performed over a significant number of attack points and in a coordinated way. In [59], Vigo presented an attack scenario that addresses some of the peculiarities of a cyber-physical adversary, and discussed how this scenario relates to other attack models popular in the security protocol literature. Then, in [60], Vigo et al. proposed an untimed calculus of broadcasting processes equipped with notions of failed and unwanted communication. These works differ quite considerably from ours; e.g., they focus on DoS attacks without taking into consideration timing aspects or the impact of the attack. Cómbita et al. [13] and Zhu and Basar [64] applied game theory to capture the conflict of goals between an attacker who seeks to maximise the damage inflicted on a CPS's security and a defender who aims to minimise it [42]. Rocchetto and Tippenhauer [51] introduced a taxonomy of the diverse attacker models proposed for CPS security and outlined requirements for generalised attacker models; in [50], they then proposed an extended Dolev-Yao attacker model suitable for CPSs. In their approach, physical-layer interactions are modelled as abstract interactions between logical components to support reasoning on the physical-layer security of CPSs. This is done by introducing additional orthogonal channels. Time is not represented. Nigam et al.
[46] worked around the notion of Timed Dolev-Yao Intruder Models for Cyber-Physical Security Protocols by bounding the number of intruders required for the automated verification of such protocols. Following a tradition in security protocol analysis, they provide an answer to the question: How many intruders are enough for verification, and where should they be placed? They also extend the strand space model to CPS protocols by allowing for the symbolic representation of time, so that they can use the tool Maude [47] along with SMT support. Their notion of time is however different from ours, as they focus on the time a message needs to travel from one agent to another. The paper does not mention physical devices, such as sensors and/or actuators. There are a few approaches that carry out information-flow security analysis on discrete/continuous models for CPSs. Akella et al. [1] proposed an approach to perform information flow analysis, including both trace-based analysis and automated analysis through process algebra specification. This approach has been used to verify process algebra models of a gas pipeline system and a smart electric power grid system. Bodei et al. [9] proposed a process calculus supporting a control flow analysis that safely approximates the abstract behaviour of IoT systems. Essentially, they track how data spread from sensors to the logics of the network, and how physical data are manipulated. In [8], the same authors extend their work to infer quantitative measures to establish the cost of possible security countermeasures, in terms of time and energy. Another discrete model has been proposed by Wang [61], where Petri-net models have been used to verify non-deducibility security properties of a natural gas pipeline system.
More recently, Bohrer and Platzer [10] introduced dHL, a hybrid logic for verifying cyber-physical hybrid-dynamic information flows, communicating information through both discrete computation and physical dynamics, so that security is ensured even when attackers observe continuously-changing values in continuous time. Huang et al. [29] proposed a risk assessment method that uses a Bayesian network to model the attack propagation process and infers the probabilities of sensors and actuators being compromised. These probabilities are fed into a stochastic hybrid system (SHS) model to predict the evolution of the physical process being controlled. Then, the security risk is quantified by evaluating the system availability with the SHS model. As regards tools for the formal verification of CPSs, we remark that we tried to verify our case study using model-checking tools for distributed systems such as PRISM [36], Uppaal [6], Real-Time Maude [47], and prohver within the MODEST TOOLSET [26]. In particular, as our example adopts a discrete notion of time, we started looking at tools supporting discrete time. PRISM, for instance, relies on Markov decision processes or discrete-time Markov chains, depending on whether or not one is interested in modelling nondeterminism. It supports the verification of both CTL and LTL properties (when dealing with nonprobabilistic systems). This allowed us to express the formula ∃♦(□[t,t+5] temp > 9.9) to verify violations of the safety conditions, avoiding the implementation of the Safety automaton. However, using integer variables to represent state variables with a fixed precision requires the introduction of extra transitions (to deal with nondeterministic errors), which significantly complicates the PRISM model.
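The bounded property above has a simple operational reading: a violation is a reachable time slot t from which temp stays above 9.9 for five further consecutive slots. As a rough illustration only (our own sketch, independent of the actual PRISM and Uppaal encodings, which are not shown here), such a violation can be detected on a finite discrete-time trace by a small monitor:

```python
def violates_safety(trace, threshold=9.9, horizon=5):
    """Check the property exists-eventually(always on [t, t+horizon] temp > threshold)
    on a finite trace: return True iff some slot t exists such that temp
    exceeds the threshold in all slots t, t+1, ..., t+horizon."""
    for t in range(len(trace) - horizon):
        window = trace[t:t + horizon + 1]
        if all(v > threshold for v in window):
            return True
    return False
```

For instance, a trace in which temp climbs above 9.9 and stays there for six consecutive slots is flagged as a violation, whereas a trace in which temp drops below the threshold at least once in every window of six slots is considered safe.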
In this respect, Uppaal appears to be more efficient than PRISM, as we have been able to concisely express the error occurring in integer state variables thanks to the select() construct, in which the user can fix the granularity adopted to approximate a dense interval. This discrete representation provides an under-approximation of the system behaviour; thus, a finer granularity translates into an exponential increase of the complexity of the system, with obvious consequences on the verification performance. Then, we tried to model our case study in Real-Time Maude, a completely different framework for real-time systems, based on rewriting logic. The language supports object-like inheritance features that are quite helpful to represent complex systems in a modular manner. We used communication channels to implement our attacks on the physical devices. Furthermore, we used rational variables for a more concise discrete representation of state variables. We have been able to verify LTL and T-CTL properties, although the verification process turned out to be quite slow due to a proliferation of rewriting rules when fixing a reasonable granularity to approximate dense intervals. As the verification logic is quite powerful, there is no need to implement an ad hoc process to check for safety. Finally, we also tried to model our case study in the safety model checker prohver within the MODEST TOOLSET (see [38]). We specified our case study in the high-level language HMODEST, supporting: (i) differential inclusion to model linear CPSs with constant bounded derivatives; (ii) linear formulae to express nondeterministic assignments within a dense interval; (iii) a compositional programming style inherited from process algebra; (iv) shared actions to synchronise parallel components. However, we faced the same performance limitations encountered in Uppaal.
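The granularity/performance trade-off just mentioned can be made concrete with a back-of-the-envelope sketch (our own illustration, not taken from the Uppaal model): sampling a dense error interval [−d, d] with n points per time slot under-approximates the dense behaviour, and exhaustively exploring k slots of nondeterministic error requires n^k branches.

```python
from itertools import product

def discretise(d, n):
    """n equally spaced sample points of the dense interval [-d, d]:
    a discrete under-approximation of the nondeterministic error."""
    step = 2 * d / (n - 1)
    return [-d + i * step for i in range(n)]

def error_traces(d, n, k):
    """All length-k sequences of sampled errors: n**k branches to explore."""
    return list(product(discretise(d, n), repeat=k))
```

With n = 5 samples and k = 6 slots one already gets 5^6 = 15625 branches; refining the resolution to n = 9 yields 9^6 = 531441, which is the exponential growth in verification effort noted above.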
Thus, we decided to move to statistical model checking. Finally, this article extends the preliminary conference version [39] in the following aspects: (i) the calculus has been slightly redesigned by distinguishing physical state and physical environment, adding specific constructs to sniff, drop and forge packets, and removing, for simplicity, protected physical devices; (ii) the two trace semantics have been proven to be compositional, i.e., preserved by properly defined contexts; (iii) both our running example Sys and the attacks proposed in Examples 1, 2, 3 and 4 have been implemented and verified in Uppaal SMC.

6.3 Future work

While much is still to be done, we believe that our paper provides a stepping stone for the development of formal and automated tools to analyse the security of CPSs. We will consider applying, possibly after proper enhancements, existing tools and frameworks for automated security protocol analysis, resorting to the development of a dedicated tool if existing ones prove not up to the task. We will also consider further security properties and concrete examples of CPSs, as well as other kinds of physics-based attacks, such as delays in the communication of measurements and/or commands, and periodic attacks, i.e., attacks that operate in a periodic fashion, inducing periodic physical effects on the targeted system that may easily be confused by engineers with system malfunctions. This will allow us to refine the classes of attacks we have given here (e.g., by formalising a type system amenable to static analysis), and to provide a formal definition of when a CPS is more secure than another, so as to be able to design, by progressive refinement, secure variants of a vulnerable CPS. We also aim to extend the behavioural theory of CCPSA by developing suitable probabilistic metrics to take into consideration the probability of a specific trace actually occurring.
We have already made some progress in this direction for a variant of CCPSA with no security features in it, by defining ad hoc compositional bisimulation metrics [41]. In this manner, we believe that our notion of impact might be refined by taking into account quantitative aspects of an attack, such as the probability of being successful when targeting a specific CPS. A first attempt in a (much) simpler IoT setting can be found in [40]. Finally, with respect to automatic approximations of the impact, while we have not yet fully investigated the problem, we believe that we can transform it into a "minimum problem". For instance, if the environment uses linear functions, then, by adapting techniques developed for linear hybrid automata (see, e.g., [3]), the set of all traces with length at most n (for a fixed n) can be characterised by a system of first-degree inequalities, so the measure of the impact could be translated into a linear programming problem.

Acknowledgements

We thank the anonymous reviewers for their insightful and careful reviews. Massimo Merro and Andrei Munteanu have been partially supported by the project "Dipartimenti di Eccellenza 2018–2022" funded by the Italian Ministry of Education, Universities and Research (MIUR).

References

[1] R. Akella, H. Tang, and B. M. McMillin. Analysis of information flow security in cyber-physical systems. International Journal of Critical Infrastructure Protection, 3(3–4):157–173, 2010.
[2] R. Alguliyev, Y. Imamverdiyev, and L. Sukhostat. Cyber-physical systems and their security issues. Computers in Industry, 100:212–223, 2018.
[3] R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine. The algorithmic analysis of hybrid systems. Theoretical Computer Science, 138(1):3–34, 1995.
[4] E. Bartocci, J. Deshmukh, A. Donzé, G. Fainekos, O. Maler, D. Ničković, and S.
Sankaranarayanan. Specification-Based Monitoring of Cyber-Physical Systems: A Survey on Theory, Tools and Applications. In Lectures on Runtime Verification — Introductory and Advanced Topics, LNCS 10457, pages 135–175. Springer, 2018.
[5] G. Behrmann, A. David, and K. G. Larsen. A Tutorial on Uppaal. In Formal Methods for the Design of Real-Time Systems, SFM-RT 2004, volume 3185 of Lecture Notes in Computer Science, pages 200–236. Springer, 2004.
[6] G. Behrmann, A. David, K. G. Larsen, J. Håkansson, P. Pettersson, W. Yi, and M. Hendriks. UPPAAL 4.0. In Quantitative Evaluation of Systems, pages 125–126. IEEE Computer Society, 2006.
[7] J. A. Bergstra and C. A. Middelburg. Process Algebra for Hybrid Systems. Theoretical Computer Science, 335(2–3):215–280, 2005.
[8] C. Bodei, S. Chessa, and L. Galletta. Measuring security in IoT communications. Theoretical Computer Science, pages 100–124, 2019.
[9] C. Bodei, P. Degano, G.-L. Ferrari, and L. Galletta. Tracing where IoT data are collected and aggregated. Logical Methods in Computer Science, 13(3:5):1–38, 2019.
[10] B. Bohrer and A. Platzer. A Hybrid, Dynamic Logic for Hybrid-Dynamic Information Flow. In ACM/IEEE Symposium on Logic in Computer Science, pages 115–124. ACM, 2018.
[11] M. Burmester, E. Magkos, and V. Chrissikopoulos. Modeling security in cyber-physical systems. International Journal of Critical Infrastructure Protection, 5(3–4):118–126, 2012.
[12] H. Chernoff. A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations. The Annals of Mathematical Statistics, 23(4):493–507, 1952.
[13] L. F. Cómbita, J. Giraldo, A. A. Cárdenas, and N. Quijano. Response and reconfiguration of cyber-physical control systems: A survey. In Colombian Conference on Automatic Control, pages 1–6. IEEE, 2015.
[14] P. J. L. Cuijpers and M. A. Reniers. Hybrid process algebra.
The Journal of Logic and Algebraic Programming, 62(2):191–245, 2005.
[15] A. David, K. G. Larsen, A. Legay, M. Mikučionis, and D. B. Poulsen. Uppaal SMC Tutorial. International Journal on Software Tools for Technology Transfer, 17(4):397–415, 2015.
[16] D. Dolev and A. C. Yao. On the security of public key protocols. IEEE Transactions on Information Theory, (2):198–208, 1983.
[17] R. Focardi and F. Martinelli. A Uniform Approach for the Definition of Security Properties. In Formal Methods, volume 1708 of Lecture Notes in Computer Science, pages 794–813. Springer, 1999.
[18] G. Frehse. PHAVer: Algorithmic Verification of Hybrid Systems Past HyTech. International Journal on Software Tools for Technology Transfer, 10(3):263–279, 2008.
[19] G. Frehse, C. Le Guernic, A. Donzé, S. Cotton, R. Ray, O. Lebeltel, R. Ripado, A. Girard, T. Dang, and O. Maler. SpaceEx: Scalable Verification of Hybrid Systems. In Computer Aided Verification, volume 6806 of Lecture Notes in Computer Science, pages 379–395. Springer, 2011.
[20] V. Galpin, L. Bortolussi, and J. Hillston. HYPE: Hybrid modelling by composition of flows. Formal Aspects of Computing, 25(4):503–541, 2013.
[21] B. Genge, I. Kiss, and P. Haller. A system dynamics approach for assessing the impact of cyber attacks on critical infrastructures. International Journal of Critical Infrastructure Protection, 10:3–17, 2015.
[22] J. Giraldo, E. Sarkar, A. A. Cárdenas, M. Maniatakos, and M. Kantarcioglu. Security and Privacy in Cyber-Physical Systems: A Survey of Surveys. IEEE Design & Test, 34(4):7–17, 2017.
[23] J. Giraldo, D. I. Urbina, A. A. Cárdenas, J. Valente, M. Faisal, J. Ruths, N. O. Tippenhauer, H. Sandberg, and R. Candell. A Survey of Physics-Based Attack Detection in Cyber-Physical Systems. ACM Computing Surveys (CSUR), 51(4):76:1–76:36, 2018.
[24] D. Gollmann, P. Gurikov, A. Isakov, M. Krotofil, J. Larsen, and A.
Winnicki. Cyber-Physical Systems Security: Experimental Analysis of a Vinyl Acetate Monomer Plant. In Proceedings of the 1st ACM Workshop on Cyber-Physical System Security, pages 1–12. ACM, 2015.
[25] D. Gollmann and M. Krotofil. Cyber-Physical Systems Security. In The New Codebreakers – Essays Dedicated to David Kahn on the Occasion of His 85th Birthday, volume 9100 of Lecture Notes in Computer Science, pages 195–204. Springer, 2016.
[26] A. Hartmanns and H. Hermanns. The Modest Toolset: An Integrated Environment for Quantitative Modelling and Verification. In Tools and Algorithms for the Construction and Analysis of Systems, volume 8413 of Lecture Notes in Computer Science, pages 593–598. Springer, 2014.
[27] M. Hennessy and T. Regan. A Process Algebra for Timed Systems. Information and Computation, 117(2):221–239, 1995.
[28] T. A. Henzinger, P.-H. Ho, and H. Wong-Toi. HYTECH: A Model Checker for Hybrid Systems. International Journal on Software Tools for Technology Transfer, 1(1–2):110–122, 1997.
[29] K. Huang, C. Zhou, Y.-C. Tian, S. Yang, and Y. Qin. Assessing the Physical Impact of Cyberattacks on Industrial Cyber-Physical Systems. IEEE Transactions on Industrial Electronics, 65(10):8153–8162, 2018.
[30] Y.-L. Huang, A. A. Cárdenas, S. Amin, Z.-S. Lin, H.-Y. Tsai, and S. Sastry. Understanding the physical and economic consequences of attacks on control systems. International Journal of Critical Infrastructure Protection, 2(3):73–83, 2009.
[31] ICS-CERT. Cyber-Attack Against Ukrainian Critical Infrastructure. https://ics-cert.us-cert.gov/alerts/IR-ALERT-H-16-056-01, 2015.
[32] M. Kanovich, T. B. Kirigin, V. Nigam, A. Scedrov, and C. Talcott. Discrete vs. Dense Times in the Analysis of Cyber-Physical Security Protocols. In Principles of Security and Trust, volume 9036 of Lecture Notes in Computer Science, pages 259–279. Springer, 2015.
[33] M.
Krotofil and A. A. Cárdenas. Resilience of Process Control Systems to Cyber-Physical Attacks. In NordSec 2013: Secure IT Systems, volume 8208 of Lecture Notes in Computer Science, pages 166–182. Springer, 2013.
[34] M. Krotofil, A. A. Cárdenas, J. Larsen, and D. Gollmann. Vulnerabilities of cyber-physical systems to stale data – Determining the optimal time to launch attacks. International Journal of Critical Infrastructure Protection, 7(4):213–232, 2014.
[35] D. Kushner. The real story of Stuxnet. IEEE Spectrum, 50(3):48–53, 2013.
[36] M. Z. Kwiatkowska, G. Norman, and D. Parker. PRISM 4.0: Verification of Probabilistic Real-Time Systems. In Computer Aided Verification, volume 6806 of Lecture Notes in Computer Science, pages 585–591. Springer, 2011.
[37] R. Lanotte and M. Merro. A Calculus of Cyber-Physical Systems. In Language and Automata Theory and Applications, volume 10168 of Lecture Notes in Computer Science, pages 115–127. Springer, 2017.
[38] R. Lanotte, M. Merro, and A. Munteanu. A Modest Security Analysis of Cyber-Physical Systems: A Case Study. In Formal Techniques for Distributed Objects, Components, and Systems, volume 10854 of Lecture Notes in Computer Science, pages 58–78. Springer, 2018.
[39] R. Lanotte, M. Merro, R. Muradore, and L. Viganò. A Formal Approach to Cyber-Physical Attacks. In Computer Security Foundations Symposium, pages 436–450. IEEE Computer Society, 2017.
[40] R. Lanotte, M. Merro, and S. Tini. Towards a Formal Notion of Impact Metric for Cyber-Physical Attacks. In Integrated Formal Methods, volume 11023 of Lecture Notes in Computer Science, pages 296–315. Springer, 2018.
[41] R. Lanotte, M. Merro, and S. Tini. A Probabilistic Calculus of Cyber-Physical Systems. Information and Computation, 2020.
[42] M. H. Manshaei, Q. Zhu, T. Alpcan, T. Bacşar, and J.-P. Hubaux. Game theory meets network security and privacy.
ACM Computing Surveys, 45(3):25, 2013.
[43] M. Merro, J. Kleist, and U. Nestmann. Mobile objects as mobile processes. Information and Computation, 177(2):195–241, 2002.
[44] J. Milošević, D. Umsonst, H. Sandberg, and K. H. Johansson. Quantifying the Impact of Cyber-Attack Strategies for Control Systems Equipped With an Anomaly Detector. In European Control Conference (ECC), pages 331–337. IEEE, 2018.
[45] A. F. Murillo Piedrahita, V. Gaur, J. Giraldo, A. A. Cárdenas, and S. J. Rueda. Virtual incident response functions in control systems. Computer Networks, 135:147–159, 2018.
[46] V. Nigam, C. Talcott, and A. A. Urquiza. Towards the Automated Verification of Cyber-Physical Security Protocols: Bounding the Number of Timed Intruders. In Computer Security – ESORICS 2016, volume 9879 of Lecture Notes in Computer Science, pages 450–470. Springer, 2016.
[47] P. C. Ölveczky and J. Meseguer. Semantics and pragmatics of Real-Time Maude. Higher-Order and Symbolic Computation, 20(1–2):161–196, 2007.
[48] A. Platzer. Logical Foundations of Cyber-Physical Systems. Springer, 2018.
[49] J.-D. Quesel, S. Mitsch, S. M. Loos, N. Aréchiga, and A. Platzer. How to model and prove hybrid systems with KeYmaera: a tutorial on safety. International Journal on Software Tools for Technology Transfer, 18(1):67–91, 2016.
[50] M. Rocchetto and N. O. Tippenhauer. CPDY: Extending the Dolev-Yao Attacker with Physical-Layer Interactions. In Formal Methods and Software Engineering, volume 10009 of Lecture Notes in Computer Science, pages 175–192. Springer, 2016.
[51] M. Rocchetto and N. O. Tippenhauer. On Attacker Models and Profiles for Cyber-Physical Systems. In Computer Security – ESORICS 2016, volume 9879 of Lecture Notes in Computer Science, pages 427–449. Springer, 2016.
[52] W. C. Rounds and H. Song. The φ-calculus: A Language for Distributed Control of Reconfigurable Embedded Systems.
In Hybrid Systems: Computation and Control, volume 2623 of Lecture Notes in Computer Science, pages 435–449. Springer, 2003.
[53] J. Slay and M. Miller. Lessons Learned from the Maroochy Water Breach. In Critical Infrastructure Protection, IFIP 253, pages 73–82. Springer, 2007.
[54] Swedish Civil Contingencies Agency. Guide to increased security in industrial information and control systems. 2014.
[55] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson. A secure control framework for resource-limited adversaries. Automatica, 51:135–148, 2015.
[56] U.S. Chemical Safety and Hazard Investigation Board, T2 Laboratories Inc. Reactive Chemical Explosion: Final Investigation Report. Report No. 2008-3-I-FL, 2009.
[57] D. A. van Beek, K. L. Man, M. A. Reniers, J. E. Rooda, and R. R. Schiffelers. Syntax and consistent equation semantics of hybrid Chi. The Journal of Logic and Algebraic Programming, 68(1–2):129–210, 2006.
[58] P. Vasilikos, F. Nielson, and H. Riis Nielson. Secure Information Release in Timed Automata. In Principles of Security and Trust, volume 10804 of Lecture Notes in Computer Science, pages 28–52. Springer, 2018.
[59] R. Vigo. The Cyber-Physical Attacker. In Computer Safety, Reliability, and Security, volume 7613 of Lecture Notes in Computer Science, pages 347–356. Springer, 2012.
[60] R. Vigo, F. Nielson, and H. Riis Nielson. Broadcast, Denial-of-Service, and Secure Communication. In Integrated Formal Methods, volume 7940 of Lecture Notes in Computer Science, pages 412–427. Springer, 2013.
[61] J. Wang and H. Yu. Analysis of the Composition of Non-Deducibility in Cyber-Physical Systems. Applied Mathematics & Information Sciences, 8:3137–3143, 2014.
[62] Y. Zacchia Lun, A. D'Innocenzo, I. Malavolta, and M. D. Di Benedetto. Cyber-Physical Systems Security: a Systematic Mapping Study. CoRR, abs/1605.09641, 2016.
[63] Y. Zacchia Lun, A. D'Innocenzo, F. Smarra, I.
Malavolta, and M. D. Di Benedetto. State of the art of cyber-physical systems security: An automatic control perspective. Journal of Systems and Software, 149:174–216, 2019.
[64] Q. Zhu and T. Basar. Game-Theoretic Methods for Robustness, Security, and Resilience of Cyberphysical Control Systems: Games-in-Games Principle for Optimal Cross-Layer Resilient Control Systems. IEEE Control Systems Magazine, 35(1):46–65, 2015.

A Proofs

A.1 Proofs of Section 2

As already stated in Remark 2, our trace preorder ⊑ is deadlock-sensitive. Formally:

Lemma 1. Let M and N be two CPSs in CCPSA such that M ⊑ N. Then, M satisfies its system invariant if and only if N satisfies its system invariant.

Proof. This is because CPSs that do not satisfy their invariant can only fire deadlock actions.

Proof of Theorem 1. We prove the three statements separately.

1. Let us prove that M ⊎ O --t--> M' ⊎ O' entails N ⊎ O ==t̂==> N' ⊎ O'. The proof is by induction on the length of the trace M ⊎ O --t--> M' ⊎ O'. As M ⊑ N, by an application of Lemma 1 it follows that either both M and N satisfy their respective invariants or they both do not. In the latter case, the result would be easy to prove, as the systems can only fire deadlock actions. Similarly, if the system invariant of O is not satisfied, then M ⊎ O and N ⊎ O can perform only deadlock actions, and again the result would follow easily. Thus, let us suppose that the system invariants of M, N and O are satisfied.

Base case. We suppose M = E1; S1 ⋈ P1, N = E2; S2 ⋈ P2, and O = E3; S3 ⋈ P3. We proceed by case analysis on why M ⊎ O --α--> M' ⊎ O', for some action α.

• α = c̄v. Suppose M ⊎ O --c̄v--> M' ⊎ O' is derived by an application of rule (Out).
We have two possible cases:
– either P1 ‖ P3 --c̄v--> P1 ‖ P3', because P3 --c̄v--> P3', for some P3', with O' = S3 ⋈ P3' and M' = M;
– or P1 ‖ P3 --c̄v--> P1' ‖ P3, because P1 --c̄v--> P1', for some P1', with M' = S1 ⋈ P1' and O' = O.
In the first case, by an application of rule (Par) we derive P2 ‖ P3 --c̄v--> P2 ‖ P3'. Since both system invariants of N and O are satisfied, we can derive the required trace N ⊎ O --c̄v--> N ⊎ O' by an application of rule (Out). In the second case, since P1 --c̄v--> P1' and the invariant of M is satisfied, by an application of rule (Out) we can derive M --c̄v--> M'. As M ⊑ N, there exists a weak trace N ==c̄v==> N', for some system N'. Thus, by several applications of rule (Par) we can easily derive N ⊎ O ==c̄v==> N' ⊎ O = N' ⊎ O', as required.

• α = cv. Suppose M ⊎ O --cv--> M' ⊎ O' is derived by an application of rule (Inp). This case is similar to the previous one.

• α = τ. Suppose M ⊎ O --τ--> M' ⊎ O' is derived by an application of rule (SensRead). We have two possible cases:
– either P1 ‖ P3 --s?v--> P1 ‖ P3' because P3 --s?v--> P3', for some P3', where P1 ‖ P3 cannot perform the sniffing action s!v (and hence neither can P3), M' = M and O' = S3 ⋈ P3';
– or P1 ‖ P3 --s?v--> P1' ‖ P3 because P1 --s?v--> P1', for some P1', where P1 ‖ P3 cannot perform the sniffing action s!v (and hence neither can P1), M' = S1 ⋈ P1' and O' = O.
In the first case, by an application of rule (Par) we derive P2 ‖ P3 --s?v--> P2 ‖ P3'. Moreover, since P3 cannot perform the sniffing action s!v and the sets of sensors are always disjoint, neither can P2 ‖ P3. Since both invariants of N and O are satisfied, we can derive N ⊎ O --τ--> N ⊎ O' by an application of rule (SensRead), as required. In the second case, since P1 --s?v--> P1' and the invariant of M is satisfied, by an application of rule (SensRead) we can derive M --τ--> M' with M' = S1 ⋈ P1'. As M ⊑ N, there exists a derivation N ==τ̂==> N', for some N'. Thus, we can derive the required trace N ⊎ O ==τ̂==> N' ⊎ O by several applications of rule (Par).

• α = τ. Suppose that M ⊎ O --τ--> M' ⊎ O' is derived by an application of rule (SensSniff). This case is similar to the previous one.

• α = τ. Suppose that M ⊎ O --τ--> M' ⊎ O' is derived by an application of rule (ActWrite). This case is similar to the case (SensRead).

• α = τ. Suppose that M ⊎ O --τ--> M' ⊎ O' is derived by an application of rule (ActIntegr). This case is similar to the case (SensSniff).

• α = τ. Suppose that M ⊎ O --τ--> M' ⊎ O' is derived by an application of rule (Tau). We have four possible cases:
– P1 ‖ P3 --τ--> P1' ‖ P3' by an application of rule (Com). We have two sub-cases: either P1 --c̄v--> P1' and P3 --cv--> P3', or P1 --cv--> P1' and P3 --c̄v--> P3', for some P1' and P3'. We prove the first case; the second one is similar. As the invariant of M is satisfied, by an application of rule (Out) we can derive M --c̄v--> M'. As M ⊑ N, there exists a trace N ==τ̂==> --c̄v--> ==τ̂==> N', for some N' = E2; S2' ⋈ P2'. As P3 --cv--> P3', by several applications of rule (Par) and one of rule (Com) we derive N ⊎ O ==τ̂==> N' ⊎ O', as required.
– P1 ‖ P3 --τ--> P1 ‖ P3' or P1 ‖ P3 --τ--> P1' ‖ P3, by an application of rule (Par). This case is easy.
– P1 ‖ P3 --τ--> P1' ‖ P3' by an application of either rule (ActDrop) or rule (SensIntegr). This case does not apply, as the sets of actuators of M and O are disjoint.
– P1 ‖ P3 --τ--> P1' ‖ P3' by the application of one rule among (Res), (Rec), (Then) and (Else). This case does not apply to parallel processes.
• α = deadlock. Suppose that M ⊎ O --deadlock--> M' ⊎ O' is derived by an application of rule (Deadlock). This case is not admissible, as the invariants of M, N and O are satisfied.

• α = tick. Suppose that M ⊎ O --tick--> M' ⊎ O' is derived by an application of rule (Time). This implies P1 ‖ P3 --tick--> P1' ‖ P3', for some P1' and P3', with M' = E1; S1' ⋈ P1' and O' = E3; S3' ⋈ P3', where S1' ∈ next(E1; S1) and S3' ∈ next(E3; S3). As P1 ‖ P3 --tick--> P1' ‖ P3' can only be derived by an application of rule (TimePar), it follows that P1 --tick--> P1' and P3 --tick--> P3'. Since the invariant of M is satisfied, by an application of rule (Time) we can derive M --tick--> M' with M' = E1; S1' ⋈ P1'. As M ⊑ N, there exists a derivation N ==τ̂==> N'' --tick--> N''' ==τ̂==> N', for some N' = E2; S2' ⋈ P2', N'' = E2; S2'' ⋈ P2'', and N''' = E2; S2''' ⋈ P2''', with S2''' ∈ next(E2; S2''). By several applications of rule (Par) we can derive that N ⊎ O ==τ̂==> N'' ⊎ O and N''' ⊎ O' ==τ̂==> N' ⊎ O'. In order to conclude the proof, it is sufficient to prove N'' ⊎ O --tick--> N''' ⊎ O'. By the definition of rule (Time), from N'' --tick--> N''' it follows that P2'' --tick--> P2'''. As P3 --tick--> P3', by an application of rule (TimePar) it follows that P2'' ‖ P3 --tick--> P2''' ‖ P3'. Since S2''' ∈ next(E2; S2'') and S3' ∈ next(E3; S3), we can derive that S2''' ⊎ S3' ∈ next(E2; S2'') ∪ next(E3; S3). By an application of rule (Time) we have N'' ⊎ O --tick--> N''' ⊎ O' and hence N ⊎ O ==tick==> N' ⊎ O', as required.

• α = unsafe. Suppose that M ⊎ O --unsafe--> M' ⊎ O' is derived by an application of rule (Safety).
This is similar to the case α = c̄v, considering the fact that ξx ∉ safe implies ξx ∪ ξx' ∉ safe ∪ safe', for any ξx' and any safe'.

Inductive case. We have to prove that M ⊎ O = M0 ⊎ O0 --α1--> ··· --αn--> Mn ⊎ On implies N ⊎ O = N0 ⊎ O0 ==α̂1==> ··· ==α̂n==> Nn ⊎ On. We can use the inductive hypothesis to easily deal with the first n−1 actions and resort to the base case to handle the n-th action.

2. We have to prove that M ⊑ N implies M ‖ P ⊑ N ‖ P, for any pure-logical process P. This is a special case of (1), as M ‖ P = M ⊎ (∅; ∅ ⋈ P) and N ‖ P = N ⊎ (∅; ∅ ⋈ P), where ∅; ∅ ⋈ P is a CPS with no physical process in it, only logics.

3. We have to prove that M ⊑ N implies M\c ⊑ N\c, for any channel c. For any derivation M\c --t--> M'\c we can easily derive that M --t--> M' with c not occurring in t. Since M ⊑ N, it follows that N ==t̂==> N', for some N'. Since c does not appear in t, we can easily derive that N\c ==t̂==> N'\c, as required.

In order to prove Theorem 2, we adapt to CCPSA two standard lemmata used in process calculi theory to compose and decompose the actions performed by a compound system.

Lemma 2 (Decomposing system actions). Let M and N be two CPSs in CCPSA. Then:
• if M ⊎ N --tick--> M' ⊎ N', for some M' and N', then M --tick--> M' and N --tick--> N';
• if M ⊎ N --deadlock--> M ⊎ N, then M --deadlock--> M or N --deadlock--> N;
• if M ⊎ N --τ--> M' ⊎ N', for some M' and N', due to a channel synchronisation between M and N, then either M --c̄v--> M' and N --cv--> N', or M --cv--> M' and N --c̄v--> N', for some channel c;
• if M ⊎ N --α--> M' ⊎ N', for some M' and N', α ≠ tick, not due to a channel synchronisation between M and N, then either M --α--> M' and N = N', or N --α--> N' and M = M'.
Lemma 3 (Composing system actions). Let M and N be two CPSs of CCPSA. Then:
• if M −tick→ M′ and N −tick→ N′, for some M′ and N′, then M ⊎ N −tick→ M′ ⊎ N′;
• if N cannot perform a deadlock-action and M −α→ M′, for some M′ and α ≠ tick, then M ⊎ N −α→ M′ ⊎ N and N ⊎ M −α→ N ⊎ M′.

Proof of Theorem 2. Here we prove case (1) of the theorem; the proofs of cases (2) and (3) are similar to the corresponding ones of Theorem 1. We prove that M ⊑_{m..n} N implies that there are m′, n′ ∈ ℕ⁺ ∪ {∞}, with m′..n′ ⊆ m..n, such that M ⊎ O ⊑_{m′..n′} N ⊎ O. We prove separately that m′ ≥ m and n′ ≤ n.

• m′ ≥ m. We recall that m, m′ ∈ ℕ⁺. If m = 1, then we trivially have m′ ≥ 1 = m. Otherwise, since m is the minimum integer for which there is a trace t, with #tick(t) = m − 1, such that M −t→ and N cannot perform =t̂=⇒, then for any trace t with #tick(t) < m − 1 and such that M −t→, it holds that N =t̂=⇒. As done in the proof of case (1) of Theorem 1, we can derive that for any trace t with #tick(t) < m − 1 and such that M ⊎ O −t→, it holds that N ⊎ O =t̂=⇒. This implies the required condition m′ ≥ m.

• n′ ≤ n. We recall that n is the infimum element of ℕ⁺ ∪ {∞}, n ≥ m, such that whenever M −t₁→ M′, with #tick(t₁) = n − 1, there is t₂, with #tick(t₁) = #tick(t₂), such that N −t₂→ N′, for some N′, and M′ ⊑ N′. Now, if M ⊎ O −t→ M′ ⊎ O′, with #tick(t) = n − 1, by Lemma 2 we can split the trace t by extracting the actions performed by M and those performed by O. Thus, there exist two traces M −t₁→ M′ and O −t₃→ O′, with #tick(t₁) = #tick(t₃) = n − 1, whose combination generated the trace M ⊎ O −t→ M′ ⊎ O′.
As M ⊑_{m..n} N, from M −t₁→ M′ we know that there is a trace t₂, with #tick(t₁) = #tick(t₂), such that N −t₂→ N′, for some N′, and M′ ⊑ N′. Since N −t₂→ N′ and O −t₃→ O′, by an application of Lemma 3 we can build a trace N ⊎ O −t′→ N′ ⊎ O′, for some t′ such that #tick(t) = #tick(t′) = n − 1. As M′ ⊑ N′, by Theorem 1 we can derive that M′ ⊎ O′ ⊑ N′ ⊎ O′. This implies that n′ ≤ n.

A.2 Proofs of Section 3

In order to prove Proposition 1 and Proposition 2, we use the following lemma, which formalises the invariant properties binding the state variable temp to the activity of the cooling system. Intuitively, when the cooling system is inactive, the value of the state variable temp lies in the real interval [0, 11.5]. Furthermore, if the coolant is not active and the variable temp lies in the real interval (10.1, 11.5], then the cooling will be turned on in the next time slot. Finally, when active, the cooling system will remain so for k ∈ 1..5 time slots (counting also the current time slot), with the variable temp lying in the real interval (9.9 − k∗(1+δ), 11.5 − k∗(1−δ)].

Lemma 4. Let Sys be the system defined in Section 3. Let
Sys = Sys₁ −t₁→ −tick→ Sys₂ −t₂→ −tick→ ⋯ −tₙ₋₁→ −tick→ Sysₙ
such that the traces tⱼ contain no tick-actions, for any j ∈ 1..n−1, and, for any i ∈ 1..n, Sysᵢ = Sᵢ ⋈ Pᵢ with Sᵢ = ⟨ξⁱₓ, ξⁱₛ, ξⁱₐ⟩. Then, for any i ∈ 1..n−1, we have the following:

1. if ξⁱₐ(cool) = off then ξⁱₓ(temp) ∈ [0, 11.1 + δ], with ξⁱₓ(stress) = 0 if ξⁱₓ(temp) ∈ [0, 10.9 + δ], and ξⁱₓ(stress) = 1 otherwise;

2. if ξⁱₐ(cool) = off and ξⁱₓ(temp) ∈ (10.1, 11.1 + δ] then, in the next time slot, ξⁱ⁺¹ₐ(cool) = on and ξⁱ⁺¹ₓ(stress) ∈ 1..2;

3.
if ξⁱₐ(cool) = on then ξⁱₓ(temp) ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)], for some k ∈ 1..5 such that ξⁱ⁻ᵏₐ(cool) = off and ξⁱ⁻ʲₐ(cool) = on, for j ∈ 0..k−1; moreover, if k ∈ 1..3 then ξⁱₓ(stress) ∈ 1..k+1, and otherwise ξⁱₓ(stress) = 0.

Proof. Let us write vᵢ and sᵢ to denote the values of the state variables temp and stress, respectively, in the system Sysᵢ, i.e., ξⁱₓ(temp) = vᵢ and ξⁱₓ(stress) = sᵢ. Moreover, we say that the coolant is active (resp., not active) in Sysᵢ if ξⁱₐ(cool) = on (resp., ξⁱₐ(cool) = off).

The proof is by mathematical induction on n, i.e., the number of tick-actions of our traces. The base case n = 1 follows directly from the definition of Sys. Let us prove the inductive case. We assume that the three statements hold for n − 1 and prove that they also hold for n.

1. Let us assume that the cooling is not active in Sysₙ. In this case, we prove that vₙ ∈ [0, 11.1 + δ], with sₙ = 0 if vₙ ∈ [0, 10.9 + δ], and sₙ = 1 otherwise. We consider separately the cases in which the coolant is or is not active in Sysₙ₋₁.

• Suppose the coolant is not active in Sysₙ₋₁ (and not active in Sysₙ). By the induction hypothesis we have vₙ₋₁ ∈ [0, 11.1 + δ], with sₙ₋₁ = 0 if vₙ₋₁ ∈ [0, 10.9 + δ], and sₙ₋₁ = 1 otherwise. Furthermore, if vₙ₋₁ ∈ (10.1, 11.1 + δ] then, by the induction hypothesis, the coolant must be active in Sysₙ. Since we know that in Sysₙ the cooling is not active, it follows that vₙ₋₁ ∈ [0, 10.1] and sₙ₋₁ = 0. Furthermore, in Sysₙ the temperature will increase by a value lying in the real interval [1−δ, 1+δ] = [0.6, 1.4]. Thus, vₙ will be in [0.6, 11.1 + δ] ⊆ [0, 11.1 + δ]. Moreover, if vₙ₋₁ ∈ [0, 9.9], then the state variable stress is not incremented, and hence sₙ = 0, with vₙ ∈ [0 + 1 − δ, 9.9 + 1 + δ] = [0.6, 10.9 + δ] ⊆ [0, 10.9 + δ]. Otherwise, if vₙ₋₁ ∈ (9.9, 10.1], then the state variable stress is incremented, and hence sₙ = 1.

• Suppose the coolant is active in Sysₙ₋₁ (and not active in Sysₙ). By the induction hypothesis, vₙ₋₁ ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)], for some k ∈ 1..5 such that the coolant is not active in Sysₙ₋₁₋ₖ and is active in Sysₙ₋ₖ, …, Sysₙ₋₁. The case k ∈ {1, …, 4} is not admissible: if k ∈ {1, …, 4}, then the coolant would be active for less than 5 tick-actions, whereas we know that the coolant is not active in Sysₙ. Hence, it must be k = 5. Since δ = 0.4 and k = 5, it holds that vₙ₋₁ ∈ (9.9 − 5∗1.4, 11.1 + 0.4 − 5∗0.6] = (2.9, 8.5] and sₙ₋₁ = 0. Moreover, since the coolant has been active for 5 time slots, in Sysₙ₋₁ the controller and the IDS synchronise via channel sync, and hence the IDS checks the temperature. Since vₙ₋₁ ∈ (2.9, 8.5], the IDS process sends to the controller a command to stop the cooling, and the controller switches off the cooling system. Thus, in the next time slot, the temperature will increase by a value lying in the real interval [1−δ, 1+δ] = [0.6, 1.4]. As a consequence, in Sysₙ we will have vₙ ∈ (2.9 + 0.6, 8.5 + 1.4] = (3.5, 9.9] ⊆ [0, 11.1 + δ]. Moreover, since vₙ₋₁ ∈ (2.9, 8.5] and sₙ₋₁ = 0, we derive that the state variable stress is not increased, and hence sₙ = 0, with vₙ ∈ (3.5, 9.9] ⊆ [0, 10.9 + δ].

2. Let us assume that the coolant is not active in Sysₙ and vₙ ∈ (10.1, 11.1 + δ]; we prove that the coolant is active in Sysₙ₊₁, with sₙ₊₁ ∈ 1..2. Since the coolant is not active in Sysₙ, the controller will check the temperature before the next time slot.
Since vₙ ∈ (10.1, 11.1 + δ] and ε = 0.1, the process Ctrl will sense a temperature greater than 10 and the coolant will be turned on. Thus, the coolant will be active in Sysₙ₊₁. Moreover, since vₙ ∈ (10.1, 11.1 + δ] and sₙ may be either 0 or 1, the state variable stress is increased, and therefore sₙ₊₁ ∈ 1..2.

3. Let us assume that the coolant is active in Sysₙ; we prove that vₙ ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)], for some k ∈ 1..5 such that the coolant is not active in Sysₙ₋ₖ and active in Sysₙ₋ₖ₊₁, …, Sysₙ. Moreover, we have to prove that if k ≤ 3 then sₙ ∈ 1..k+1, and otherwise, if k > 3, then sₙ = 0.

We prove the first statement, i.e., that vₙ ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)], for some k ∈ 1..5, with the coolant not active in Sysₙ₋ₖ and active in the systems Sysₙ₋ₖ₊₁, …, Sysₙ. We separate the case in which the coolant is active in Sysₙ₋₁ from that in which it is not.

• Suppose the coolant is not active in Sysₙ₋₁ (and active in Sysₙ). In this case k = 1, as the coolant is not active in Sysₙ₋₁ and is active in Sysₙ. Since k = 1, we have to prove vₙ ∈ (9.9 − (1+δ), 11.1 + δ − (1−δ)]. Since the coolant is not active in Sysₙ₋₁ but active in Sysₙ, the coolant has been switched on in Sysₙ₋₁ because the sensed temperature was above 10 (since ε = 0.1, this may happen only if vₙ₋₁ > 9.9). By the induction hypothesis, since the coolant is not active in Sysₙ₋₁, we have vₙ₋₁ ∈ [0, 11.1 + δ]. Therefore, from vₙ₋₁ > 9.9 and vₙ₋₁ ∈ [0, 11.1 + δ] it follows that vₙ₋₁ ∈ (9.9, 11.1 + δ]. Furthermore, since the coolant is active in Sysₙ, the temperature will decrease by a value in [1−δ, 1+δ], and therefore vₙ ∈ (9.9 − (1+δ), 11.1 + δ − (1−δ)], which concludes this case of the proof.

• Suppose the coolant is active in Sysₙ₋₁ (and active in Sysₙ as well). By the induction hypothesis, there is h ∈ 1..5 such that vₙ₋₁ ∈ (9.9 − h∗(1+δ), 11.1 + δ − h∗(1−δ)] and the coolant is not active in Sysₙ₋₁₋ₕ and is active in Sysₙ₋ₕ, …, Sysₙ₋₁. The case h = 5 is not admissible. In fact, since δ = 0.4, if h = 5 then vₙ₋₁ ∈ (9.9 − 5∗1.4, 11.1 + δ − 5∗0.6] = (2.9, 8.5]. Furthermore, since the cooling system has been active for 5 time slots, in Sysₙ₋₁ the controller and the IDS synchronise via channel sync, and the IDS checks the received temperature. As vₙ₋₁ ∈ (2.9, 8.5], the IDS sends to the controller, via channel ins, the command stop. This implies that the controller would turn off the cooling system, in contradiction with the hypothesis that the coolant is active in Sysₙ. Hence, it must be h ∈ 1..4. Let us prove that for k = h + 1 we obtain our result, namely that, for k = h + 1, (i) vₙ ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)], and (ii) the coolant is not active in Sysₙ₋ₖ and active in Sysₙ₋ₖ₊₁, …, Sysₙ. Let us prove statement (i). By the induction hypothesis, it holds that vₙ₋₁ ∈ (9.9 − h∗(1+δ), 11.1 + δ − h∗(1−δ)]. Since the coolant is active in Sysₙ, the temperature will decrease by a value in [1−δ, 1+δ]. Hence, vₙ ∈ (9.9 − (h+1)∗(1+δ), 11.1 + δ − (h+1)∗(1−δ)]. Therefore, since k = h + 1, we have vₙ ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)]. Let us prove statement (ii). By the induction hypothesis, the coolant is not active in Sysₙ₋₁₋ₕ and is active in Sysₙ₋ₕ, …, Sysₙ₋₁. Now, since the coolant is active in Sysₙ, for k = h + 1 we have that the coolant is not active in Sysₙ₋ₖ and is active in Sysₙ₋ₖ₊₁, …, Sysₙ, which concludes this case of the proof.

Thus, we have proved that vₙ ∈ (9.9 − k∗(1+δ), 11.1 + δ − k∗(1−δ)], for some k ∈ 1..5; moreover, the coolant is not active in Sysₙ₋ₖ and active in the systems Sysₙ₋ₖ₊₁, …, Sysₙ. It remains to prove that sₙ ∈ 1..k+1 if k ≤ 3, and sₙ = 0 otherwise. By the inductive hypothesis, since the coolant is not active in Sysₙ₋ₖ, we have sₙ₋ₖ ∈ 0..1. Now, for k ∈ 1..2, the temperature may be greater than 9.9; hence, whether the state variable stress is increased or reset, we have sₙ ∈ 1..k+1, for k ∈ 1..3. Moreover, since for k ∈ 3..5 the temperature is below 9.9, it follows that sₙ = 0 for k > 3.

Proof of Proposition 1. Since δ = 0.4, by Lemma 4 the value of the state variable temp is always in the real interval [0, 11.5]. As a consequence, the invariant of the system is never violated and the system never deadlocks. Moreover, after 5 time units of cooling, the state variable temp is always in the real interval (9.9 − 5∗1.4, 11.1 + 0.4 − 5∗0.6] = (2.9, 8.5]. Hence, the process IDS will never transmit on the channel alarm. Finally, by Lemma 4, the maximum value reached by the state variable stress is 4, and therefore the system does not reach unsafe states.

Proof of Proposition 2. Let us prove the two statements separately.

• Since ε = 0.1, if process Ctrl senses a temperature above 10 (and hence Sys turns on the cooling), then the value of the state variable temp is greater than 9.9. By Lemma 4, the value of the state variable temp is always less than or equal to 11.1 + δ. Therefore, if Ctrl senses a temperature above 10, then the value of the state variable temp is in (9.9, 11.1 + δ].

• By Lemma 4 (third item), the coolant can be active for no more than 5 time slots.
Hence, by Lemma 4, when Sys turns off the cooling system, the state variable temp ranges over (9.9 − 5∗(1+δ), 11.1 + δ − 5∗(1−δ)].

A.3 Proofs of Section 4

Proof of Proposition 3. We distinguish two cases, depending on m.

• Let m ≤ 8. We recall that the cooling system is activated only when the sensed temperature is above 10. Since ε = 0.1, when this happens the state variable temp must be at least 9.9. Note that after m − 1 ≤ 7 tick-actions, when the attack tries to interact with the controller of the actuator cool, the variable temp may reach at most 7∗(1+δ) = 7∗1.4 = 9.8 degrees. Thus, the cooling system will not be activated and the attack will have no effect.

• Let m > 8. By Proposition 1, the system Sys in isolation may never deadlock, does not get into an unsafe state, and may never emit an output on channel alarm. Thus, any execution trace of the system Sys consists of a sequence of τ-actions and tick-actions. In order to prove the statement, it is enough to show the following four facts:
– the system Sys ∥ Aₘ may not deadlock in the first m + 3 time slots;
– the system Sys ∥ Aₘ may not emit any output in the first m + 3 time slots;
– the system Sys ∥ Aₘ may not enter an unsafe state in the first m + 3 time slots;
– the system Sys ∥ Aₘ has a trace reaching an unsafe state from the (m+4)-th time slot on, and until the invariant gets violated and the system deadlocks.

The first three facts are easy to show, as the attack may steal the command addressed to the actuator cool only in the m-th time slot. Thus, until time slot m, the whole system behaves correctly. In particular, by Proposition 1 and Proposition 2, no alarms, deadlocks or violations of safety conditions occur, and the temperature lies in the expected ranges. Any of those three events requires at least 4 further time slots to occur.
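The slot arithmetic used in this proof can be checked mechanically. The following Python sketch, ours rather than part of the formal development, recomputes the relevant bounds under the stated parameters (δ = 0.4, temperature increments in [1−δ, 1+δ] per time slot):

```python
import math

DELTA = 0.4  # uncertainty on the temperature increment per time slot

# Before the 9-th time slot the temperature cannot trigger the cooling:
# after m - 1 <= 7 tick-actions it is at most 7 * (1 + DELTA) degrees.
max_temp_after_7_ticks = 7 * (1 + DELTA)
assert round(max_temp_after_7_ticks, 6) == 9.8
assert max_temp_after_7_ticks < 9.9  # below the activation threshold

# Fastest / slowest number of time slots needed to reach 10.1 degrees,
# with increments ranging over [1 - DELTA, 1 + DELTA] per slot.
fastest = math.ceil(10.1 / (1 + DELTA))  # increments of 1.4 degrees
slowest = math.ceil(10.1 / (1 - DELTA))  # increments of 0.6 degrees
assert (fastest, slowest) == (8, 17)
```

This confirms both the harmlessness of an attack scheduled at a slot m ≤ 8 and the window 9 ≤ n ≤ 18 of slots at which the value 10.1 is first reachable.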
Indeed, by Lemma 4, when the cooling is switched on in the time slot m, the variable stress might be equal to 2, and hence the system cannot enter an unsafe state in the first m + 3 time slots. Moreover, an alarm or a deadlock needs more than 3 time slots, and hence neither can occur in the first m + 3 time slots.

Let us show the fourth fact, i.e., that there is a trace where the system Sys ∥ Aₘ enters an unsafe state starting from the (m+4)-th time slot and until the invariant gets violated. Firstly, we prove that for all time slots n, with 9 ≤ n < m, there is a trace of the system Sys ∥ Aₘ in which the state variable temp reaches the value 10.1 in the time slot n. The fastest trace reaching the temperature of 10.1 degrees requires ⌈10.1/(1+δ)⌉ = ⌈10.1/1.4⌉ = 8 time units, whereas the slowest one requires ⌈10.1/(1−δ)⌉ = ⌈10.1/0.6⌉ = 17 time units. Thus, for any time slot n, with 9 ≤ n ≤ 18, there is a trace of the system where the value of the state variable temp is 10.1. Now, for any of those time slots n there is a trace in which the state variable temp is equal to 10.1 in all time slots n + 10i < m, with i ∈ ℕ. Indeed, when the variable temp is equal to 10.1 the cooling might be activated; thus, there is a trace in which the cooling system is activated. We can always assume that during the cooling the temperature decreases by 1 + δ degrees per time unit, reaching at the end of the cooling cycle the value 10.1 − 5∗(1+δ) = 3.1. This entails that the trace may continue with 5 time slots in which the variable temp is increased by 1 + δ degrees per time unit, reaching again the value 10.1. Thus, for all time slots n, with 9 ≤ n < m, there is a trace of the system Sys ∥ Aₘ in which the state variable temp is 10.1 in n. As a consequence, we can suppose that in the (m−1)-th time slot there is a trace in which the value of the variable temp is 10.1. Since ε = 0.1, the sensed temperature lies in the real interval [10, 10.2].

Let us focus on the trace in which the sensed temperature is 10 and the cooling system is not activated. In this case, in the m-th time slot the system may reach a temperature of 10.1 + (1+δ) = 11.5 degrees, and the variable stress is 1. The process Ctrl will sense a temperature above 10, sending the command cool!on to the actuator cool. Now, since the attack Aₘ is active in that time slot (m > 8), the command will be stolen by the attack and will never reach the actuator. Without that dose of coolant, the temperature of the system will continue to grow. As a consequence, after 4 further time units, i.e., in the (m+4)-th time slot, the value of the state variable stress may be 5 and the system enters an unsafe state. After one more time slot, in the time slot m + 5, the controller and the IDS synchronise via channel sync, the IDS will detect a temperature above 10, and it will fire the output on channel alarm, telling process Ctrl to keep cooling. But Ctrl will not send the command cool!on again. Hence, the temperature continues to increase, and the system remains in an unsafe state while the process IDS keeps sending alarms until the invariant of the environment gets violated.

Proof of Proposition 4. As for Proposition 3, we can prove that there is a trace of the system Sys ∥ Aₘ in which the state variable temp reaches a value greater than 9.9 in the time slot m, for all m > 8. The process Ctrl never activates the cooling component, as it will always detect a temperature below 10. Hence, after 5 further tick-actions (in the (m+5)-th time slot), the system will violate the safety conditions, emitting an unsafe action.
So, since the process Ctrl never activates the cooling, the temperature will increase until the invariant is violated and the system deadlocks. Thus, Sys ∥ Aₘ ⊑_{m+5..∞} Sys.

In order to prove Proposition 5, we introduce Lemma 5. This is a variant of Lemma 4 in which the CPS Sys runs in parallel with the attack Aₙ defined in Example 3. Here, due to the presence of the attack, the temperature is 4 degrees higher when compared to the system Sys in isolation.

Lemma 5. Let Sys be the system defined in Section 3 and Aₙ be the attack of Example 3. Let
Sys ∥ Aₙ = Sys₁ −t₁→ −tick→ ⋯ Sysₙ₋₁ −tₙ₋₁→ −tick→ Sysₙ
such that the traces tⱼ contain no tick-actions, for any j ∈ 1..n−1, and, for any i ∈ 1..n, Sysᵢ = Sᵢ ⋈ Pᵢ with Sᵢ = ⟨ξⁱₓ, ξⁱₛ, ξⁱₐ⟩. Then, for any i ∈ 1..n−1, we have the following:
• if ξⁱₐ(cool) = off, then ξⁱₓ(temp) ∈ [0, 11.1 + 4 + δ];
• if ξⁱₐ(cool) = off and ξⁱₓ(temp) ∈ (10.1 + 4, 11.1 + 4 + δ], then ξⁱ⁺¹ₐ(cool) = on;
• if ξⁱₐ(cool) = on, then ξⁱₓ(temp) ∈ (9.9 + 4 − k∗(1+δ), 11.1 + 4 + δ − k∗(1−δ)], for some k ∈ 1..5 such that ξⁱ⁻ᵏₐ(cool) = off and ξⁱ⁻ʲₐ(cool) = on, for j ∈ 0..k−1.

Proof. Similar to the proof of Lemma 4.

Now, everything is in place to prove Proposition 5.

Proof of Proposition 5. Let us proceed by case analysis.

• Let 0 ≤ n ≤ 8. In the proof of Proposition 3, we remarked that the system Sys in isolation may sense a temperature greater than 10 only after 8 tick-actions, i.e., in the 9-th time slot. However, the lifetime of the attack is n ≤ 8, and in the 9-th time slot the attack has already terminated. As a consequence, starting from the 9-th time slot, the system will correctly sense the temperature and correctly activate the cooling system.

• Let n = 9.
The maximum value that may be reached by the state variable temp after 8 tick-actions, i.e., in the 9-th time slot, is 8∗(1+δ) = 8∗1.4 = 11.2. However, since in the 9-th time slot the attack is still alive, the process Ctrl will sense a temperature below 10, the system will move to the next time slot, and the state variable stress is incremented. Then, in the 10-th time slot, when the attack has already terminated, the maximum temperature the system may reach is 11.2 + (1+δ) = 12.6 degrees, and the state variable stress is equal to 1. Thus, the process Ctrl will sense a temperature greater than 10, activating the cooling system and incrementing the state variable stress. As a consequence, during the following 4 time units of cooling, the value of the state variable temp will be at most 12.6 − 4∗(1−δ) = 12.6 − 4∗0.6 = 10.2, and hence, in the 14-th time slot, the value of the state variable stress is 5 and the system will enter an unsafe state. In the 15-th time slot, the value of the state variable stress is still equal to 5 and the system is still in an unsafe state. However, the value of the state variable temp will be at most 12.6 − 5∗(1−δ) = 12.6 − 5∗0.6 = 9.6, which will be sensed by process IDS as at most 9.7 (sensor error ε = 0.1). As a consequence, no alarm will be turned on and the variable stress will be reset. Moreover, the invariant is obviously always preserved. As in the current time slot the attack has already terminated, from this point in time on the system will behave correctly, with neither deadlocks nor alarms.

• Let n ≥ 10.
In order to prove that Sys ∥ Aₙ ⊑_{14..n+11} Sys, it is enough to show that:
– the system Sys ∥ Aₙ does not deadlock;
– the system Sys ∥ Aₙ may not emit any output in the first 13 time slots;
– there is a trace in which the system Sys ∥ Aₙ enters an unsafe state in the 14-th time slot;
– there is a trace in which the system Sys ∥ Aₙ is in an unsafe state in the (n+11)-th time slot;
– the system Sys ∥ Aₙ does not have any execution trace emitting an output along channel alarm or entering an unsafe state after the (n+11)-th time slot.

As regards the first fact, since δ = 0.4, by Lemma 5 the temperature of the system under attack will always remain in the real interval [0, 15.5]. Thus, the invariant is never violated and the trace of the system under attack cannot contain any deadlock-action. Moreover, when the attack terminates, if the temperature is in [0, 9.9], the system will continue its behaviour correctly, as in isolation. Otherwise, since the temperature is at most 15.5, after a possible sequence of cooling cycles the temperature will reach a value in the interval [0, 9.9], and again the system will continue its behaviour correctly, as in isolation.

Concerning the second and the third facts, the proof is analogous to that of the case n = 9.

Concerning the fourth fact, firstly we prove that for all time slots m, with 9 < m ≤ n, there is a trace of the system Sys ∥ Aₙ in which the state variable temp reaches the value 14 in the time slot m. Since the attack is alive at that time, and ε = 0.1, when the variable temp is equal to 14 the sensed temperature lies in the real interval [9.9, 10.1]. The fastest trace reaching the temperature of 14 degrees requires ⌈14/(1+δ)⌉ = ⌈14/1.4⌉ = 10 time units, whereas the slowest one requires ⌈14/(1−δ)⌉ = ⌈14/0.6⌉ = 24 time units.
Thus, for any time slot m, with 9 < m ≤ 24, there is a trace of the system where the value of the state variable temp is 14. Now, for any of those time slots m there is a trace in which the state variable temp is equal to 14 in all time slots m + 10i < n, with i ∈ ℕ. As already said, when the variable temp is equal to 14 the sensed temperature lies in the real interval [9.9, 10.1] and the cooling might be activated; thus, there is a trace in which the cooling system is activated. We can always find a trace where, during the cooling, the temperature decreases by 1 + δ degrees per time unit, reaching at the end of the cooling cycle the value 14 − 5∗(1+δ) = 7. Thus, the trace may continue with 5 time slots in which the variable temp is increased by 1 + δ degrees per time unit, reaching again the value 14. Thus, for all time slots m, with 9 < m ≤ n, there is a trace of the system Sys ∥ Aₙ in which the state variable temp has value 14 in the time slot m. Therefore, we can suppose that in the n-th time slot the variable temp is equal to 14 and, since the maximum increment of temperature is 1.4, the variable stress is at least equal to 1. Since the attack is alive and ε = 0.1, in the n-th time slot the sensed temperature lies in [9.9, 10.1]. We consider the case in which the sensed temperature is less than 10, and hence the cooling is not activated. Thus, in the (n+1)-th time slot the system may reach a temperature of 14 + 1 + δ = 15.4 degrees, the process Ctrl will sense a temperature above 10, and it will activate the cooling system. In this case, the variable stress will be increased. As a consequence, after 5 further time units of cooling, i.e., in the (n+6)-th time slot, the value of the state variable temp may reach 15.4 − 5∗(1−δ) = 12.4, the alarm will be fired, and the variable stress will still be equal to 5.
After 4 more time units, in the (n+10)-th time slot, the state variable temp may reach 12.4 − 4∗(1−δ) = 10, the variable stress will still be equal to 5, and the system will be in an unsafe state. So, in the (n+11)-th time slot, stress will still be equal to 5 and the system will be in an unsafe state.

Concerning the fifth fact, by Lemma 5, in the (n+1)-th time slot the attack will have terminated and the system may reach a temperature that is, in the worst case, at most 15.5. Thus, the cooling system may be activated and the variable stress will be increased. As a consequence, in the (n+11)-th time slot, the value of the state variable temp may be at most 15.5 − 10∗(1−δ) = 15.5 − 10∗0.6 = 9.5 and the variable stress will be reset to 0. Thus, after the (n+11)-th time slot, the system will behave correctly, as in isolation.

In order to prove Theorem 3, we introduce the following lemma.

Lemma 6. Let M be an honest and sound CPS, C an arbitrary class of attacks, and A an attack of a class C′ ⊆ C. Whenever M ∥ A −t→ M′ ∥ A′, then M ∥ Top(C) =t̂=⇒ M′ ∥ ∏_{ι∈I} Att(ι, #tick(t)+1, C(ι)).

Proof. Let us define Top_h(C) as the attack process ∏_{ι∈I} Att(ι, h, C(ι)); then Top₁(C) = Top(C). The proof is by mathematical induction on the length k of the trace t.

Base case: k = 1. This means t = α, for some action α. We proceed by case analysis on α.

• α = c̄v. As the attacker A does not use communication channels, from M ∥ A −c̄v→ M′ ∥ A′ we can derive that A = A′ and M −c̄v→ M′. Thus, by applications of rules (Par) and (Out), we derive M ∥ Top(C) −c̄v→ M′ ∥ Top₁(C) = M′ ∥ Top(C).

• α = cv. This case is similar to the previous one.

• α = τ. There are five sub-cases.

– Let M ∥ A −τ→ M′ ∥ A′ be derived by an application of rule (SensRead).
Since the attacker A performs only malicious actions, from M ∥ A −τ→ M′ ∥ A′ we can derive that A = A′ and P −s?v→ P′, for some processes P and P′ such that M = E; S ⋈ P and M′ = E; S ⋈ P′. By considering rnd({true, false}) = false for any process Att(ι, 1, C(ι)), we have that Top(C) can only perform a tick-action; in particular, Top(C) cannot perform any forging action s!v. Hence, by an application of rules (Par) and (SensRead), we derive M ∥ Top(C) −τ→ M′ ∥ Top₁(C) = M′ ∥ Top(C).

– Let M ∥ A −τ→ M′ ∥ A′ be derived by an application of rule (ActWrite). This case is similar to the previous one.

– Let M ∥ A −τ→ M′ ∥ A′ be derived by an application of rule (SensSniff). Since M is sound, it follows that M = M′ and A −s?v→ A′. This entails 1 ∈ C′(s?) ⊆ C(s?). By assuming rnd({true, false}) = true for the process Att(s?, 1, C(s?)), it follows that Top(C) −s?v→ Top₁(C) = Top(C). Hence, by applying the rules (Par) and (SensSniff), we derive M ∥ Top(C) −τ→ M′ ∥ Top₁(C) = M′ ∥ Top(C).

– Let M ∥ A −τ→ M′ ∥ A′ be derived by an application of rule (ActIntegr). Since M is sound, it follows that M = M′ and A −a!v→ A′. As a consequence, 1 ∈ C′(a!) ⊆ C(a!). By assuming rnd({true, false}) = true and rnd(ℝ) = v for the process Att(a!, 1, C(a!)), it follows that Top(C) −a!v→ Top₁(C) = Top(C). Thus, by applying the rules (Par) and (ActIntegr), we derive M ∥ Top(C) −τ→ M′ ∥ Top₁(C) = M′ ∥ Top(C).

– Let M ∥ A −τ→ M′ ∥ A′ be derived by an application of rule (Tau). Let M = E; S ⋈ P and M′ = E′; S ⋈ P′. First, we consider the case when P ∥ A −τ→ P′ ∥ A′ is derived by an application of either rule (SensIntegr) or rule (ActDrop).
Since M is sound and A can perform only malicious actions, we have that (i) either P −s?v→ P′ and A −s!v→ A′, (ii) or P −a!v→ P′ and A −a?v→ A′. We focus on the first case, as the second one is similar. Since A −s!v→ A′, we derive 1 ∈ C′(s!) ⊆ C(s!), and Top(C) −s!v→ Top₁(C) = Top(C), by assuming rnd({true, false}) = true and rnd(ℝ) = v for the process Att(s!, 1, C(s!)). Thus, by applying the rules (SensIntegr) and (Tau), we derive M ∥ Top(C) −τ→ M′ ∥ Top₁(C) = M′ ∥ Top(C), as required.

To conclude the proof, we observe that if P ∥ A −τ→ P′ ∥ A′ is derived by an application of a rule different from (SensIntegr) and (ActDrop), then, by inspection of Table 1 and by the definition of attacker, A cannot perform a τ-action, since A does not use channel communication and performs only malicious actions. Thus, the only possibility is that the τ-action is performed by P in isolation. As a consequence, by applying the rules (Par) and (Tau), we derive M ∥ Top(C) −τ→ M′ ∥ Top₁(C) = M′ ∥ Top(C).

• α = tick. In this case the transition M ∥ A −tick→ M′ ∥ A′ is derived by an application of rule (Time), because M −tick→ M′ and A −tick→ A′. Hence, it suffices to prove that Top(C) −tick→ Top₂(C). We consider two cases: 1 ∈ C(ι) and 1 ∉ C(ι). If 1 ∈ C(ι), then the transition Att(ι, 1, C(ι)) −tick→ Att(ι, 2, C(ι)) can be derived by assuming rnd({true, false}) = false; moreover, with rnd({true, false}) = false, the process Att(ι, 1, C(ι)) can only perform a tick-action. If 1 ∉ C(ι), then the process Att(ι, 1, C(ι)) can only perform a tick-action.
As a consequence, Att(ι, 1, C(ι)) −tick→ Att(ι, 2, C(ι)) and Top(C) −tick→ Top₂(C). By an application of rule (Time), we derive M ∥ Top(C) −tick→ M′ ∥ Top₂(C).

• α = deadlock. This case is not admissible, because M ∥ A −deadlock→ M′ ∥ A′ would entail M −deadlock→ M′. However, M is sound and cannot deadlock.

• α = unsafe. Again, this case is not admissible, because M is sound.

Inductive case (k > 1). We have to prove that M ∥ A −t→ M′ ∥ A′ implies M ∥ Top(C) =t̂⇒ M′ ∥ Top_{#tick(t)+1}(C). Since the length of t is greater than 1, it follows that t = t′α, for some trace t′ and some action α. Thus, there exist M″ and A″ such that M ∥ A −t′→ M″ ∥ A″ −α→ M′ ∥ A′. By inductive hypothesis, it follows that M ∥ Top(C) =t̂′⇒ M″ ∥ Top_{#tick(t′)+1}(C). To conclude the proof, it is enough to show that M″ ∥ A″ −α→ M′ ∥ A′ implies M″ ∥ Top_{#tick(t′)+1}(C) =α̂⇒ M′ ∥ Top_{#tick(t)+1}(C). The reasoning is similar to that followed in the base case, except for the actions α = deadlock and α = unsafe, which need to be treated separately. We prove the case α = deadlock, as the case α = unsafe is similar. Let M = E; S ⋈ P. The transition M″ ∥ A″ −deadlock→ M′ ∥ A′ must be derived by an application of rule (Deadlock). This implies that M″ = M′, A″ = A′, and the state function of M is not in the invariant set inv. Thus, by an application of rule (Deadlock), we derive M″ ∥ Top_{#tick(t′)+1}(C) −deadlock→ M′ ∥ Top_{#tick(t′)+1}(C). Since #tick(t) + 1 = #tick(t′) + #tick(deadlock) + 1 = #tick(t′) + 1, it follows, as required, that M″ ∥ Top_{#tick(t′)+1}(C) −deadlock→ M′ ∥ Top_{#tick(t)+1}(C).

Everything is finally in place to prove Theorem 3.
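The #tick(·) bookkeeping used in the inductive case above can be sanity-checked with a short script (a sketch: the encoding of traces as Python lists of action names is ours, not part of the calculus):

```python
# Sanity check of the #tick(-) bookkeeping: for t = t'·α with α a
# non-timed action (e.g. deadlock), #tick(t) = #tick(t'), hence the
# indices of Top_{#tick(t')+1}(C) and Top_{#tick(t)+1}(C) coincide.

def count_ticks(trace):
    """#tick(t): number of tick-actions occurring in the trace t."""
    return sum(1 for action in trace if action == "tick")

t_prime = ["tau", "tick", "tau", "tick", "tau"]

# Appending a deadlock-action does not change the tick count ...
assert count_ticks(t_prime + ["deadlock"]) == count_ticks(t_prime)

# ... while appending a tick-action increases it by one.
assert count_ticks(t_prime + ["tick"]) == count_ticks(t_prime) + 1

print(count_ticks(t_prime))  # → 2
```
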
Proof of Theorem 3. We have to prove that either M ∥ A ⊑ M or M ∥ A ⊑_{m₂..n₂} M, for some m₂ and n₂ such that m₂..n₂ ⊆ m₁..n₁ (m₂ = 1 and n₂ = ∞ if the two systems are completely unrelated). The proof proceeds by contradiction. Suppose that M ∥ A ⋢ M and M ∥ A ⊑_{m₂..n₂} M, with m₂..n₂ ⊄ m₁..n₁. We distinguish two cases: either n₁ = ∞ or n₁ ∈ ℕ⁺. If n₁ = ∞, then it must be m₂ < m₁. Since M ∥ A ⊑_{m₂..n₂} M, by Definition 10 there is a trace t, with #tick(t) = m₂ − 1, such that M ∥ A −t→ and M ≠t̂⇒. By Lemma 6, this entails M ∥ Top(C) =t̂⇒. Since M ≠t̂⇒ and #tick(t) = m₂ − 1 < m₂ < m₁, this contradicts M ∥ Top(C) ⊑_{m₁..n₁} M. If n₁ ∈ ℕ⁺, then m₂ < m₁ and/or n₁ < n₂, and we reason as in the previous case.

A.4 Proofs of Section 5

In order to prove Proposition 7, we need a couple of lemmas. Lemma 7 is a variant of Lemma 4. Here the behaviour of Sys is parametric in the uncertainty.

Lemma 7. Let Sys be the system defined in Section 3, and 0.4 < γ ≤ 9/20. Let Sys[δ ← γ] = Sys₁ −t₁→ −tick→ Sys₂ ⋯ −t_{n−1}→ −tick→ Sys_n such that the traces t_j contain no tick-actions, for any j ∈ 1..n−1, and for any i ∈ 1..n, Sys_i = S_i ⋈ P_i with S_i = ⟨ξ_x^i, ξ_s^i, ξ_a^i⟩. Then, for any i ∈ 1..n−1, we have the following:

• if ξ_a^i(cool) = off, then ξ_x^i(temp) ∈ [0, 11.1 + γ], and ξ_x^i(stress) = 0 if ξ_x^i(temp) ∈ [0, 10.9 + γ] and, otherwise, ξ_x^i(stress) = 1;

• if ξ_a^i(cool) = off and ξ_x^i(temp) ∈ (10.1, 11.1 + γ], then ξ_a^{i+1}(cool) = on and ξ_x^{i+1}(stress) ∈ 1..2;

• if ξ_a^i(cool) = on, then ξ_x^i(temp) ∈ (9.9 − k·(1 + γ), 11.1 + γ − k·(1 − γ)], for some k ∈ 1..5, such that ξ_a^{i−k}(cool) = off and ξ_a^{i−j}(cool) = on, for j ∈ 0..k−1; moreover, if k ∈ 1..
3 then ξ_x^i(stress) ∈ 1..k+1, otherwise ξ_x^i(stress) = 0.

Proof. Similar to the proof of Lemma 4. The crucial difference with respect to the proof of Lemma 4 is limited to the second part of the third item, in particular to the part saying that ξ_x^i(stress) = 0 when k ∈ 4..5. Now, after 3 time units of cooling, the state variable stress lies in the integer interval 1..k+1 = 1..4. Thus, in order to have ξ_x^i(stress) = 0 when k ∈ 4..5, the temperature in the third time slot of the cooling must be less than or equal to 9.9. However, from the first statement of the third item we deduce that, in the third time slot of cooling, the state variable temp reaches at most 11.1 + γ − 3·(1 − γ) = 8.1 + 4γ. Hence we have that 8.1 + 4γ ≤ 9.9 for γ ≤ 9/20.

The following lemma is a variant of Proposition 1.

Lemma 8. Let Sys be the system defined in Section 3 and γ such that 0.4 < γ ≤ 9/20. If Sys[δ ← γ] −t→ Sys′, for some t = α₁…α_n, then α_i ∈ {τ, tick}, for any i ∈ 1..n.

Proof. By Lemma 7, the temperature always lies in the real interval [0, 11.1 + γ]. As a consequence, since γ ≤ 9/20, the system will never deadlock. Moreover, after 5 tick-actions of cooling, the state variable temp is in (9.9 − 5·(1 + γ), 11.1 + γ − 5·(1 − γ)] = (4.9 − 5γ, 6.1 + 6γ]. Since ε = 0.1, the value detected by the sensor will be in the real interval (4.8 − 5γ, 6.2 + 6γ]. Thus, the temperature sensed by the IDS will be at most 6.2 + 6γ ≤ 6.2 + 6·(9/20) ≤ 10, and no alarm will be fired. Finally, the maximum value that can be reached by the state variable stress is k + 1 for k = 3. As a consequence, the system will not reach an unsafe state.

The following lemma is a variant of Proposition 2. Here the behaviour of Sys is parametric in the uncertainty.

Lemma 9. Let Sys be the system defined in Section 3 and γ such that 0.
4 < γ ≤ 9/20. Then, for any execution trace of Sys[δ ← γ], we have the following:

• if either process Ctrl or process IDS senses a temperature above 10, then the value of the state variable temp ranges over (9.9, 11.1 + γ];

• when the process IDS tests the temperature, the value of the state variable temp ranges over (9.9 − 5·(1 + γ), 11.1 + γ − 5·(1 − γ)].

Proof. As to the first statement, since ε = 0.1, if either process Ctrl or process IDS senses a temperature above 10, then the value of the state variable temp is above 9.9. By Lemma 7, the state variable temp is less than or equal to 11.1 + γ. Therefore, if either process Ctrl or process IDS senses a temperature above 10, then the value of the state variable temp is in (9.9, 11.1 + γ]. Let us now prove the second statement. When the process IDS tests the temperature, the coolant has been active for 5 tick-actions. By Lemma 7, the state variable temp ranges over (9.9 − 5·(1 + γ), 11.1 + γ − 5·(1 − γ)].

Everything is finally in place to prove Proposition 7.

Proof of Proposition 7. For (1), we have to show that Sys[δ ← γ] ⊑ Sys, for γ ∈ (8/20, 9/20). But this obviously holds by Lemma 8. As regards item (2), we have to prove that Sys[δ ← γ] ⋢ Sys, for γ > 9/20. By Proposition 1, it is enough to show that the system Sys[δ ← γ] has a trace which either (i) sends an alarm, or (ii) deadlocks, or (iii) enters an unsafe state. We can easily build a trace for Sys[δ ← γ] in which, after 10 tick-actions, in the 11-th time slot, the value of the state variable temp is 10.1. In fact, it is enough to increase the temperature by 1.01 degrees in each of the first 10 rounds. Notice that this is an admissible value since 1.01 ∈ [1 − γ, 1 + γ], for any γ > 9/20. Being 10.
1 the value of the state variable temp, there is an execution trace in which the sensed temperature is 10 (recall that ε = 0.1), and hence the cooling system is not activated, but the state variable stress will be increased. In the following time slot, i.e., the 12-th time slot, the temperature may reach at most the value 10.1 + 1 + γ, and the state variable stress is 1. Now, if 10.1 + 1 + γ > 50, then the system deadlocks. Otherwise, the controller will activate the cooling system and, after 3 time units of cooling, in the 15-th time slot, the state variable stress will be 4 and the variable temp will be at most 11.1 + γ − 3·(1 − γ) = 8.1 + 4γ. Thus, there is an execution trace in which the temperature is 8.1 + 4γ, which will be greater than 9.9, being γ > 9/20. As a consequence, in the next time slot, the state variable stress will be 5 and the system will enter an unsafe state. This is enough to derive that Sys[δ ← γ] ⋢ Sys, for γ > 9/20.

Proof of Theorem 4. We consider the two parts of the statement separately.

Definitive impact. By an application of Lemma 6, we have that M ∥ A −t→ entails M ∥ Top(C) =t̂⇒. This implies M ∥ A ⊑ M ∥ Top(C). Thus, if M ∥ Top(C) ⊑ M[ξ_w ← ξ_w + ξ], for ξ ∈ ℝ^X̂, ξ > 0, then, by transitivity of ⊑, it follows that M ∥ A ⊑ M[ξ_w ← ξ_w + ξ].

Pointwise impact. The proof proceeds by contradiction. Suppose ξ′ > ξ. Since Top(C) has a pointwise impact ξ at time m, it follows that ξ is given by:

inf{ ξ″ : ξ″ ∈ ℝ^X̂ ∧ M ∥ Top(C) ⊑_{m..n} M[ξ_w ← ξ_w + ξ″], n ∈ ℕ⁺ ∪ {∞} }.

Similarly, since A has a pointwise impact ξ′ at time m′, it follows that ξ′ is given by

inf{ ξ″ : ξ″ ∈ ℝ^X̂ ∧ M ∥ A ⊑_{m′..n} M[ξ_w ← ξ_w + ξ″], n ∈ ℕ⁺ ∪ {∞} }.

Now, if m = m′, then ξ ≥ ξ′, because M ∥ A −t→ entails M ∥ Top(C) =t̂⇒, due to an application of Lemma 6.
This contradicts the fact that ξ < ξ′. Thus, it must be m′ < m. Now, since both ξ and ξ′ are given as infima and since ξ′ > ξ, there are ξ̄ and ξ̄′, with ξ ≤ ξ̄ ≤ ξ̄′ ≤ ξ′, such that: (i) M ∥ Top(C) ⊑_{m..n} M[ξ_w ← ξ_w + ξ̄], for some n; (ii) M ∥ A ⊑_{m′..n′} M[ξ_w ← ξ_w + ξ̄′], for some n′. From M ∥ A ⊑_{m′..n′} M[ξ_w ← ξ_w + ξ̄′] it follows that there exists a trace t, with #tick(t) = m′ − 1, such that M ∥ A −t→ and M[ξ_w ← ξ_w + ξ̄′] ≠t̂⇒. Since ξ̄ ≤ ξ̄′, by monotonicity (Proposition 6), we deduce that M[ξ_w ← ξ_w + ξ̄] ≠t̂⇒. Moreover, by Lemma 6, M ∥ A −t→ entails M ∥ Top(C) =t̂⇒. Summarising, there exists a trace t′ with #tick(t′) = m′ − 1 such that M ∥ Top(C) −t′→ and M[ξ_w ← ξ_w + ξ̄] ≠t̂′⇒. However, this, together with m′ < m, contradicts fact (i) above, saying that M ∥ Top(C) ⊑_{m..n} M[ξ_w ← ξ_w + ξ̄], for some n. As a consequence, it must be ξ′ ≤ ξ and m′ ≤ m. This concludes the proof.

Proof of Proposition 8. Let us prove the first sub-result. From Proposition 5 we know that Sys ∥ A₁₀ ⊑_{14..21} Sys. In particular, we showed that the system Sys ∥ A₁₀ has an execution trace which is in an unsafe state from the 14-th to the 21-th time interval and fires only one alarm, in the 16-th time slot, and which cannot be matched by Sys. Hence, if in the 14-th time slot the system is in an unsafe state, then the temperature in the 9-th time slot must be greater than 9.9. Moreover, to fire an alarm in the 16-th time slot, the cooling must be activated in the 11-th time slot, and hence the temperature in the 10-th time slot must be less than or equal to 10.1 (recall that ε = 0.1). But this is impossible, since in the 9-th time slot temp is greater than 9.9 and the minimum increase of the temperature is 1 − γ = 0.2. As a consequence, for γ ≤ 0.
8 we have Sys ∥ A₁₀ ⋢ Sys{γ/δ}. Let us prove the second sub-result, that is, Sys ∥ A₁₀ ⊑ Sys{γ/δ} for γ > 0.8. Formally, we have to demonstrate that whenever Sys ∥ A₁₀ −t→, for some trace t, then Sys{γ/δ} =t̂⇒ as well. Let us do a case analysis on the structure of the trace t. We notice that, since A₁₀ is a temporary attack, the trace t does not contain deadlock-actions. We distinguish four possible cases.

• The trace t contains only τ-, tick-, alarm- and unsafe-actions. Firstly, notice that the system Sys ∥ A₁₀ may produce only one output on channel alarm, in the 16-th time slot. After that, the trace will have only τ- and tick-actions. In fact, in Proposition 5 we provided a trace in which, in the 16-th time slot, the state variable temp reaches the value 10.5 and an output on channel alarm is emitted. This is the maximum possible value for variable temp at that point in time. After the transmission of the alarm, the system Sys ∥ A₁₀ activates the cooling for the following 5 time slots. Thus, in the 21-th time slot, the temperature will be at most 10.5 − 5·(1 − δ) = 10.5 − 5·(0.6) = 7.5, and no alarm is fired. From that time on, since the attack A₁₀ terminated its life in the 10-th time slot, no other alarms will be fired. Moreover, in Proposition 5 we showed that Sys ∥ A₁₀ is in an unsafe state from the 14-th to the 21-th time interval. Summarising, Sys ∥ A₁₀ is in an unsafe state from the 14-th to the 21-th time interval and fires only one alarm, in the 16-th time slot. By monotonicity (Proposition 6), it is enough to show that such a trace exists for Sys[δ ← γ], with 0.8 < γ ≤ 0.81. In fact, if this trace exists for 0.8 < γ ≤ 0.81, then it also exists for γ > 0.81. In the following, we show how to build the trace of Sys[δ ← γ] which simulates the trace t of Sys ∥ A₁₀.
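The numeric bound used in this case, namely the temperature after the 5 time units of cooling that follow the alarm, can be checked mechanically (a sketch; δ = 0.4 and ε = 0.1 are the values of the running example):

```python
# Sanity check for the first case of Proposition 8: after the alarm in
# the 16-th time slot, 5 time units of cooling with minimum decrement
# 1 - delta bring the temperature from (at most) 10.5 down to at most
# 7.5, so the sensed temperature (within +/- epsilon) stays below 10
# and no further alarm can be fired.

delta = 0.4    # uncertainty of the running example
epsilon = 0.1  # sensor error

temp_16 = 10.5                       # max temp when the alarm is emitted
temp_21 = temp_16 - 5 * (1 - delta)  # max temp after 5 cooling steps

assert abs(temp_21 - 7.5) < 1e-9     # 10.5 - 5*0.6 = 7.5
assert temp_21 + epsilon < 10        # sensed temperature below 10
```
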
We can easily build a trace for Sys{γ/δ} in which, after 8 tick-actions, in the 9-th time slot, the value of the state variable temp is 9.1 + γ. In fact, it is enough to increase the temperature by (9.1 + γ)/8 degrees in each of the first 8 rounds. Notice that this is an admissible value since (9.1 + γ)/8 is in [1 − γ, 1 + γ], for any 0.8 < γ ≤ 0.81. Moreover, since γ > 0.8, in the 9-th time slot we have that 9.1 + γ > 9.9. Now, in the 10-th time slot, temp may reach 9.1 + γ + (1 − γ) = 10.1. Being 10.1 the value of the state variable temp, there is an execution trace in which the sensed temperature is 10 (recall that ε = 0.1), and hence the cooling system is not activated. However, in the following time slot, i.e., the 11-th time slot, the temperature may reach the value 11.8, imposing the activation of the cooling system (notice that 1.7 is an admissible increase). Summarising, in the 9-th time slot the temperature is greater than 9.9, and in the 11-th time slot the cooling system is activated with a temperature equal to 11.8. The thesis follows from the following two facts:

– Since in the 9-th time slot the temperature is greater than 9.9, in the 14-th time slot the system enters an unsafe state. Since in the 11-th time slot the cooling system is activated with a temperature equal to 11.8, then, in the 20-th time slot, after 9 time units of cooling, the temperature may reach the value 11.8 − 9·(1 − γ) = 2.8 + 9γ, which will be greater than 9.9, being γ > 0.8. Hence, in the 21-th time slot, the system will still be in an unsafe state. Finally, in the 21-th time slot, after 10 time units of cooling, the temperature may reach the value 11.8 − 10·(1 − γ) = 1.8 + 10γ, which will be less than or equal to 9.9, being γ ≤ 0.81. Hence, in the 22-nd time slot, the variable stress is reset to 0 and the system enters a safe state.
From that time on, since Sys{γ/δ} can mimic all traces of Sys, we can always choose a trace which does not enter an unsafe state any more. Summarising, Sys ∥ A₁₀ is in an unsafe state from the 14-th to the 21-th time interval.

– Since in the 11-th time slot the cooling system is activated with a temperature equal to 11.8, then, in the 16-th time slot, the temperature may reach the value 11.8 − 5·(1 − γ) = 6.8 + 5γ. Since ε = 0.1, the sensed temperature would be in the real interval [6.7 + 5γ, 6.9 + 5γ]. Thus, the sensed temperature is greater than 10, being γ > 0.8, and the alarm will be transmitted in the 16-th time slot, as required. After the transmission on channel alarm, the system Sys{γ/δ} activates the cooling for the following 5 time slots. As a consequence, in the 21-th time slot, the temperature will be at most 6.8 + 5γ − 5·(1 − γ) = 1.8 + 10γ. Since we assumed 0.8 < γ ≤ 0.81, the temperature will be well below 10 and no alarm will be sent. From that time on, since Sys{γ/δ} can mimic all traces of Sys, we can always choose a trace which does not fire the alarm any more. Summarising, Sys ∥ A₁₀ fires only one alarm, in the 16-th time slot.

• The trace t contains only τ-, tick- and unsafe-actions. This case is similar to the previous one.

• The trace t contains only τ-, tick- and alarm-actions. This case cannot occur. In fact, an alarm-action cannot occur without an unsafe-action.

• The trace t contains only τ- and tick-actions. If the system Sys ∥ A₁₀ has a trace t which contains only τ- and tick-actions then, by Proposition 1, the system Sys in isolation must have a similar trace with the same number of tick-actions. By an application of Proposition 6, as δ < γ, any trace of Sys can be simulated by Sys{γ/δ}. As a consequence, Sys{γ/δ} =t̂⇒.
This is enough to derive that Sys ∥ A₁₀ ⊑ Sys{γ/δ}.

Proof of Proposition 9. Let us prove the first sub-result. As demonstrated in Example 2, we know that Sys ∥ A ⊑_{14..∞} Sys, because in the 14-th time slot the compound system will violate the safety conditions, emitting an unsafe-action until the invariant is violated. No alarm will be emitted. Since the system keeps violating the safety condition, the temperature must remain greater than 9.9. As in the proof of Lemma 7, we can prove that the temperature is less than or equal to 11.1 + γ. Hence, in the time slot before getting into deadlock, the temperature of the system is in the real interval (9.9, 11.1 + γ]. To deadlock within one tick-action, starting from a temperature in the real interval (9.9, 11.1 + γ], either the temperature reaches a value greater than 50 (namely, 11.1 + γ + 1 + γ > 50) or the temperature reaches a value less than 0 (namely, 9.9 − 1 − γ < 0). Since γ ≤ 8.9, neither case can occur. Thus, we have that Sys ∥ A ⋢ Sys[δ ← γ].

Let us prove the second sub-result, that is, Sys ∥ A ⊑ Sys[δ ← γ] for γ > 8.9. We demonstrate that whenever Sys ∥ A −t→, for some trace t, then Sys[δ ← γ] =t̂⇒ as well. We proceed by case analysis on the kind of actions contained in t. We distinguish four possible cases.

• The trace t contains only τ-, tick-, unsafe- and deadlock-actions. As discussed in Example 2, Sys ∥ A ⊑_{14..∞} Sys, because in the 14-th time slot the system will violate the safety conditions, emitting an unsafe-action until the invariant is broken. No alarm will be emitted. Note that, when Sys ∥ A enters an unsafe state, the temperature is at most 9.9 + (1 + δ) + 5·(1 + δ) = 9.9 + 6·(1.4) = 18.3. Moreover, the fastest execution trace reaching an unsafe state deadlocks just after ⌈(50 − 18.3)/(1 + δ)⌉ = ⌈31.7/1.
4⌉ = 23 tick-actions. Hence, there are m, n ∈ ℕ, with m ≥ 14 and n ≥ m + 23, such that the trace t of Sys ∥ A satisfies the following conditions: (i) in the time interval 1..m−1, the trace t is composed of τ- and tick-actions; (ii) in the time interval m..(n−1), the trace t is composed of τ-, tick- and unsafe-actions; (iii) in the n-th time slot, the trace t deadlocks. By monotonicity (Proposition 6), it is enough to show that such a trace exists for Sys[δ ← γ], with 8.9 < γ < 9. In fact, if this trace exists for 8.9 < γ < 9, then it also exists for γ ≥ 9. In the following, we show how to build the trace of Sys[δ ← γ] which simulates the trace t of Sys ∥ A. We build the trace in three steps: (i) the sub-trace from time slot 1 to time slot m − 6; (ii) the sub-trace from time slot m − 5 to time slot n − 1; (iii) the final part of the trace, reaching the deadlock.

(i) As γ > 8.9 (and hence 1 + γ > 9.9), the system may increment the temperature by 9.9 degrees after a single tick-action. Hence, we choose the trace in which the system Sys[δ ← γ], in the second time slot, reaches the temperature 9.9. Moreover, the system may maintain this temperature value until the (m − 6)-th time slot (indeed, 0 is an admissible increase, since 0 ∈ [1 − γ, 1 + γ] ⊇ [−7.9, 9.9]). Obviously, with a temperature equal to 9.9, only τ- and tick-actions are possible.

(ii) Let k ∈ ℝ be such that 0 < k < γ − 8.9 (such a k exists since γ > 8.9). We may consider an increment of the temperature by k. This implies that, in the (m − 5)-th time slot, the system Sys[δ ← γ] may reach the temperature 9.9 + k. Note that k is an admissible increment, since 0 < k < γ − 8.9 and 8.9 < γ < 9 entail k ∈ (0, 0.1). Moreover, the system may maintain this temperature value until the (n − 1)-th time slot (indeed, as said before, 0 is an admissible increment).
Summarising, from the (m − 5)-th time slot to the (n − 1)-th time slot, the temperature may remain equal to 9.9 + k ∈ (9.9, 10). As a consequence, from the m-th time slot to the (n − 1)-th time slot, the system Sys[δ ← γ] may be in an unsafe state. Thus, an unsafe-action may be performed in the time interval m..(n − 1). Moreover, since ε = 0.1 and the temperature is 9.9 + k ∈ (9.9, 10), we can always assume that the cooling is not activated until the (n − 1)-th time slot. This implies that neither alarm nor deadlock occur.

(iii) At this point, since in the (n − 1)-th time slot the temperature is equal to 9.9 + k ∈ (9.9, 10) (recall that k ∈ (0, 0.1)), the cooling may be activated. We may consider a decrement of 1 + γ. In this manner, in the n-th time slot, the system may reach a temperature of 9.9 + k − (1 + γ) < 9.9 + (γ − 8.9) − (1 + γ) = 0 degrees, and the system Sys[δ ← γ] will deadlock. Summarising, for any γ > 8.9, the system Sys[δ ← γ] can mimic any trace t of Sys ∥ A.

• The trace t contains only τ-, tick- and unsafe-actions. This case is similar to the previous one.

• The trace t contains only τ-, tick- and alarm-actions. This case cannot occur. In fact, as discussed in Example 2, the process Ctrl never activates the Cooling component (and hence also the IDS component, which is the only one that could send an alarm), since it will always detect a temperature below 10.

• The trace t contains only τ- and tick-actions. If the system Sys ∥ A has a trace t that contains only τ- and tick-actions then, by Proposition 1, the system Sys in isolation must have a similar trace with the same number of tick-actions. By an application of Proposition 6, as δ < γ, any trace of Sys can be simulated by Sys[δ ← γ]. As a consequence, Sys[δ ← γ] =t̂⇒. This is enough to obtain what is required: Sys ∥ A ⊑ Sys[δ ← γ].
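The arithmetic underpinning Proposition 9 can be double-checked with a short script (a sketch; δ = 0.4 as in the running example, and the bounds are those computed in the proof above):

```python
import math

# Sanity checks for Proposition 9 (delta = 0.4 in the running example).
delta = 0.4

# Max temperature when Sys || A enters the unsafe state:
# 9.9 + (1 + delta) + 5*(1 + delta) = 9.9 + 6*1.4 = 18.3.
max_unsafe_temp = 9.9 + 6 * (1 + delta)
assert abs(max_unsafe_temp - 18.3) < 1e-9

# Fastest deadlock after reaching the unsafe state:
# ceil((50 - 18.3) / (1 + delta)) = ceil(31.7 / 1.4) = 23 tick-actions.
assert math.ceil((50 - max_unsafe_temp) / (1 + delta)) == 23

# First sub-result: for gamma <= 8.9, no deadlock is possible in one
# tick from a temperature in (9.9, 11.1 + gamma].
gamma = 8.9
assert 11.1 + gamma + 1 + gamma <= 50   # cannot exceed 50 ...
assert 9.9 - 1 - gamma >= -1e-9         # ... and cannot drop below 0
```
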
