Towards a formal notion of impact metric for cyber-physical attacks (full version)⋆

Ruggero Lanotte¹, Massimo Merro², and Simone Tini¹

¹ Dipartimento di Scienza e Alta Tecnologia, Università dell'Insubria, Como, Italy
{ruggero.lanotte,simone.tini}@uninsubria.it
² Dipartimento di Informatica, Università degli Studi di Verona, Verona, Italy
massimo.merro@univr.it

Abstract. Industrial facilities and critical infrastructures are transforming into "smart" environments that dynamically adapt to external events. The result is an ecosystem of heterogeneous physical and cyber components integrated in cyber-physical systems which are more and more exposed to cyber-physical attacks, i.e., security breaches in cyberspace that adversely affect the physical processes at the core of the systems. We provide a formal compositional metric to estimate the impact of cyber-physical attacks targeting sensor devices of IoT systems formalised in a simple extension of Hennessy and Regan's Timed Process Language. Our impact metric relies on a discrete-time generalisation of Desharnais et al.'s weak bisimulation metric for concurrent systems. We show the adequacy of our definition on two different attacks on a simple surveillance system.

1 Introduction

The Internet of Things (IoT) is heavily affecting our daily lives in many domains, ranging from tiny wearable devices to large industrial systems with thousands of heterogeneous cyber and physical components that interact with each other. Cyber-Physical Systems (CPSs) are integrations of networking and distributed computing systems with physical processes, where feedback loops allow the latter to affect the computations of the former and vice versa. Historically, CPSs relied on proprietary technologies and were implemented as stand-alone networks in physically protected locations.
However, the growing connectivity and integration of these systems has triggered a dramatic increase in the number of cyber-physical attacks [26], i.e., security breaches in cyberspace that adversely affect the physical processes, e.g., by manipulating sensor readings and, in general, by influencing physical processes to bring the system into a state desired by the attacker. Cyber-physical attacks are complex and challenging, as they usually cross the boundary between cyberspace and the physical world, possibly more than once [14]. Some notorious examples are: (i) the Stuxnet worm, which reprogrammed PLCs of nuclear centrifuges in Iran [9], (ii) the attack on a sewage treatment facility in Queensland, Australia, which manipulated the SCADA system to release raw sewage into local rivers [34], and (iii) the recent BlackEnergy cyber-attack on the Ukrainian power grid, again compromising the SCADA system [18]. The point these systems have in common is that they are all safety critical, and failures may cause catastrophic consequences. Thus, the concern for consequences at the physical level sets CPS security apart from standard IT security.

Timing is particularly relevant in CPS security because the physical state of a system changes continuously over time and, as the system evolves in time, some states might be more vulnerable to attacks than others [20]. For example, an attack launched when the target state variable reaches a local maximum (or minimum) may have a great impact on the whole system behaviour [21].

⋆ An extended abstract will appear in the Proc. of the 14th International Conference on integrated Formal Methods (iFM 2018), 5th-7th September 2018, Maynooth University, Ireland, and published in a volume of Lecture Notes in Computer Science.
Also the duration of the attack is an important parameter to be taken into consideration in order to achieve a successful attack. For example, it may take minutes for a chemical reactor to rupture [37], hours to heat a tank of water or burn out a motor, and days to destroy centrifuges [9].

Actually, the estimation of the impact of cyber-physical attacks on the target system is crucial when protecting CPSs [13]. For instance, in industrial CPSs, before taking any countermeasure against an attack, engineers first try to estimate the impact of the attack on the system functioning (e.g., performance and security) and weigh it against the cost of stopping the plant. If this cost is higher than the damage caused by the attack (as is sometimes the case), then engineers might actually decide to let the system continue its activities even under attack. Thus, once an attack is detected, impact metrics are necessary to quantify the perturbation introduced in the physical behaviour of the system under attack.

The goal of this paper is to lay theoretical foundations providing formal instruments to precisely define the notion of impact of a cyber-physical attack targeting physical devices, such as sensor devices of IoT systems. For that we rely on a timed generalisation of bisimulation metrics [8,7,39] to compare the behaviour of two systems up to a given tolerance, for time-bounded executions. The weak bisimulation metric [8] allows us to compare two systems M and N, writing M ≃_p N, if weak bisimilarity holds with a distance or tolerance p ∈ [0,1], i.e., if M and N exhibit a different behaviour with probability p, and the same behaviour with probability 1 − p. A useful generalisation is the n-bisimulation metric [38], which takes into account bounded computations.
Intuitively, the distance p is ensured only for the first n computational steps, for some n ∈ ℕ. However, in timed systems it is desirable to focus on the passage of time rather than on the number of computational steps. This allows us to deal with situations where it is not necessary (or it simply does not make sense) to compare two systems "ad infinitum", but only for a limited amount of time.

Contribution. In this paper, we first introduce a general notion of timed bisimulation metric for concurrent probabilistic systems equipped with a discrete notion of time. Intuitively, this kind of metric allows us to derive a timed weak bisimulation with tolerance, denoted ≈^k_p, for k ∈ ℕ⁺ ∪ {∞} and p ∈ [0,1], expressing that the tolerance p between two timed systems is ensured only for the first k time instants (tick-actions). Then, we use our timed bisimulation metric to set up a formal compositional theory to study and measure the impact of cyber-physical attacks on IoT systems specified in a simple probabilistic timed process calculus which extends Hennessy and Regan's Timed Process Language (TPL) [16]. IoT systems in our calculus are modelled by specifying: (i) a physical environment, containing information on the physical state variables and the sensor measurements, and (ii) a logics that governs both accesses to sensors and channel-based communications with other cyber components. We focus on attacks on sensors that may eavesdrop on, and possibly modify, the sensor measurements provided to the controllers of sensors, affecting both the integrity and the availability of the system under attack.
In order to make security assessments of our IoT systems, we adapt a well-known approach called Generalized Non Deducibility on Composition (GNDC) [10] to compare the behaviour of an IoT system M with the behaviour of the same system under attack, written M ∥ A, for some arbitrary cyber-physical attack A. This comparison makes use of our timed bisimulation metric to evaluate not only the tolerance and the vulnerability of a system M with respect to a certain attack A, but also the impact of a successful attack in terms of the deviation introduced in the behaviour of the target system. In particular, we say that a system M tolerates an attack A if M ∥ A ≈^∞_0 M, i.e., the presence of A does not affect the behaviour of M; whereas M is said to be vulnerable to A in the time interval m..n with impact p if m..n is the smallest interval such that M ∥ A ≈^{m−1}_0 M and M ∥ A ≈^k_p M, for any k ≥ n, i.e., if the perturbation introduced by the attack A becomes observable in the m-th time slot and yields the maximum impact p in the n-th time slot. In the concluding discussion we will show that the temporal vulnerability window m..n provides several pieces of information about the corresponding attack, such as stealthiness capability, duration of the physical effects of the attack, and consequent room for possible run-time countermeasures.

As a case study, we use our timed bisimulation metric to measure the impact of two different attacks injecting false positives and false negatives, respectively, into a simple surveillance system expressed in our process calculus.

Outline. Section 2 formalises our timed bisimulation metrics in a general setting. Section 3 provides a simple calculus of IoT systems. Section 4 defines cyber-physical attacks together with the notions of tolerance and vulnerability w.r.t. an attack.
In Section 5 we use our metrics to evaluate the impact of two attacks on a simple surveillance system. Section 6 draws conclusions and discusses related and future work. In this extended abstract proofs are omitted; full details of the proofs can be found in the Appendix.

2 Timed Bisimulation Metrics

In this section, we introduce timed bisimulation metrics as a general instrument to derive a notion of timed and approximate weak bisimulation between probabilistic systems equipped with a discrete notion of time. In Section 2.1, we recall the semantic model of nondeterministic probabilistic labelled transition systems; in Section 2.2, we present our metric semantics.

2.1 Nondeterministic Probabilistic Labelled Transition Systems

Nondeterministic probabilistic labelled transition systems (pLTSs) [33] combine classic LTSs [19] and discrete-time Markov chains [15,35] to model, at the same time, reactive behaviour, nondeterminism and probability. We first provide the mathematical machinery required to define a pLTS.

The state space in a pLTS is given by a set T, whose elements are called processes, or terms. We use t, t′, … to range over T. A (discrete) probability sub-distribution over T is a mapping ∆ : T → [0,1] with Σ_{t∈T} ∆(t) ∈ (0,1]. We denote Σ_{t∈T} ∆(t) by |∆|, and we say that ∆ is a probability distribution if |∆| = 1. The support of ∆ is given by ⌈∆⌉ = {t ∈ T : ∆(t) > 0}. The set of all sub-distributions (resp. distributions) over T with finite support is denoted by D_sub(T) (resp. D(T)). We use ∆, Θ, Φ to range over D_sub(T) and D(T).

Definition 1 (pLTS [33]). A pLTS is a triple (T, A, →), where: (i) T is a countable set of terms, (ii) A is a countable set of actions, and (iii) → ⊆ T × A × D(T) is a transition relation.
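For concreteness, the triple of Definition 1 admits a direct finite representation. The following Python sketch (names are illustrative and not from the paper; terms and actions are plain strings) stores the transition relation as a map from (term, action) pairs to lists of distributions, each distribution being a dictionary from terms to probabilities:

```python
from collections import defaultdict

class PLTS:
    """A finite fragment of a pLTS (T, A, ->), Definition 1 (sketch)."""

    def __init__(self):
        # (term, action) -> list of distributions; each distribution
        # is a dict mapping a term to its probability.
        self.trans = defaultdict(list)

    def add(self, term, action, dist):
        # Transitions target full distributions, so mass must be 1.
        assert abs(sum(dist.values()) - 1.0) < 1e-9, "not a distribution"
        self.trans[(term, action)].append(dist)

    def der(self, term, action):
        """der(t, alpha): the derivative distributions of t under alpha."""
        return self.trans[(term, action)]

# A term that flips a fair coin internally: t --tau--> {heads:0.5, tails:0.5}
p = PLTS()
p.add("t", "tau", {"heads": 0.5, "tails": 0.5})
assert len(p.der("t", "tau")) == 1   # image-finite here: finitely many derivatives
assert p.der("t", "tick") == []      # no timed transition from t
```

With lists of derivatives per (term, action) pair, image-finiteness of Section 2.1 amounts to every such list being finite.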
In Definition 1, we assume the presence of a special deadlocked term Dead ∈ T. Furthermore, we assume that the set of actions A contains at least two actions: τ and tick. The former models internal computations that cannot be externally observed, while the latter denotes the passage of one time unit in a setting with a discrete notion of time [16]. In particular, tick is the only timed action in A.

We write t −α→ ∆ for (t, α, ∆) ∈ →, t −α→ if there is a distribution ∆ ∈ D(T) with t −α→ ∆, and t −α↛ otherwise. Let der(t, α) = {∆ ∈ D(T) | t −α→ ∆} denote the set of derivatives (i.e. distributions) reachable from term t through action α. We say that a pLTS is image-finite [17] if der(t, α) is finite for all t ∈ T and α ∈ A. In this paper, we will always work with image-finite pLTSs.

Weak transitions. As we are interested in developing a weak bisimulation metric, we need a definition of weak transition which abstracts away from τ-actions. In a probabilistic setting, the definition of weak transition is somewhat complicated by the fact that (strong) transitions take terms to distributions; consequently, if we are to use weak transitions then we need to generalise transitions so that they take (sub-)distributions to (sub-)distributions. To this end, we need some extra notation on distributions. For a term t ∈ T, the point (Dirac) distribution at t, denoted t̄, is defined by t̄(t) = 1 and t̄(t′) = 0 for all t′ ≠ t. Then, the convex combination Σ_{i∈I} p_i · ∆_i of a family {∆_i}_{i∈I} of (sub-)distributions, with I a finite set of indexes, p_i ∈ (0,1] and Σ_{i∈I} p_i ≤ 1, is the (sub-)distribution defined by (Σ_{i∈I} p_i · ∆_i)(t) = Σ_{i∈I} p_i · ∆_i(t) for all t ∈ T. We write Σ_{i∈I} p_i · ∆_i as p_1 · ∆_1 + … + p_n · ∆_n when I = {1, …, n}.
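The point distributions and convex combinations just defined admit a direct encoding when (sub-)distributions are finite dictionaries, as in the pLTS sketch above. A minimal sketch (helper names are ours, not the paper's):

```python
def dirac(t):
    """Point (Dirac) distribution at t: t with probability 1."""
    return {t: 1.0}

def mass(delta):
    """|Delta|: total probability mass of a (sub-)distribution."""
    return sum(delta.values())

def convex(pairs):
    """Convex combination sum_i p_i * Delta_i.
    pairs: list of (p_i, Delta_i) with sum of the p_i at most 1."""
    assert sum(p for p, _ in pairs) <= 1.0 + 1e-9
    out = {}
    for p, delta in pairs:
        for t, q in delta.items():
            out[t] = out.get(t, 0.0) + p * q
    return out

# 0.3 * dirac(a) + 0.7 * {a:0.5, b:0.5} gives a:0.65, b:0.35.
d = convex([(0.3, dirac("a")), (0.7, {"a": 0.5, "b": 0.5})])
assert abs(d["a"] - 0.65) < 1e-9 and abs(d["b"] - 0.35) < 1e-9
assert abs(mass(d) - 1.0) < 1e-9   # a full distribution in this example
```

Note that `convex` also covers sub-distributions: when the weights sum to less than 1, the resulting mass is below 1, matching the definition of |∆|.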
Along the lines of [6], we write t −τ̂→ ∆, for some term t and some distribution ∆, if either t −τ→ ∆ or ∆ = t̄. Then, for α ≠ τ, we write t −α̂→ ∆ if t −α→ ∆. The relation −α̂→ is extended to model transitions from sub-distributions to sub-distributions. For a sub-distribution ∆ = Σ_{i∈I} p_i · t̄_i, we write ∆ −α̂→ Θ if there is a non-empty set of indexes J ⊆ I such that: (i) t_j −α̂→ Θ_j for all j ∈ J, (ii) t_i −α̂↛ for all i ∈ I \ J, and (iii) Θ = Σ_{j∈J} p_j · Θ_j. Note that if α ≠ τ then this definition admits that only some terms in the support of ∆ make the −α̂→ transition. Then, we define the weak transition relation =τ̂⇒ as the transitive and reflexive closure of −τ̂→, i.e., =τ̂⇒ = (−τ̂→)*, while for α ≠ τ we let =α̂⇒ denote =τ̂⇒ −α̂→ =τ̂⇒.

2.2 Timed Weak Bisimulation with Tolerance

In this section, we define a family of relations ≈^k_p over T, with p ∈ [0,1] and k ∈ ℕ⁺ ∪ {∞}, where, intuitively, t ≈^k_p t′ means that t and t′ can weakly bisimulate each other with a tolerance p accumulated in k timed steps. This is done by introducing a family of pseudometrics m^k : T × T → [0,1] and defining t ≈^k_p t′ iff m^k(t, t′) = p. The pseudometrics m^k have the following properties for any t, t′ ∈ T: (i) m^{k₁}(t, t′) ≤ m^{k₂}(t, t′) whenever k₁ < k₂ (tolerance monotonicity); (ii) m^∞(t, t′) = p iff p is the distance between t and t′ as given by the weak bisimilarity metric of [8] in an untimed setting; (iii) m^∞(t, t′) = 0 iff t and t′ are related by the standard weak probabilistic bisimilarity [30].

Let us recall the standard definition of pseudometric.

Definition 2 (Pseudometric).
A function d : T × T → [0,1] is a 1-bounded pseudometric over T if
– d(t, t) = 0 for all t ∈ T,
– d(t, t′) = d(t′, t) for all t, t′ ∈ T (symmetry),
– d(t, t′) ≤ d(t, t″) + d(t″, t′) for all t, t′, t″ ∈ T (triangle inequality).

In order to define the family of functions m^k, we define an auxiliary family of functions m^{k,h} : T × T → [0,1], with k, h ∈ ℕ, quantifying the tolerance of the weak bisimulation after a sequence of computation steps such that: (i) the sequence contains exactly k tick-actions, (ii) the sequence terminates with a tick-action, (iii) any term performs exactly h untimed actions before the first tick-action, (iv) between any i-th and (i+1)-th tick-action, with 1 ≤ i < k, there is an arbitrary number of untimed actions.

The definition of m^{k,h} relies on a timed and quantitative version of the classic bisimulation game: the tolerance between t and t′ as given by m^{k,h}(t, t′) can be below a threshold ε ∈ [0,1] only if each transition t −α→ ∆ is mimicked by a weak transition t′ =α̂⇒ Θ such that the bisimulation tolerance between ∆ and Θ is, in turn, below ε. This requires lifting pseudometrics over T to pseudometrics over (sub-)distributions in D_sub(T). To this end, we adopt the notions of matching [43] (also called coupling) and Kantorovich lifting [5].

Definition 3 (Matching). A matching for a pair of distributions (∆, Θ) ∈ D(T) × D(T) is a distribution ω in the state product space D(T × T) such that:
– Σ_{t′∈T} ω(t, t′) = ∆(t), for all t ∈ T, and
– Σ_{t∈T} ω(t, t′) = Θ(t′), for all t′ ∈ T.
We write Ω(∆, Θ) to denote the set of all matchings for (∆, Θ).

A matching for (∆, Θ) may be understood as a transportation schedule for the shipment of probability mass from ∆ to Θ [43].

Definition 4 (Kantorovich lifting). Assume a pseudometric d : T × T → [0,1].
The Kantorovich lifting of d is the function K(d) : D(T) × D(T) → [0,1] defined for distributions ∆ and Θ as:

K(d)(∆, Θ) = min_{ω∈Ω(∆,Θ)} Σ_{s,t∈T} ω(s, t) · d(s, t).

Note that since we are considering only distributions with finite support, the minimum over the set of matchings Ω(∆, Θ) used in Definition 4 is well defined.

Pseudometrics m^{k,h} are inductively defined on k and h by means of suitable functionals over the complete lattice ([0,1]^{T×T}, ⊑) of functions of type T × T → [0,1], ordered by d₁ ⊑ d₂ iff d₁(t, t′) ≤ d₂(t, t′) for all t, t′ ∈ T. Notice that in this lattice, for each set D ⊆ [0,1]^{T×T}, the supremum and infimum are defined as sup(D)(t, t′) = sup_{d∈D} d(t, t′) and inf(D)(t, t′) = inf_{d∈D} d(t, t′), for all t, t′ ∈ T. The infimum of the lattice is the constant function zero, denoted 0, and the supremum is the constant function one, denoted 1.

Definition 5 (Functionals for m^{k,h}). The functionals B, B_tick : [0,1]^{T×T} → [0,1]^{T×T} are defined for any function d ∈ [0,1]^{T×T} and terms t, t′ ∈ T as:

B(d)(t, t′) = max{ d(t, t′),
  sup_{α∈A\{tick}} max_{t −α→ ∆} inf_{t′ =α̂⇒ Θ} K(d)(∆, Θ + (1 − |Θ|) · Dead̄),
  sup_{α∈A\{tick}} max_{t′ −α→ Θ} inf_{t =α̂⇒ ∆} K(d)(∆ + (1 − |∆|) · Dead̄, Θ) }

B_tick(d)(t, t′) = max{ d(t, t′),
  max_{t −tick→ ∆} inf_{t′ =t̂ick⇒ Θ} K(d)(∆, Θ + (1 − |Θ|) · Dead̄),
  max_{t′ −tick→ Θ} inf_{t =t̂ick⇒ ∆} K(d)(∆ + (1 − |∆|) · Dead̄, Θ) }

where inf ∅ = 1, max ∅ = 0, and Dead̄ is the point distribution at the deadlocked term Dead.

Notice that all maxima in Definition 5 are well defined since the pLTS is image-finite. Notice also that any strong transition from t to a distribution ∆ is mimicked by a weak transition from t′ which, in general, leads to a sub-distribution Θ. Thus, process t′ may fail to simulate t with probability 1 − |Θ|.

Definition 6 (Timed weak bisimilarity metrics).
The family of the timed weak bisimilarity metrics m^k : (T × T) → [0,1] is defined for all k ∈ ℕ by

m^k = 0                  if k = 0
m^k = sup_{h∈ℕ} m^{k,h}  if k > 0

while the functions m^{k,h} : (T × T) → [0,1] are defined for all k ∈ ℕ⁺ and h ∈ ℕ by

m^{k,h} = B_tick(m^{k−1})  if h = 0
m^{k,h} = B(m^{k,h−1})     if h > 0.

Then, we define m^∞ : (T × T) → [0,1] as m^∞ = sup_{k∈ℕ} m^k.

Note that any m^{k,h} is obtained from m^{k−1} by one application of the functional B_tick, in order to take into account the distance between terms introduced by the k-th tick-action, and h applications of the functional B, in order to lift such a distance to terms that take h untimed actions before being able to perform a tick-action. By taking sup_{h∈ℕ} m^{k,h} we consider an arbitrary number of untimed steps.

The pseudometric property of m^k is necessary to conclude that the tolerance between terms as given by m^k is a reasonable notion of behavioural distance.

Theorem 1. For any k ≥ 1, m^k is a 1-bounded pseudometric.

Finally, everything is in place to define our timed weak bisimilarity ≈^k_p with tolerance p ∈ [0,1] accumulated after k time units, for k ∈ ℕ ∪ {∞}.

Definition 7 (Timed weak bisimilarity with tolerance). Let t, t′ ∈ T, k ∈ ℕ and p ∈ [0,1]. We say that t and t′ are weakly bisimilar with a tolerance p, which accumulates in k timed actions, written t ≈^k_p t′, if and only if m^k(t, t′) = p. Then, we write t ≈^∞_p t′ if and only if m^∞(t, t′) = p.

Since the Kantorovich lifting K is monotone [29], it follows that both functionals B and B_tick are monotone. This implies that, for any k ≥ 1, (m^{k,h})_{h≥0} is a non-decreasing chain and, analogously, (m^k)_{k≥0} is also a non-decreasing chain, thus giving the following expected result, which says that the distance between terms grows when we consider a higher number of tick computation steps.

Proposition 1 (Tolerance monotonicity).
For all terms t, t′ ∈ T and k₁, k₂ ∈ ℕ⁺ with k₁ < k₂, t ≈^{k₁}_{p₁} t′ and t ≈^{k₂}_{p₂} t′ entail p₁ ≤ p₂.

We conclude this section by comparing our behavioural distance with the behavioural relations known in the literature. We recall that in [8] a family of relations ≃_p for untimed process calculi is defined such that t ≃_p t′ if and only if t and t′ weakly bisimulate each other with tolerance p. Of course, one can apply these relations also to timed process calculi, the effect being that timed actions are treated in exactly the same manner as untimed actions. The following result compares the behavioural metrics proposed in the present paper with those of [8], and with the classical notion of probabilistic weak bisimilarity [30], denoted ≈.

Proposition 2. Let t, t′ ∈ T and p ∈ [0,1]. Then,
– t ≈^∞_p t′ iff t ≃_p t′;
– t ≈^∞_0 t′ iff t ≈ t′.

3 A Simple Probabilistic Timed Calculus for IoT Systems

In this section, we propose a simple extension of Hennessy and Regan's timed process algebra TPL [16] to express IoT systems and cyber-physical attacks. The goal is to show that timed weak bisimilarity with tolerance is a suitable notion to estimate the impact of cyber-physical attacks on IoT systems. Let us start with some preliminary notation.

Notation 1. We use x, x_k for state variables, c, c_k for communication channels, z, z_k for communication variables, s, s_k for sensor devices, while o ranges over both channels and sensors. Values, ranged over by v, v′, belong to a finite set of admissible values V. We use u, u_k for both values and communication variables. Given a generic set of names N, we write V^N to denote the set of functions N → V assigning a value to each name in N. For m ∈ ℕ and n ∈ ℕ ∪ {∞}, we write m..n to denote an integer interval. As we will adopt a discrete notion of time, we will use integer intervals to denote time intervals.
State variables are associated with physical properties like temperature, pressure, etc. Sensor names are metavariables for sensor devices, such as thermometers and barometers. Notice that in cyber-physical systems state variables cannot be accessed directly: they can only be tested via one or more sensors.

Definition 8 (IoT system). Let X be a set of state variables and S be a set of sensors. Let range : X → 2^V be a total function returning the range of admissible values for any state variable x ∈ X. An IoT system consists of two components:
– a physical environment ξ = ⟨ξ_x, ξ_m⟩ where:
  • ξ_x ∈ V^X is the physical state of the system that associates a value to each state variable in X, such that ξ_x(x) ∈ range(x) for any x ∈ X,
  • ξ_m : V^X → S → D(V) is the measurement map that, given a physical state, returns a function associating to any sensor in S a discrete probability distribution over the set of possible sensed values;
– a logical (or cyber) component P that interacts with the sensors defined in ξ, and can communicate, via channels, with other cyber components.
We write ξ ⋊⋉ P to denote the resulting IoT system, and use M and N to range over IoT systems.

Let us now formalise the cyber component of an IoT system. Basically, we adapt Hennessy and Regan's timed process algebra TPL [16].

Definition 9 (Logics). Logical components of IoT systems are defined by the following grammar:

P, Q ::= nil | tick.P | P ∥ Q | ⌊pfx.P⌋Q | H⟨ũ⟩ | if (b) {P} else {Q} | P\c
pfx ::= o!v | o?(z)

The process tick.P sleeps for one time unit and then continues as P. We write P ∥ Q to denote the parallel composition of concurrent processes P and Q. The process ⌊pfx.P⌋Q denotes prefixing with timeout. We recall that o ranges over both channel and sensor names.
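Read as a datatype, the grammar of Definition 9 can be rendered as follows. This is only a reading aid: the constructor names are ours, and the recursion construct H⟨ũ⟩ and the conditional are omitted for brevity:

```python
from dataclasses import dataclass
from typing import Union

# Prefixes pfx ::= o!v | o?(z), where o is a channel or a sensor name.
@dataclass
class Out:
    o: str
    v: str

@dataclass
class In:
    o: str
    z: str

# P, Q ::= nil | tick.P | P || Q | |pfx.P|Q | P\c   (H<u~> and if-else omitted)
@dataclass
class Nil:
    pass

@dataclass
class Tick:           # tick.P
    P: "Proc"

@dataclass
class Par:            # P || Q
    P: "Proc"
    Q: "Proc"

@dataclass
class Timeout:        # |pfx.P|Q : prefix with timeout
    pfx: Union[Out, In]
    P: "Proc"
    Q: "Proc"

@dataclass
class Restrict:       # P \ c
    P: "Proc"
    c: str

Proc = Union[Nil, Tick, Par, Timeout, Restrict]

# |c!v.nil|tick.nil : try to send v on c; on timeout, sleep one time unit.
example = Timeout(Out("c", "v"), Nil(), Tick(Nil()))
assert isinstance(example.pfx, Out)
```

The same prefix type covers both channels and sensors, mirroring the role of o in the grammar.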
Thus, for instance, ⌊c!v.P⌋Q sends the value v on channel c and, after that, continues as P; otherwise, if no communication partner is available within one time unit, it evolves into Q. The process ⌊c?(z).P⌋Q is the obvious counterpart for channel reception. On the other hand, the process ⌊s?(z).P⌋Q reads the sensor s, according to the measurement map of the system, and, after that, continues as P. The process ⌊s!v.P⌋Q writes to the sensor s and, after that, continues as P; here, we wish to point out that this is a malicious activity, as controllers may only access sensors for reading sensed data. Thus, the construct ⌊s!v.P⌋Q serves to implement an integrity attack that attempts to synchronise with the controller of sensor s to provide a fake value v. In the following, we say that a process is honest if it never writes on sensors. The definition of honesty naturally lifts to IoT systems.

In processes of the form tick.Q and ⌊pfx.P⌋Q, the occurrence of Q is said to be time-guarded. Recursive processes H⟨ũ⟩ are defined via equations H(z₁, …, z_k) = P, where (i) the tuple z₁, …, z_k contains all the variables that appear free in P, and (ii) P contains only time-guarded occurrences of the process identifiers, such as H itself (to avoid Zeno behaviours). The two remaining constructs are standard; they model conditionals and channel restriction, respectively.

Finally, we define how to compose IoT systems. For simplicity, we compose two systems only if they have the same physical environment.

Definition 10 (System composition). Let M₁ = ξ ⋊⋉ P₁ and M₂ = ξ ⋊⋉ P₂ be two IoT systems, and Q be a process whose sensors are defined in the physical environment ξ.
We write:
– M₁ ∥ M₂ to denote ξ ⋊⋉ (P₁ ∥ P₂);
– M₁ ∥ Q to denote ξ ⋊⋉ (P₁ ∥ Q);
– M₁\c as an abbreviation for ξ ⋊⋉ (P₁\c).

We conclude this section with the following abbreviations that will be used in the rest of the paper.

Notation 2. We write P\{c₁, c₂, …, c_n}, or P\c̃, to mean P\c₁\c₂⋯\c_n. For simplicity, we sometimes abbreviate both H(i) and H⟨i⟩ with H_i. We write pfx.P as an abbreviation for the process defined via the equation H = ⌊pfx.P⌋H, where the process name H does not occur in P. We write tick^k.P as a shorthand for tick.tick.….tick.P, where the prefix tick appears k ≥ 0 consecutive times. We write Dead to denote a deadlocked IoT system that cannot perform any action.

3.1 Probabilistic labelled transition semantics

As said before, sensors serve to observe the evolution of the physical state of an IoT system. However, sensors are usually affected by an error/noise that we represent in our measurement maps by means of discrete probability distributions. For this reason, we equip our calculus with a probabilistic labelled transition system. In the following, the symbol ǫ ranges over distributions on physical environments, whereas π ranges over distributions on (logical) processes. Thus, ǫ ⋊⋉ π denotes the distribution over IoT systems defined by (ǫ ⋊⋉ π)(ξ ⋊⋉ P) = ǫ(ξ) · π(P). The symbol γ ranges over distributions on IoT systems.

In Table 1, we give a standard labelled transition system for logical components (timed processes), whereas in Table 2 we rely on the LTS of Table 1 to define a simple pLTS for IoT systems by lifting transition rules from processes to systems.

Table 1. Labelled transition system for processes:

(Write)    ⌊o!v.P⌋Q −o!v→ P
(Read)     ⌊o?(z).P⌋Q −o?(z)→ P
(Sync)     if P −o!v→ P′ and Q −o?(z)→ Q′, then P ∥ Q −τ→ P′ ∥ Q′{v/z}
(Par)      if P −λ→ P′ and λ ≠ tick, then P ∥ Q −λ→ P′ ∥ Q
(Res)      if P −λ→ P′ and λ ∉ {o!v, o?(z)}, then P\o −λ→ P′\o
(Rec)      if P{ṽ/z̃} −λ→ Q and H(z̃) = P, then H⟨ṽ⟩ −λ→ Q
(Then)     if ⟦b⟧ = true and P −λ→ P′, then if (b) {P} else {Q} −λ→ P′
(Else)     if ⟦b⟧ = false and Q −λ→ Q′, then if (b) {P} else {Q} −λ→ Q′
(TimeNil)  nil −tick→ nil
(Delay)    tick.P −tick→ P
(Timeout)  ⌊pfx.P⌋Q −tick→ Q
(TimePar)  if P −tick→ P′ and Q −tick→ Q′, then P ∥ Q −tick→ P′ ∥ Q′

In Table 1, the meta-variable λ ranges over labels in the set {τ, tick, o!v, o?(z)}. Rule (Sync) serves to model synchronisation and value passing on some name o (of a channel or a sensor): if o is a channel then we have standard point-to-point communication, whereas if o is a sensor then this rule models an integrity attack on sensors, as the controller is provided with a fake value v. The remaining rules are standard. The symmetric counterparts of rules (Sync) and (Par) are omitted.

According to Table 2, IoT systems may fire four possible actions, ranged over by α. These actions represent: internal activities (τ), the passage of time (tick), channel transmission (c!v) and channel reception (c?v). Rules (Snd) and (Rcv) model transmission and reception on a channel c with an external system, respectively. Rule (SensRead) models the reading of the value detected at a sensor s according to the current physical environment ξ = ⟨ξ_x, ξ_m⟩.
In particular, this rule says that if a process P in a system ξ ⋊⋉ P reads a sensor s defined in ξ, then it will get a value that may vary according to the probability distribution obtained by providing the state function ξ_x and the sensor s to the measurement map ξ_m. Rule (Tau) lifts internal actions from processes to systems. This includes communications on channels and malicious accesses to sensors' controllers.

Table 2. Probabilistic LTS for an IoT system ξ ⋊⋉ P, with ξ = ⟨ξ_x, ξ_m⟩:

(Snd)       if P −c!v→ P′, then ξ ⋊⋉ P −c!v→ ξ ⋊⋉ P′
(Rcv)       if P −c?(z)→ P′, then ξ ⋊⋉ P −c?v→ ξ ⋊⋉ P′{v/z}
(SensRead)  if P −s?(z)→ P′ and ξ_m(ξ_x)(s) = Σ_{i∈I} p_i · v_i, then ξ ⋊⋉ P −τ→ Σ_{i∈I} p_i · (ξ ⋊⋉ P′{v_i/z})
(Tau)       if P −τ→ P′, then ξ ⋊⋉ P −τ→ ξ ⋊⋉ P′
(Time)      if P −tick→ P′, ξ ⋊⋉ P −τ↛ and ξ′ ∈ next(ξ), then ξ ⋊⋉ P −tick→ ξ′ ⋊⋉ P′

According to Definition 10, rule (Tau) also models channel communication between two parallel IoT systems sharing the same physical environment. A second lifting occurs in rule (Time) for timed actions tick. Here, ξ′ denotes an admissible physical environment for the next time slot, nondeterministically chosen from the finite set next(⟨ξ_x, ξ_m⟩). This set is defined as {⟨ξ′_x, ξ_m⟩ : ξ′_x(x) ∈ range(x) for any x ∈ X}.³ As a consequence, the rules in Table 2 define an image-finite pLTS. For simplicity, we abstract from the physical process behind our IoT systems.

4 Cyber-physical attacks on sensor devices

In this section, we consider attacks tampering with sensors by eavesdropping on, and possibly modifying, the sensor measurements provided to the corresponding controllers. These attacks may affect both the integrity and the availability of the system under attack.
We do not represent (well-known) attacks on communication channels, as our focus is on attacks to physical devices and their consequent impact on the physical state. However, our technique can be easily generalised to deal with attacks on channels as well.

Definition 11 (Cyber-physical attack). A (pure) cyber-physical attack A is a process derivable from the grammar of Definition 9 such that:
– A writes on at least one sensor;
– A never uses communication channels.

In order to make security assessments on our IoT systems, we adapt a well-known approach called Generalized Non Deducibility on Composition (GNDC) [10]. Intuitively, an attack A affects an honest IoT system M if the execution of the composed system M ‖ A differs from that of the original system M in an observable manner. Basically, a cyber-physical attack can influence the system under attack in at least two different ways:

– The system M ‖ A might have non-genuine execution traces containing observables that cannot be reproduced by M; here the attack affects the integrity of the system behaviour (integrity attack).
– The system M might have execution traces containing observables that cannot be reproduced by the system under attack M ‖ A (because they are prevented by the attack); this is an attack against the availability of the system (DoS attack).

³ The finiteness follows from the finiteness of V, and hence of range(x), for any x ∈ X.

Now, everything is in place to provide a formal definition of system tolerance and system vulnerability with respect to a given attack. Intuitively, a system M tolerates an attack A if the presence of the attack does not affect the behaviour of M; on the other hand, M is vulnerable to A in a certain time interval if the attack has an impact on the behaviour of M in that time interval.

Definition 12 (Attack tolerance). Let M be an honest IoT system.
We say that M tolerates an attack A if M ‖ A ≈^∞_0 M.

Definition 13 (Attack vulnerability and impact). Let M be an honest IoT system. We say that M is vulnerable to an attack A in the time interval m..n with impact p ∈ [0, 1], for m ∈ ℕ⁺ and n ∈ ℕ⁺ ∪ {∞}, if m..n is the smallest time interval such that: (i) M ‖ A ≈^{m−1}_0 M, (ii) M ‖ A ≈^n_p M, (iii) M ‖ A ≈^∞_p M.⁴

Basically, the definition above says that if a system is vulnerable to an attack in the time interval m..n, then the perturbation introduced by the attack starts in the m-th time slot and reaches the maximum impact in the n-th time slot.

The following result says that both notions of tolerance and vulnerability are suitable for compositional reasoning. More precisely, we prove that they are both preserved by parallel composition and channel restriction. Actually, channel restriction may obviously make a system less vulnerable by hiding channels.

Theorem 2 (Compositionality). Let M1 = ξ ⋊⋉ P1 and M2 = ξ ⋊⋉ P2 be two honest IoT systems with the same physical environment ξ, A an arbitrary attack, and ~c a set of channels.
– If both M1 and M2 tolerate A then (M1 ‖ M2)\~c tolerates A.
– If M1 is vulnerable to A in the time interval m1..n1 with impact p1, and M2 is vulnerable to A in the time interval m2..n2 with impact p2, then M1 ‖ M2 is vulnerable to A in the time interval min(m1, m2)..max(n1, n2) with an impact p′ ≤ (p1 + p2 − p1·p2).
– If M1 is vulnerable to A in the interval m1..n1 with impact p1, then M1\~c is vulnerable to A in a time interval m′..n′ ⊆ m1..n1 with an impact p′ ≤ p1.

Note that if an attack A is tolerated by a system M and can interact with an honest process P, then the compound system M ‖ P may be vulnerable to A. However, if A does not write on the sensors of P then it is tolerated by M ‖ P as well.
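The bound in the second item of Theorem 2 admits a quick numerical sanity check. The sketch below uses illustrative probabilities and exact rationals (to avoid rounding), and verifies that p1 + p2 − p1·p2 coincides with 1 − (1 − p1)(1 − p2):

```python
from fractions import Fraction

def compound_bound(p1, p2):
    """Upper bound on the impact of an attack on M1 || M2, given impacts
    p1 and p2 on the two components (second item of Theorem 2)."""
    return p1 + p2 - p1 * p2

p1, p2 = Fraction(1, 4), Fraction(1, 3)            # illustrative impacts
# the attack misses both components with probability at least (1-p1)(1-p2)
assert compound_bound(p1, p2) == 1 - (1 - p1) * (1 - p2)
assert compound_bound(p1, p2) == Fraction(1, 2)
```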
The bound p′ ≤ (p1 + p2 − p1·p2) can be explained as follows.⁴ The likelihood that the attack does not impact on Mi is (1 − pi), for i ∈ {1, 2}. Thus, the likelihood that the attack impacts neither on M1 nor on M2 is at least (1 − p1)(1 − p2). Summarising, the likelihood that the attack impacts on at least one of the two systems M1 and M2 is at most 1 − (1 − p1)(1 − p2) = p1 + p2 − p1·p2.

⁴ By Proposition 1, at all time instants greater than n the impact remains p.

An easy corollary of Theorem 2 allows us to lift the notions of tolerance and vulnerability from an honest system M to the compound systems M ‖ P, for an honest process P.

Corollary 1. Let M be an honest system, A an attack, ~c a set of channels, and P an honest process that reads sensors defined in M but not those written by A.
– If M tolerates A then (M ‖ P)\~c tolerates A.
– If M is vulnerable to A in the interval m..n with impact p, then (M ‖ P)\~c is vulnerable to A in a time interval m′..n′ ⊆ m..n, with an impact p′ ≤ p.

5 Attacking a smart surveillance system: a case study

Consider an alarmed ambient consisting of three rooms, ri for i ∈ {1, 2, 3}, each of which is equipped with a sensor si to detect unauthorised accesses. The alarm goes off if at least one of the three sensors detects an intrusion. The logic of the system can be easily specified in our language as follows:

Sys     = (Mng ‖ Ctrl1 ‖ Ctrl2 ‖ Ctrl3)\{c1, c2, c3}
Mng     = c1?(z1).c2?(z2).c3?(z3). if (⋁_{i=1..3} zi = on) {alarm!on.tick.Check_k} else {tick.Mng}
Check_0 = Mng
Check_j = alarm!on.c1?(z1).c2?(z2).c3?(z3). if (⋁_{i=1..3} zi = on) {tick.Check_k} else {tick.Check_{j−1}}, for j > 0
Ctrl_i  = s_i?(z_i). if (z_i = presence) {c_i!on.tick.Ctrl_i} else {c_i!off.tick.Ctrl_i}, for i ∈ {1, 2, 3}.
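Under an illustrative re-encoding, the alarm discipline of Mng and Check_j can be exercised in a few lines of Python. Replacing the recursive process terms by a simple countdown is an assumption of this sketch, not part of the specification:

```python
def run_sys(readings, k):
    """Sketch of the alarm logic of Sys: the alarm fires when some controller
    reports an intrusion and keeps ringing for k further slots after the last
    detection (the countdown mirrors the Check_j recursion, illustratively).

    readings: list of per-slot triples (z1, z2, z3) with values 'on'/'off'
    returns:  list of booleans, one per slot: does the alarm ring in that slot?
    """
    alarm, countdown = [], 0
    for zs in readings:
        if any(z == "on" for z in zs):
            countdown = k + 1          # ring now, then behave as Check_k
        alarm.append(countdown > 0)
        countdown = max(0, countdown - 1)
    return alarm

# An intrusion signalled in slot 2 keeps the alarm on for slots 2..4 when k = 2:
trace = [("off", "off", "off"), ("on", "off", "off"), ("off", "off", "off"),
         ("off", "off", "off"), ("off", "off", "off")]
assert run_sys(trace, k=2) == [False, True, True, True, False]
```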
Intuitively, the process Sys is composed of three controllers, Ctrl_i, one for each sensor s_i, and a manager Mng that interacts with the controllers via private channels c_i. The process Mng fires an alarm if at least one of the controllers signals an intrusion. As usual in this kind of surveillance system, the alarm keeps going off for k instants of time after the last detected intrusion.

As regards the physical environment, the physical state ξx : {r1, r2, r3} → {presence, absence} is set to ξx(ri) = absence, for any i ∈ {1, 2, 3}. Furthermore, letting p+_i and p−_i be the probabilities of having false positives (erroneously detected intrusions) and false negatives (erroneously missed intrusions) at sensor s_i,⁵ respectively, for i ∈ {1, 2, 3}, the measurement function ξm is defined as follows:

ξm(ξx)(s_i) = (1 − p−_i)·presence + p−_i·absence, if ξx(r_i) = presence;
ξm(ξx)(s_i) = (1 − p+_i)·absence + p+_i·presence, otherwise.

Thus, the whole IoT system has the form ξ ⋊⋉ Sys, with ξ = ⟨ξx, ξm⟩.

We start our analysis by studying the impact of a simple cyber-physical attack that provides fake false positives to the controller of one of the sensors s_i. This attack affects the integrity of the system behaviour, as the system under attack will fire alarms without any physical intrusion.

⁵ These probabilities are usually very small; we assume them smaller than 1/2.

Example 1 (Introducing false positives). In this example, we provide an attack that tries to increase the number of false positives detected by the controller of some sensor s_i during a specific time interval m..n, with m, n ∈ ℕ and n ≥ m > 0. Intuitively, the attack waits for m − 1 time slots and then, during the time interval m..n, it provides the controller of sensor s_i with a fake intrusion signal.
Formally,

A_fp(i, m, n) = tick^{m−1}.B⟨i, n − m + 1⟩
B(i, j) = if (j = 0) {nil} else {⌊s_i!presence.tick.B⟨i, j − 1⟩⌋ B⟨i, j − 1⟩}.

In the following proposition, we use our metric to measure the perturbation introduced by the attack to the controller of a sensor s_i, by varying the time of observation of the system under attack.

Proposition 3. Let ξ be an arbitrary physical state for the systems M_i = ξ ⋊⋉ Ctrl_i, for i ∈ {1, 2, 3}. Then,
– M_i ‖ A_fp⟨i, m, n⟩ ≈^j_0 M_i, for j ∈ 1..m−1;
– M_i ‖ A_fp⟨i, m, n⟩ ≈^j_h M_i, with h = 1 − (p+_i)^{j−m+1}, for j ∈ m..n;
– M_i ‖ A_fp⟨i, m, n⟩ ≈^j_r M_i, with r = 1 − (p+_i)^{n−m+1}, for j > n or j = ∞.

By an application of Definition 13 we can measure the impact of the attack A_fp on the (sub)systems ξ ⋊⋉ Ctrl_i.

Corollary 2. The IoT systems ξ ⋊⋉ Ctrl_i are vulnerable to the attack A_fp⟨i, m, n⟩ in the time interval m..n with impact 1 − (p+_i)^{n−m+1}.

Note that the vulnerability window m..n coincides with the activity period of the attack A_fp. This means that the system under attack recovers its normal behaviour immediately after the termination of the attack. However, in general, an attack may impact the behaviour of the target system long after its termination. Note also that the attack A_fp⟨i, m, n⟩ has an impact not only on the controller Ctrl_i but also on the whole system ξ ⋊⋉ Sys. This is because the process Mng will surely fire the alarm, as it will receive at least one intrusion detection from Ctrl_i. However, by an application of Corollary 1 we can prove that the impact on the whole system will not get amplified.

Proposition 4 (Impact of the attack A_fp). The system ξ ⋊⋉ Sys is vulnerable to the attack A_fp⟨i, m, n⟩ in a time interval m′..n′ ⊆ m..n with impact p′ ≤ 1 − (p+_i)^{n−m+1}.
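The distances listed in Proposition 3 can be tabulated with a few lines of Python. In this sketch the parameter values are illustrative; the function simply evaluates 1 − (p_i^+)^{j−m+1}, capped at j = n, as a function of the observation time j:

```python
from fractions import Fraction

def impact_fp(p_plus, m, n, j):
    """Distance between M_i || A_fp(i,m,n) and M_i after j time slots,
    following Proposition 3. Intuition: the attack goes unnoticed only if
    every forced reading in the slots m..min(j, n) happens to coincide with
    a genuine false positive, each occurring with probability p_plus."""
    if j < m:
        return Fraction(0)           # attack not started yet: tolerated so far
    active = min(j, n) - m + 1       # number of time slots under attack so far
    return 1 - p_plus ** active      # 1 - (p_i^+)^(j-m+1), stable after j = n

p = Fraction(1, 100)                 # an illustrative false-positive rate
assert impact_fp(p, m=3, n=5, j=2) == 0              # before the m-th slot
assert impact_fp(p, m=3, n=5, j=4) == 1 - p**2       # growing within m..n
assert impact_fp(p, m=3, n=5, j=9) == 1 - p**3       # stable at 1 - p^(n-m+1)
```

As the last assertion shows, after the vulnerability window the distance stays at the maximum impact 1 − (p_i^+)^{n−m+1}, matching Corollary 2.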
Now, the reader may wonder what happens if we consider a complementary attack that provides fake false negatives to the controller of one of the sensors s_i. In this case, the attack affects the availability of the system behaviour, as the system will not fire the alarm in the presence of a real intrusion. This is because a real intrusion will be somehow "hidden" by the attack.

Example 2 (Introducing false negatives). The goal of the following attack is to increase the number of false negatives during the time interval m..n, with n ≥ m > 0. Formally, the attack is defined as follows:

A_fn(i, m, n) = tick^{m−1}.C⟨i, n − m + 1⟩
C(i, j) = if (j = 0) {nil} else {⌊s_i!absence.tick.C⟨i, j − 1⟩⌋ C⟨i, j − 1⟩}.

In the following proposition, we use our metric to measure the deviation introduced by the attack A_fn to the controller of a sensor s_i. Unsurprisingly, we get a result that is the symmetric version of Proposition 3.

Proposition 5. Let ξ be an arbitrary physical state for the systems M_i = ξ ⋊⋉ Ctrl_i, for i ∈ {1, 2, 3}. Then,
– M_i ‖ A_fn⟨i, m, n⟩ ≈^j_0 M_i, for j ∈ 1..m−1;
– M_i ‖ A_fn⟨i, m, n⟩ ≈^j_h M_i, with h = 1 − (p−_i)^{j−m+1}, for j ∈ m..n;
– M_i ‖ A_fn⟨i, m, n⟩ ≈^j_r M_i, with r = 1 − (p−_i)^{n−m+1}, for j > n or j = ∞.

Again, by an application of Definition 13 we can measure the impact of the attack A_fn on the (sub)systems ξ ⋊⋉ Ctrl_i.

Corollary 3. The IoT systems ξ ⋊⋉ Ctrl_i are vulnerable to the attack A_fn⟨i, m, n⟩ in the time interval m..n with impact 1 − (p−_i)^{n−m+1}.

As our timed metric is compositional, by an application of Corollary 1 we can estimate the impact of the attack A_fn on the whole system ξ ⋊⋉ Sys.

Proposition 6 (Impact of the attack A_fn). The system ξ ⋊⋉ Sys is vulnerable to the attack A_fn⟨i, m, n⟩ in a time interval m′..n′ ⊆ m..n with impact p′ ≤ 1 − (p−_i)^{n−m+1}.
6 Conclusions, related and future work

We have proposed a timed generalisation of the n-bisimulation metric [38], called timed bisimulation metric, obtained by defining two functionals over the complete lattice of the functions assigning a distance in [0, 1] to each pair of systems: the former deals with the distance accumulated when executing untimed steps, the latter with the distance introduced by timed actions.

We have used our timed bisimulation metrics to provide a formal and compositional notion of impact metric for cyber-physical attacks on IoT systems specified in a simple timed process calculus. In particular, we have focused on cyber-physical attacks targeting sensor devices (attacks on sensors are by far the most studied cyber-physical attacks [44]). We have used our timed weak bisimulation with tolerance to formalise the notions of attack tolerance and attack vulnerability with a given impact p. In particular, a system M is said to be vulnerable to an attack A in the time interval m..n with impact p if the perturbation introduced by A becomes observable in the m-th time slot and yields the maximum impact p in the n-th time slot. Here, we wish to stress that the vulnerability window m..n is quite informative. In practice, this interval says when an attack will produce observable effects on the system under attack. Thus, if n is finite we have an attack with temporary effects, otherwise we have an attack with permanent effects. Furthermore, if the attack is quick enough, and terminates well before the time instant m, then we have a stealthy attack that affects the system late enough to allow attack camouflages [14]. On the other hand, if at time m the attack is far from termination, then the IoT system under attack has good chances of undertaking countermeasures to stop the attack.
As a case study, we have estimated the impact of two cyber-physical attacks on sensors that introduce false positives and false negatives, respectively, into a simple surveillance system, affecting the integrity and the availability of the IoT system. Although our attacks are quite simple, the specification language and the corresponding metric semantics presented in the paper allow us to deal with smarter attacks, such as periodic attacks with constant or variable period of attack. Moreover, we can easily extend our threat model to recover (well-known) attacks on communication channels.

Related work. We are aware of a number of works using formal methods for CPS security, although they apply methods, and most of the time have goals, that are quite different from ours.

Burmester et al. [3] employed hybrid timed automata to give a threat model based on the traditional Byzantine fault model for crypto-security. However, as remarked in [36], cyber-physical attacks and faults have inherently distinct characteristics. In fact, unlike faults, cyber-physical attacks may be performed over a significant number of attack points and in a coordinated way.

In [40], Vigo presented an attack scenario that addresses some of the peculiarities of a cyber-physical adversary, and discussed how this scenario relates to other attack models popular in the security protocol literature. Then, in [41,42] Vigo et al. proposed an untimed calculus of broadcasting processes equipped with notions of failed and unwanted communication. They focus on DoS attacks without taking into consideration timing aspects or attack impact.

Bodei et al. [1,2] proposed an untimed process calculus, IoT-LySa, supporting a control flow analysis that safely approximates the abstract behaviour of IoT systems.
Essentially, they track how data spread from sensors to the logic of the network, and how physical data are manipulated.

Rocchetto and Tippenhauer [32] introduced a taxonomy of the diverse attacker models proposed for CPS security and outlined requirements for generalised attacker models; in [31], they then proposed an extended Dolev-Yao attacker model suitable for CPSs. In their approach, physical-layer interactions are modelled as abstract interactions between logical components to support reasoning on the physical-layer security of CPSs. This is done by introducing additional orthogonal channels. Time is not represented.

Nigam et al. [28] worked around the notion of Timed Dolev-Yao Intruder Models for Cyber-Physical Security Protocols by bounding the number of intruders required for the automated verification of such protocols. Following a tradition in security protocol analysis, they provide an answer to the question: how many intruders are enough for verification, and where should they be placed? Their notion of time is somehow different from ours, as they focus on the time a message needs to travel from one agent to another. The paper does not mention physical devices, such as sensors and/or actuators.

Finally, Lanotte et al. [23] defined a hybrid process calculus to model both CPSs and cyber-physical attacks; they defined a threat model for cyber-physical attacks to physical devices and provided proof methods to assess attack tolerance/vulnerability with respect to a timed trace semantics (no tolerance allowed).

Future work. Recent works [22,11,24,25,12] have shown that bisimulation metrics are suitable for compositional reasoning, as the distance between two complex systems can often be derived in terms of the distance between their components.
In this respect, Theorem 2 and Corollary 1 allow compositional reasoning when computing the impact of attacks on a target system, in terms of the impact on its sub-systems. We believe that this result can be generalised to estimate the impact of parallel attacks of the form A = A1 ‖ … ‖ Ak in terms of the impacts of each malicious module Ai.

As future work, we also intend to adopt our impact metric in more involved languages for cyber-physical systems and attacks, such as the language developed in [23], with an explicit representation of physical processes via differential equations or their discrete counterpart, difference equations.

Acknowledgements. We thank the anonymous reviewers for valuable comments. This work has been partially supported by the project "Dipartimenti di Eccellenza 2018-2022", funded by the Italian Ministry of Education, Universities and Research (MIUR), and by the Joint Project 2017 "Security Static Analysis for Android Things", funded by the University of Verona and JuliaSoft Srl.

References

1. C. Bodei, P. Degano, G. Ferrari, and L. Galletta. Where Do Your IoT Ingredients Come From? In COORDINATION, volume 9686 of LNCS, pages 35–50. Springer, 2016.
2. C. Bodei, P. Degano, G. Ferrari, and L. Galletta. Tracing where IoT data are collected and aggregated. Logical Methods in Computer Science, 13(3):1–38, 2017.
3. M. Burmester, E. Magkos, and V. Chrissikopoulos. Modeling security in cyber-physical systems. IJCIP, 5(3-4):118–126, 2012.
4. A. Cerone, M. Hennessy, and M. Merro. Modelling MAC-layer communications in wireless systems. Logical Methods in Computer Science, 11(1:18), 2015.
5. Y. Deng and W. Du. The Kantorovich Metric in Computer Science: A Brief Survey. In QAPL, volume 253(3) of ENTCS, pages 73–82, 2009.
6. Y. Deng, R. J. van Glabbeek, M. Hennessy, and C. Morgan.
Characterising testing preorders for finite probabilistic processes. Logical Methods in Computer Science, 4(4), 2008.
7. J. Desharnais, V. Gupta, R. Jagadeesan, and P. Panangaden. Metrics for Labelled Markov Processes. Theoretical Computer Science, 318(3):323–354, 2004.
8. J. Desharnais, R. Jagadeesan, V. Gupta, and P. Panangaden. The metric analogue of weak bisimulation for probabilistic processes. In LICS 2002, pages 413–422, 2002.
9. N. Falliere, L. Murchu, and E. Chien. W32.Stuxnet Dossier, 2011.
10. R. Focardi and F. Martinelli. A Uniform Approach for the Definition of Security Properties. In FM, volume 1708 of LNCS, pages 794–813. Springer, 1999.
11. D. Gebler, K. G. Larsen, and S. Tini. Compositional Bisimulation Metric Reasoning with Probabilistic Process Calculi. Logical Methods in Computer Science, 12(4), 2016.
12. D. Gebler and S. Tini. SOS specifications for uniformly continuous operators. Journal of Computer and System Sciences, 92:113–151, 2018.
13. B. Genge, I. Kiss, and P. Haller. A system dynamics approach for assessing the impact of cyber attacks on critical infrastructures. Int. J. Critical Infrastructure Protection, 10:3–17, 2015.
14. D. Gollmann, P. Gurikov, A. Isakov, M. Krotofil, J. Larsen, and A. Winnicki. Cyber-Physical Systems Security: Experimental Analysis of a Vinyl Acetate Monomer Plant. In ACM CCPS, pages 1–12, 2015.
15. H. Hansson and B. Jonsson. A logic for reasoning about time and reliability. Formal Aspects of Computing, 6(5):512–535, 1994.
16. M. Hennessy and T. Regan. A process algebra for timed systems. Information and Computation, 117(2):221–239, 1995.
17. H. Hermanns, A. Parma, R. Segala, B. Wachter, and L. Zhang. Probabilistic logical characterization. Information and Computation, 209(2):154–172, 2011.
18. ICS-CERT. Cyber-Attack Against Ukrainian Critical Infrastructure.
https://ics-cert.us-cert.gov/alerts/IR-ALERT-H-16-056-01.
19. R. M. Keller. Formal verification of parallel programs. Communications of the ACM, 19:371–384, 1976.
20. M. Krotofil and A. A. Cárdenas. Resilience of Process Control Systems to Cyber-Physical Attacks. In NordSec, volume 8208 of LNCS. Springer, 2013.
21. M. Krotofil, A. A. Cárdenas, J. Larsen, and D. Gollmann. Vulnerabilities of cyber-physical systems to stale data - Determining the optimal time to launch attacks. Int. J. Critical Infrastructure Protection, 7(4):213–232, 2014.
22. R. Lanotte and M. Merro. Semantic analysis of gossip protocols for wireless sensor networks. In CONCUR 2011, volume 6901 of LNCS, pages 156–170. Springer, 2011.
23. R. Lanotte, M. Merro, R. Muradore, and L. Viganò. A formal approach to cyber-physical attacks. In CSF, pages 436–450. IEEE, 2017.
24. R. Lanotte, M. Merro, and S. Tini. Compositional weak metrics for group key update. In MFCS, volume 42 of LIPIcs, 2017.
25. R. Lanotte, M. Merro, and S. Tini. Weak simulation quasimetric in a gossip scenario. In FORTE 2017, volume 10321 of LNCS, pages 139–155. Springer, 2017.
26. G. Loukas. Cyber-Physical Attacks - A Growing Invisible Threat. Butterworth-Heinemann, 2015.
27. M. Merro, J. Kleist, and U. Nestmann. Mobile Objects as Mobile Processes. Information and Computation, 177(2):195–241, 2002.
28. V. Nigam, C. Talcott, and A. A. Urquiza. Towards the Automated Verification of Cyber-Physical Security Protocols: Bounding the Number of Timed Intruders. In ESORICS, volume 9879 of LNCS, pages 450–470. Springer, 2016.
29. P. Panangaden. Labelled Markov Processes. Imperial College Press, 2009.
30. A. Philippou, I. Lee, and O. Sokolsky. Weak bisimulation for probabilistic systems. In CONCUR, volume 1877 of LNCS, pages 334–349, 2000.
31. M. Rocchetto and N. O. Tippenhauer.
CPDY: Extending the Dolev-Yao Attacker with Physical-Layer Interactions. In ICFEM, volume 10009 of LNCS, pages 175–192, 2016.
32. M. Rocchetto and N. O. Tippenhauer. On Attacker Models and Profiles for Cyber-Physical Systems. In ESORICS, volume 9879 of LNCS, pages 427–449. Springer, 2016.
33. R. Segala. Modeling and Verification of Randomized Distributed Real-Time Systems. PhD thesis, MIT, 1995.
34. J. Slay and M. Miller. Lessons Learned from the Maroochy Water Breach. In Critical Infrastructure Protection, volume 253 of IFIP, pages 73–82. Springer, 2007.
35. W. J. Stewart. Introduction to the Numerical Solution of Markov Chains. Princeton University Press, 1994.
36. A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson. A secure control framework for resource-limited adversaries. Automatica, 51:135–148, 2015.
37. U.S. Chemical Safety and Hazard Investigation Board, T2 Laboratories Inc. Reactive Chemical Explosion: Final Investigation Report. Report No. 2008-3-I-FL, 2009.
38. F. van Breugel. On behavioural pseudometrics and closure ordinals. Information Processing Letters, 112(19):715–718, 2012.
39. F. van Breugel and J. Worrell. A behavioural pseudometric for probabilistic transition systems. Theoretical Computer Science, 331(1):115–142, 2005.
40. R. Vigo. The Cyber-Physical Attacker. In SAFECOMP, volume 7613 of LNCS, pages 347–356. Springer, 2012.
41. R. Vigo. Availability by Design: A Complementary Approach to Denial-of-Service. PhD thesis, Danish Technical University, 2015.
42. R. Vigo, F. Nielson, and H. Riis Nielson. Broadcast, denial-of-service, and secure communication. In IFM, volume 7940 of LNCS, pages 412–427. Springer, 2013.
43. C. Villani. Optimal transport, old and new. Springer, 2008.
44. Y. Zacchia Lun, A. D'Innocenzo, I. Malavolta, and M. D. Di Benedetto. Cyber-Physical Systems Security: a Systematic Mapping Study.
CoRR, abs/1605.09641, 2016.

A Proofs

To prove Theorem 1 we need some preliminary results. The first of these results is Proposition 7 below, which states that the pseudometric property is preserved by the function K, namely K(d) is a pseudometric over D(T) whenever d is a pseudometric over T. Lemma 1 supports Proposition 7.

Lemma 1. Assume two functions d, d′ : T × T → [0, 1] with d(t, t′) ≤ d′(t, t′′) + d′(t′′, t′) for all terms t, t′, t′′ ∈ T. Then K(d)(Δ1, Δ2) ≤ K(d′)(Δ1, Δ3) + K(d′)(Δ3, Δ2) for all distributions Δ1, Δ2, Δ3 ∈ D(T).

Proof. Consider the function ω : T × T → [0, 1] defined for all terms t1, t2 ∈ T as

ω(t1, t2) = Σ_{t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3) · ω2(t3, t2) / Δ3(t3)

with ω1 ∈ Ω(Δ1, Δ3) one of the optimal matchings realising K(d′)(Δ1, Δ3), and ω2 ∈ Ω(Δ3, Δ2) one of the optimal matchings realising K(d′)(Δ3, Δ2). We will prove that:
1. ω is a matching in Ω(Δ1, Δ2), and
2. Σ_{t1,t2 ∈ T} ω(t1, t2) · d(t1, t2) ≤ K(d′)(Δ1, Δ3) + K(d′)(Δ3, Δ2).

By property 1 we infer K(d)(Δ1, Δ2) ≤ Σ_{t1,t2 ∈ T} ω(t1, t2) · d(t1, t2); then by property 2 we infer the thesis K(d)(Δ1, Δ2) ≤ K(d′)(Δ1, Δ3) + K(d′)(Δ3, Δ2).

To show (1), we prove that the left marginal of ω is Δ1 by

Σ_{t2 ∈ T} ω(t1, t2)
  = Σ_{t2 ∈ T} Σ_{t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3) · ω2(t3, t2) / Δ3(t3)
  = Σ_{t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3) · Δ3(t3) / Δ3(t3)    (by ω2 ∈ Ω(Δ3, Δ2))
  = Σ_{t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3)
  = Δ1(t1)                                                  (by ω1 ∈ Ω(Δ1, Δ3))

and we observe that the proof that the right marginal of ω is Δ2 is analogous.
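The composed matching ω and the marginal computation above can be replayed on a small concrete instance. In the Python sketch below, the chosen distributions and the dictionary-based encoding of matchings are illustrative:

```python
from fractions import Fraction

def compose_matchings(omega1, omega2, delta3):
    """Sketch of the matching built in the proof of Lemma 1:
    omega(t1, t2) = sum over t3 with delta3(t3) != 0 of
                    omega1(t1, t3) * omega2(t3, t2) / delta3(t3).
    Matchings are dicts {(t, t'): weight}; distributions are dicts {t: prob}."""
    omega = {}
    for (t1, t3a), w1 in omega1.items():
        for (t3b, t2), w2 in omega2.items():
            if t3a == t3b and delta3[t3a] != 0:
                key = (t1, t2)
                omega[key] = omega.get(key, Fraction(0)) + w1 * w2 / delta3[t3a]
    return omega

def marginals(omega):
    left, right = {}, {}
    for (t1, t2), w in omega.items():
        left[t1] = left.get(t1, Fraction(0)) + w
        right[t2] = right.get(t2, Fraction(0)) + w
    return left, right

# Delta1 = Delta2 = uniform on {a, b}, Delta3 = 1/4 a + 3/4 b (illustrative):
h = Fraction(1, 2)
delta3 = {"a": Fraction(1, 4), "b": Fraction(3, 4)}
omega1 = {("a", "a"): Fraction(1, 4), ("a", "b"): Fraction(1, 4), ("b", "b"): h}
omega2 = {("a", "a"): Fraction(1, 4), ("b", "a"): Fraction(1, 4), ("b", "b"): h}
omega = compose_matchings(omega1, omega2, delta3)
left, right = marginals(omega)
assert left == {"a": h, "b": h} and right == {"a": h, "b": h}
```

Here ω1 has marginals Δ1 and Δ3, ω2 has marginals Δ3 and Δ2, and the composed ω indeed has marginals Δ1 and Δ2, as property (1) requires.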
Then, we show (2) by

Σ_{t1,t2 ∈ T} ω(t1, t2) · d(t1, t2)
  = Σ_{t1,t2 ∈ T} Σ_{t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3) · ω2(t3, t2) / Δ3(t3) · d(t1, t2)
  ≤ Σ_{t1,t2 ∈ T, t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3) · ω2(t3, t2) / Δ3(t3) · d′(t1, t3)
    + Σ_{t1,t2 ∈ T, t3 ∈ T, Δ3(t3) ≠ 0} ω1(t1, t3) · ω2(t3, t2) / Δ3(t3) · d′(t3, t2)
  = Σ_{t1,t3 ∈ T} ω1(t1, t3) · Δ3(t3) / Δ3(t3) · d′(t1, t3) + Σ_{t2,t3 ∈ T} Δ3(t3) · ω2(t3, t2) / Δ3(t3) · d′(t3, t2)
  = Σ_{t1,t3 ∈ T} ω1(t1, t3) · d′(t1, t3) + Σ_{t2,t3 ∈ T} ω2(t3, t2) · d′(t3, t2)
  = K(d′)(Δ1, Δ3) + K(d′)(Δ3, Δ2)

where the inequality follows from the hypothesis, and the third-last equality follows by ω2 ∈ Ω(Δ3, Δ2) and ω1 ∈ Ω(Δ1, Δ3). ⊓⊔

Proposition 7. If d : T × T → [0, 1] is a 1-bounded pseudometric over T, then K(d) : D(T) × D(T) → [0, 1] is a 1-bounded pseudometric over D(T).

Proof. We have to prove that K(d) satisfies the three properties in Definition 2. To show K(d)(Δ, Δ) = 0 it is enough to take the matching ω ∈ Ω(Δ, Δ) defined by ω(t, t) = Δ(t), for all t ∈ T, and ω(t, t′) = 0, for all t, t′ ∈ T with t ≠ t′. In fact, we obtain K(d)(Δ, Δ) = 0 by K(d)(Δ, Δ) ≤ Σ_{t,t′ ∈ T} ω(t, t′) · d(t, t′) = Σ_{t ∈ T} Δ(t) · d(t, t) = 0, with the last equality from the property d(t, t) = 0 of the pseudometric d.

To show the symmetry property K(d)(Δ1, Δ2) = K(d)(Δ2, Δ1) it is enough to observe that, for any matching ω ∈ Ω(Δ1, Δ2), the function ω′ : T × T → [0, 1] defined for all processes t1, t2 ∈ T as ω′(t1, t2) = ω(t2, t1) is a matching in Ω(Δ2, Δ1).
In fact, by exploiting this property, given one of the optimal matchings ω ∈ Ω(Δ1, Δ2) realising K(d)(Δ1, Δ2), we get

K(d)(Δ1, Δ2) = Σ_{t1,t2 ∈ T} ω(t1, t2) · d(t1, t2) = Σ_{t2,t1 ∈ T} ω′(t2, t1) · d(t2, t1) ≥ K(d)(Δ2, Δ1)

with the second equality from the symmetry property d(t1, t2) = d(t2, t1) of the pseudometric d. Then, by exchanging the roles of Δ1 and Δ2 we get K(d)(Δ2, Δ1) ≥ K(d)(Δ1, Δ2), thus giving K(d)(Δ1, Δ2) = K(d)(Δ2, Δ1).

We conclude by observing that the triangular property K(d)(Δ1, Δ2) ≤ K(d)(Δ1, Δ3) + K(d)(Δ3, Δ2) is an instance of Lemma 1, which can be applied since the hypothesis d(t, t′) ≤ d(t, t′′) + d(t′′, t′) for all t, t′, t′′ ∈ T follows from the triangular property of the pseudometric d. ⊓⊔

Now we prove that, for all k ≥ 1, the function m_k is a fixed point of B.

Lemma 2. For all k ≥ 1, B(m_k) = m_k.

Proof. First we note that the structure ({d : T × T → [0, 1] | B_tick(m_{k−1}) ⊑ d}, ⊑), with d1 ⊑ d2 iff d1(t, t′) ≤ d2(t, t′) for all t, t′ ∈ T, is a complete lattice. Indeed, for each set D ⊆ [0, 1]^{T×T}, the supremum and infimum are defined as sup(D)(t, t′) = sup_{d ∈ D} d(t, t′) and inf(D)(t, t′) = inf_{d ∈ D} d(t, t′), for all t, t′ ∈ T. The infimum of the lattice is clearly B_tick(m_{k−1}). Being B monotone, by the Knaster-Tarski theorem B has a least fixed point. Since our pLTS is image-finite, and all transitions lead to distributions with finite support, with arguments analogous to those used in [38] it is possible to prove that B is continuous and its closure ordinal is ω, thus implying that its least fixed point is the supremum of the Kleene ascending chain B_tick(m_{k−1}) ⊑ B(B_tick(m_{k−1})) ⊑ B²(B_tick(m_{k−1})) ⊑ …, namely m_{k,0} ⊑ m_{k,1} ⊑ m_{k,2} ⊑ …
, and, by definition, the supremum of this chain is m_k. ⊓⊔

Now we exploit Lemma 2 to prove that, for arbitrary processes t, t′ ∈ T, process t′ is able to simulate transitions of the form t ==α̂==> Δ, besides those of the form t −α→ Δ, when α ≠ tick.

Lemma 3. Given two arbitrary terms t, t′ ∈ T, whenever t ==α̂==> Δ for α ≠ tick, we have:

inf_{t′ ==α̂==> Θ} K(m_k)(Δ + (1 − |Δ|)·Dead, Θ + (1 − |Θ|)·Dead) ≤ m_k(t, t′)

Proof. The thesis is immediate if m_k(t, t′) = 1. Consider the case m_k(t, t′) < 1. We reason by induction on the length n of t ==α̂==> Δ.

Base case n = 1. In this case t ==α̂==> Δ is directly derived from t −α̂→ Δ. There are two sub-cases: the first is α = τ and Δ = t̄ (the point distribution concentrated at t), the second is t −α→ Δ, with α an arbitrary action in A \ {tick}. In the former case, by the definition of the weak transition relation −τ̂→, we have that t′ −τ̂→ t̄′ and, consequently, t′ ==τ̂==> t̄′. The thesis holds for the distribution Θ = t̄′. More precisely, we have that K(m_k)(t̄ + (1 − |t̄|)·Dead, t̄′ + (1 − |t̄′|)·Dead) = K(m_k)(t̄, t̄′) = m_k(t, t′). In the latter case, the thesis follows directly from Definition 5 and Lemma 2. In detail, Definition 5 gives inf_{t′ ==α̂==> Θ} K(m_k)(Δ, Θ + (1 − |Θ|)·Dead) ≤ B(m_k)(t, t′) and Lemma 2 gives B(m_k)(t, t′) = m_k(t, t′).

Inductive step n > 1. The derivation t ==α̂==> Δ is obtained by t ==β̂1==> Δ′ and Δ′ −β̂2→ Δ, for some distribution Δ′ ∈ D(T) and actions β1, β2 ∈ A \ {tick}. We have two sub-cases: the first is β1 = τ and β2 = α, the other is β1 = α and β2 = τ. We consider the case β1 = τ and β2 = α; the other is analogous. The length of the derivation t ==β̂1==> Δ′ is n − 1.
Therefore, by the inductive hypothesis we have
$$\inf_{t' \xRightarrow{\hat\beta_1} \Theta'} \mathbf{K}(m_k)(\Delta' + (1-|\Delta'|)Dead,\ \Theta' + (1-|\Theta'|)Dead) \le m_k(t,t') \quad (1)$$
Notice that $m_k(t,t') < 1$ and Equation (1) ensure that the set $\{\Theta' \mid t' \xRightarrow{\hat\beta_1} \Theta'\}$ is not empty. Moreover, since $\beta_1 = \tau$, we have $|\Delta'| = 1$ and, for each transition $t' \xRightarrow{\hat\beta_1} \Theta'$, also $|\Theta'| = 1$. Therefore, the inductive hypothesis (1) instantiates to
$$\inf_{t' \xRightarrow{\hat\beta_1} \Theta'} \mathbf{K}(m_k)(\Delta',\Theta') \le m_k(t,t') \quad (2)$$
The sub-distribution $\Delta'$ is of the form $\Delta' = \sum_{i \in I} p_i \cdot \delta(t_i)$ for suitable processes $t_i$ and, by the definition of the transition relation $\xrightarrow{\hat\beta_2}$, the transition $\Delta' \xrightarrow{\hat\beta_2} \Delta$ is derived from a $\beta_2$-transition performed by some of the processes $t_i$; namely, $I$ is partitioned into sets $I_1 \cup I_2$ such that: (i) for all $i \in I_1$ we have $t_i \xrightarrow{\beta_2} \Delta_i$ for suitable distributions $\Delta_i$; (ii) for each $i \in I_2$ we have $t_i \not\xrightarrow{\beta_2}$; and (iii) $\Delta = \sum_{i \in I_1} p_i \cdot \Delta_i$.

Let us fix an arbitrary transition $t' \xRightarrow{\hat\beta_1} \Theta'$ (we argued above that at least one exists). The sub-distribution $\Theta'$ is of the form $\Theta' = \sum_{j \in J} q_j \cdot \delta(t'_j)$, for suitable processes $t'_j$. Then $J$ can be partitioned into sets $J_1 \cup J_2$ such that for all $j \in J_1$ we have $t'_j \xRightarrow{\hat\beta_2} \Theta_j$ for suitable distributions $\Theta_j$, and for each $j \in J_2$ we have $t'_j \not\xRightarrow{\hat\beta_2}$. If $J_1 \ne \emptyset$ this gives $\Theta' \xRightarrow{\hat\beta_2} \Theta$ with $\Theta = \sum_{j \in J_1} q_j \cdot \Theta_j$. Since we had $t' \xRightarrow{\hat\beta_1} \Theta'$, we can conclude $t' \xRightarrow{\hat\alpha} \Theta$. Notice that we are sure that there exists some $\Theta'$ with $t' \xRightarrow{\hat\beta_1} \Theta'$ for which $J_1 \ne \emptyset$. Indeed, if for all $\Theta'$ with $t' \xRightarrow{\hat\beta_1} \Theta'$ we had $J_1 = \emptyset$, this would give $t' \not\xRightarrow{\hat\alpha}$, yielding $B(m_k)(t,t') = 1$ and contradicting $B(m_k)(t,t') = m_k(t,t') < 1$.
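The passage from $\Delta'$ to $\Delta$ above can be illustrated on a toy instance (a hedged sketch; all state names and probabilities below are invented, and `Dead` plays the role of the paper's deadlocked state). Only the states in $I_1$ contribute successors, so $\Delta$ is in general a sub-distribution, and the missing mass $1-|\Delta|$ is what the constructions $\Delta + (1-|\Delta|)Dead$ re-assign to $Dead$:

```python
# Toy illustration: lifting beta2-transitions from states to the
# sub-distribution Delta' = sum_i p_i * delta(t_i). States in I2 have
# no beta2-transition, so their mass is lost and later re-assigned to Dead.
DEAD = "Dead"

p = {"t0": 0.7, "t1": 0.3}           # Delta' (hypothetical probabilities)
succ = {"t0": {"x": 0.5, "y": 0.5}}  # t0 --beta2--> Delta_0; t1 cannot move

# Delta = sum over i in I1 of p_i * Delta_i  (here I1 = {"t0"}, I2 = {"t1"})
delta = {}
for t, pt in p.items():
    for s, q in succ.get(t, {}).items():
        delta[s] = delta.get(s, 0.0) + pt * q

mass = sum(delta.values())            # |Delta| = 0.7 < 1
padded = dict(delta)                  # Delta + (1 - |Delta|) Dead
padded[DEAD] = padded.get(DEAD, 0.0) + (1.0 - mass)

assert abs(mass - 0.7) < 1e-12
assert abs(sum(padded.values()) - 1.0) < 1e-12
print(padded)
```

The padding step is exactly why the Kantorovich lifting in the lemma is always applied to full distributions, even when the weak transitions produce sub-distributions.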
We remark that in all cases where $J_1 \ne \emptyset$, the weak transition $t' \xRightarrow{\hat\alpha} \Theta$ is obtained by first choosing one of the available weak transitions labelled $\hat\beta_1$ from $t'$, namely $t' \xRightarrow{\hat\beta_1} \Theta'$, and then choosing one of the available weak transitions labelled $\hat\beta_2$ from $t'_j$, namely $t'_j \xRightarrow{\hat\beta_2} \Theta_j$, for all $j \in J_1$.

For the transition $t' \xRightarrow{\hat\beta_1} \Theta'$ fixed above, let $\omega$ be one of the optimal matchings realising $\mathbf{K}(m_k)(\Delta',\Theta')$. We can rewrite the distributions $\Delta'$ and $\Theta'$ as $\Delta' = \sum_{i \in I, j \in J} \omega(t_i,t'_j) \cdot \delta(t_i)$ and $\Theta' = \sum_{i \in I, j \in J} \omega(t_i,t'_j) \cdot \delta(t'_j)$. For all $i \in I_1$ and $j \in J$, define $\Delta_{i,j} = \Delta_i$. We can then rewrite $\Delta$ as $\Delta = \sum_{i \in I_1, j \in J} \omega(t_i,t'_j) \cdot \Delta_{i,j}$. Analogously, for each $j \in J_1$ and $i \in I$, we note that the transition $q_j\,\delta(t'_j) \xRightarrow{\hat\beta_2} q_j \cdot \Theta_j$ can always be split into $\sum_{i \in I} \omega(t_i,t'_j)\,\delta(t'_j) \xRightarrow{\hat\beta_2} \sum_{i \in I} \omega(t_i,t'_j) \cdot \Theta_{i,j}$, so that we can rewrite $\Theta_j$ as $\Theta_j = \sum_{i \in I} \omega(t_i,t'_j) \cdot \Theta_{i,j}$ and $\Theta$ as $\Theta = \sum_{i \in I, j \in J_1} \omega(t_i,t'_j) \cdot \Theta_{i,j}$. Then we note that, for all $i \in I_1$ and $j \in J_1$, the transitions $t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}$ ensure that
$$\inf_{t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}} \mathbf{K}(m_k)(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead) \le m_k(t_i,t'_j) \quad (3)$$
Indeed, by the definition of $B$, whenever $t_i \xrightarrow{\beta_2} \Delta_i = \Delta_{i,j}$ we have
$$\inf_{t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}} \mathbf{K}(m_k)(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead) \le B(m_k)(t_i,t'_j)$$
Then, since $m_k$ is a fixed point of $B$, we have $B(m_k)(t_i,t'_j) = m_k(t_i,t'_j)$ and Equation (3) follows.

Consider any $j \in J_1$ and $i \in I_1$. By Equation (3) and $B(m_k)(t_i,t'_j) = m_k(t_i,t'_j)$, we infer that if $m_k(t_i,t'_j) < 1$ then the set of weak transitions labelled $\hat\beta_2$ from $t'_j$ cannot be empty. For any transition $t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}$, let $\omega_{i,j}$ be one of the optimal matchings realising $\mathbf{K}(m_k)(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead)$.
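The rewriting of $\Delta'$ and $\Theta'$ through the matching $\omega$ used above is just the statement that the two marginals of $\omega$ are $\Delta'$ and $\Theta'$. A toy numerical check (a hedged sketch; the process names and weights are invented, and the matching shown is a valid one, not claimed optimal):

```python
# Toy check: a matching omega in Omega(Delta', Theta') has Delta' and Theta'
# as its left and right marginals, so Delta' = sum_{i,j} omega(t_i,u_j)*delta(t_i)
# and Theta' = sum_{i,j} omega(t_i,u_j)*delta(u_j). All data hypothetical.
delta_p = {"t0": 0.6, "t1": 0.4}     # Delta'
theta_p = {"u0": 0.5, "u1": 0.5}     # Theta'

# One valid matching: its rows sum to Delta', its columns to Theta'.
omega = {("t0", "u0"): 0.3, ("t0", "u1"): 0.3,
         ("t1", "u0"): 0.2, ("t1", "u1"): 0.2}

left = {t: sum(w for (t2, _), w in omega.items() if t2 == t) for t in delta_p}
right = {u: sum(w for (_, u2), w in omega.items() if u2 == u) for u in theta_p}

assert all(abs(left[t] - delta_p[t]) < 1e-12 for t in delta_p)
assert all(abs(right[u] - theta_p[u]) < 1e-12 for u in theta_p)
print(left, right)
```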
Define $\omega' \colon \mathcal{T} \times \mathcal{T} \to [0,1]$ as the function such that, for arbitrary processes $u,v \in \mathcal{T}$:
$$\omega'(u,v) = \begin{cases}
\displaystyle\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v) & \text{if } u \ne Dead \ne v \\[2ex]
\displaystyle\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u) & \text{if } u \ne Dead = v \\[2ex]
\displaystyle\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(v) & \text{if } u = Dead \ne v \\[2ex]
\displaystyle\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(v) + \sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j) & \text{if } u = Dead = v
\end{cases}$$
To infer the proof obligation
$$\inf_{t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m_k)(\Delta + (1-|\Delta|)Dead,\ \Theta + (1-|\Theta|)Dead) \le m_k(t,t') \quad (4)$$
it is now enough to show that:

1. the function $\omega'$ is a matching in $\Omega(\Delta + (1-|\Delta|)Dead,\ \Theta + (1-|\Theta|)Dead)$;
2. $$\inf_{\substack{t' \xRightarrow{\hat\beta_1} \Theta' = \sum_{j \in J_1 \cup J_2} q_j\,\delta(t'_j) \\ t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}}} \ \sum_{u,v \in \mathcal{T}} \omega'(u,v) \cdot m_k(u,v) \le m_k(t,t') \quad (5)$$

To show property 1 we prove that the left marginal of $\omega'$ is $\Delta + (1-|\Delta|)Dead$. The proof that the right marginal is $\Theta + (1-|\Theta|)Dead$ is analogous.
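Before going through the general computation, the marginal claim of property 1 can be checked on a tiny instance. The sketch below is hedged toy data, not the paper's construction: index sets $I_1,I_2,J_1,J_2$ are singletons, and all weights and target distributions are invented so that the hypotheses hold.

```python
# Toy instance of omega' (four-case definition) and its left marginal.
# All numbers are hypothetical; "Dead" absorbs the mass of sub-distributions.
DEAD = "Dead"

# t0 can do beta2, t1 cannot; u0 can weakly do beta2, u1 cannot.
I1, I2, J1, J2 = ["t0"], ["t1"], ["u0"], ["u1"]

# Matching omega between Delta' (0.6, 0.4) and Theta' (0.5, 0.5).
omega = {("t0", "u0"): 0.3, ("t0", "u1"): 0.3,
         ("t1", "u0"): 0.2, ("t1", "u1"): 0.2}

Delta_i = {"t0": {"x": 1.0}}                    # Delta_{i,j} = Delta_i for all j
Theta_ij = {("t1", "u0"): {"y": 1.0}}           # for i in I2, j in J1
omega_ij = {("t0", "u0"): {("x", "x"): 0.5,     # matching for Delta_{0,0} vs
                           ("x", DEAD): 0.5}}   # Theta_{0,0} + 0.5*Dead

states = ["x", "y", DEAD]

def w_prime(u, v):
    # The four cases of the definition of omega'.
    s = sum(omega[i, j] * omega_ij[i, j].get((u, v), 0.0)
            for i in I1 for j in J1)
    if u != DEAD and v == DEAD:
        s += sum(omega[i, j] * Delta_i[i].get(u, 0.0) for i in I1 for j in J2)
    if u == DEAD and v != DEAD:
        s += sum(omega[i, j] * Theta_ij[i, j].get(v, 0.0) for i in I2 for j in J1)
    if u == DEAD and v == DEAD:
        s += sum(omega[i, j] * Delta_i[i].get(DEAD, 0.0) for i in I1 for j in J2)
        s += sum(omega[i, j] * Theta_ij[i, j].get(DEAD, 0.0) for i in I2 for j in J1)
        s += sum(omega[i, j] for i in I2 for j in J2)
    return s

# Delta = sum_{i in I1} p_i * Delta_i = 0.6 * {x: 1}; pad to mass 1 with Dead.
expected_left = {"x": 0.6, "y": 0.0, DEAD: 0.4}
left = {u: sum(w_prime(u, v) for v in states) for u in states}
assert all(abs(left[u] - expected_left[u]) < 1e-12 for u in states)
print(left)
```

On this instance the left marginal of $\omega'$ is indeed $\Delta + (1-|\Delta|)Dead$, which is what the computation below establishes in general.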
For any process $u \ne Dead$ we have:
$$\begin{aligned}
\sum_{v \in \mathcal{T}} \omega'(u,v) &= \sum_{v \ne Dead}\ \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v) + \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,Dead) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u) \\
&= \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j) \sum_{v \in \mathcal{T}} \omega_{i,j}(u,v) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u) \\
&= \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\Delta_{i,j}(u) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u) \\
&= \sum_{i \in I_1, j \in J} \omega(t_i,t'_j)\,\Delta_{i,j}(u) = \sum_{i \in I_1} p_i\,\Delta_i(u) = \Delta(u) = (\Delta + (1-|\Delta|)Dead)(u)
\end{aligned}$$
with the third equality from the fact that $\omega_{i,j}$ is a matching in $\Omega(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead)$, the fourth equality by $J = J_1 \cup J_2$, and the fifth equality by $\sum_{j \in J} \omega(t_i,t'_j) = p_i$ and $\Delta_{i,j} = \Delta_i$. Consider now $Dead$. We have:
$$\begin{aligned}
\sum_{v \in \mathcal{T}} \omega'(Dead,v) = {} & \sum_{v \ne Dead}\ \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(Dead,v) + \sum_{v \ne Dead}\ \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(v) \\
& + \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(Dead,Dead) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead) \\
& + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(Dead) + \sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j) \\
= {} & \sum_{v \in \mathcal{T}}\ \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(Dead,v) + \sum_{v \in \mathcal{T}}\ \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(v) \\
& + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead) + \sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j) \\
= {} & \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead) + \sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j) \\
= {} & \sum_{i \in I_1, j \in J} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead) + \sum_{i \in I_2, j \in J} \omega(t_i,t'_j) \\
= {} & \sum_{i \in I_1} p_i\,\Delta_i(Dead) + \sum_{i \in I_2} p_i = (\Delta + (1-|\Delta|)Dead)(Dead)
\end{aligned}$$
where the third equality follows from the fact that $\omega_{i,j}$ is a matching in $\Omega(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead)$ and the fact that $\Theta_{i,j} + (1-|\Theta_{i,j}|)Dead$ is a distribution, the fourth equality by $J = J_1 \cup J_2$, the fifth equality by $\sum_{j \in J} \omega(t_i,t'_j) = p_i$ and $\Delta_{i,j} = \Delta_i$, and the last
equality follows from $\sum_{i \in I_1, j \in J} \omega(t_i,t'_j) = \sum_{i \in I_1} p_i = |\Delta|$. Summarising, for all processes $u \in \mathcal{T}$ we have proved that $\sum_{v \in \mathcal{T}} \omega'(u,v) = (\Delta + (1-|\Delta|)Dead)(u)$, thus confirming that the left marginal of $\omega'$ is $\Delta + (1-|\Delta|)Dead$.

To prove property 2, by looking at the definition of $\omega'$ given above we get that $\sum_{u,v \in \mathcal{T}} \omega'(u,v) \cdot m_k(u,v)$ is the sum of the following values:
- $\sum_{u \ne Dead \ne v}\ \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v)\,m_k(u,v)$
- $\sum_{u \ne Dead} \big( \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,Dead)\,m_k(u,Dead) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u)\,m_k(u,Dead) \big)$
- $\sum_{v \ne Dead} \big( \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(Dead,v)\,m_k(Dead,v) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(v)\,m_k(Dead,v) \big)$
- $\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(Dead,Dead)\,m_k(Dead,Dead) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead)\,m_k(Dead,Dead) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(Dead)\,m_k(Dead,Dead) + \sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j)\,m_k(Dead,Dead)$.

By moving the first summand of the second, third and fourth items into the first item, we rewrite this sum as the sum of the following values:
- $\sum_{u,v \in \mathcal{T}}\ \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\omega_{i,j}(u,v)\,m_k(u,v)$
- $\sum_{u \ne Dead}\ \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(u)\,m_k(u,Dead)$
- $\sum_{v \ne Dead}\ \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(v)\,m_k(Dead,v)$
- $\sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,\Delta_{i,j}(Dead)\,m_k(Dead,Dead) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,\Theta_{i,j}(Dead)\,m_k(Dead,Dead) + \sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j)\,m_k(Dead,Dead)$.

Since the function $\omega_{i,j}$ was defined as one of the optimal matchings realising $\mathbf{K}(m_k)(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead)$, the first item can be rewritten as $\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,\mathbf{K}(m_k)(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead)$.
From Equation (3) we get $\inf_{t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}} \mathbf{K}(m_k)(\Delta_{i,j},\ \Theta_{i,j} + (1-|\Theta_{i,j}|)Dead) \le m_k(t_i,t'_j)$. Hence, the infimum over all $t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}$ of the first item is less than or equal to $\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j) \cdot m_k(t_i,t'_j)$. The second item is clearly less than or equal to $\sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)$. The third item is clearly less than or equal to $\sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)$. Finally, the last item is $0$, since $m_k(Dead,Dead) = 0$. Namely, the infimum over all $t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}$ of $\sum_{u,v \in \mathcal{T}} \omega'(u,v) \cdot m_k(u,v)$ is bounded by the sum of the following three values:
- $\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,m_k(t_i,t'_j)$
- $\sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)$
- $\sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)$.

Formally:
$$\inf_{t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}}\ \sum_{u,v \in \mathcal{T}} \omega'(u,v) \cdot m_k(u,v) \le \sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,m_k(t_i,t'_j) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j) \quad (6)$$
Then, since $\mathbf{K}(m_k)(\Delta',\Theta')$ is the sum of the following values:
- $\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j)\,m_k(t_i,t'_j)$
- $\sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)\,m_k(t_i,t'_j) = \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j)$ (since $t_i \xrightarrow{\beta_2}$ and $t'_j \not\xRightarrow{\hat\beta_2}$ give $m_k(t_i,t'_j) = 1$)
- $\sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)\,m_k(t_i,t'_j) = \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)$ (since $t'_j \xrightarrow{\beta_2}$ and $t_i \not\xRightarrow{\hat\beta_2}$ give $m_k(t_i,t'_j) = 1$)
- $\sum_{i \in I_2, j \in J_2} \omega(t_i,t'_j)\,m_k(t_i,t'_j)$

we infer that the right-hand side of Equation (6), i.e. $\sum_{i \in I_1, j \in J_1} \omega(t_i,t'_j) \cdot m_k(t_i,t'_j) + \sum_{i \in I_1, j \in J_2} \omega(t_i,t'_j) + \sum_{i \in I_2, j \in J_1} \omega(t_i,t'_j)$, is less than or equal to $\mathbf{K}(m_k)(\Delta',\Theta')$.
Together with Equation (6) this gives
$$\inf_{t'_j \xRightarrow{\hat\beta_2} \Theta_{i,j}}\ \sum_{u,v \in \mathcal{T}} \omega'(u,v) \cdot m_k(u,v) \le \mathbf{K}(m_k)(\Delta',\Theta') \quad (7)$$
which, together with Equation (2), gives Equation (5) and concludes the proof. ⊓⊔

We are now ready to prove Theorem 1.

Proof (of Theorem 1). We have to prove that $m_k$ satisfies the three properties in Definition 2. The properties $m_k(t,t) = 0$ and $m_k(t,t') = m_k(t',t)$ for all $t,t' \in \mathcal{T}$ are immediate. The interesting case is the triangular property $m_k(t,t') \le m_k(t,t'') + m_k(t'',t')$ for all $t,t',t'' \in \mathcal{T}$. To this purpose, let us define the function $m \colon \mathcal{T} \times \mathcal{T} \to [0,1]$ such that
$$m(t,t') = \min\Big( m_k(t,t'),\ \inf_{t'' \in \mathcal{T}} \big( m_k(t,t'') + m_k(t'',t') \big) \Big)$$
We will prove that $m = m_k$. By the definition of $m$, this gives $m_k(t,t') \le m_k(t,t'') + m_k(t'',t')$ for all $t'' \in \mathcal{T}$, thus confirming that the triangular property also holds for $m_k$.

In order to prove $m = m_k$, we first observe that the relation $m \sqsubseteq m_k$ follows immediately from the definition of $m$. It remains to prove $m_k \sqsubseteq m$. To this purpose we prove that: (i) $m_k$ is the least prefixed point of the functional $B$ on the complete lattice $(\{d \colon \mathcal{T} \times \mathcal{T} \to [0,1] \mid B_{tick}(m_{k-1}) \sqsubseteq d\}, \sqsubseteq)$, and (ii) $m$ is a prefixed point of the same functional on the same lattice.

Let us start with property (i). By Lemma 2, $m_k$ is the least fixed point of the functional $B$, which is monotone and continuous in the lattice; this coincides with the least prefixed point. Let us consider now (ii). We have to prove $B(m) \sqsubseteq m$, namely that, whenever $m(t,t') < 1$, then for all $\alpha \ne tick$ we have
$$\forall\, t \xrightarrow{\alpha} \Delta .\ \inf_{t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m)(\Delta,\ \Theta + (1-|\Theta|)Dead) \le m(t,t') \quad (8)$$
To prove Equation (8) we distinguish two cases, namely $m(t,t') = m_k(t,t')$ and $m(t,t') = \inf_{t'' \in \mathcal{T}} (m_k(t,t'') + m_k(t'',t'))$. Assume first $m(t,t') = m_k(t,t')$.
In this case, since $m_k$ is the least fixed point of $B$, $t \xrightarrow{\alpha} \Delta$ implies that
$$\inf_{t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m_k)(\Delta,\ \Theta + (1-|\Theta|)Dead) \le B(m_k)(t,t') = m_k(t,t') = m(t,t')$$
Since $\mathbf{K}$ is monotone and $m \sqsubseteq m_k$, we infer
$$\inf_{t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m)(\Delta,\ \Theta + (1-|\Theta|)Dead) \le m(t,t')$$
thus giving Equation (8).

Assume now $m(t,t') = \inf_{t'' \in \mathcal{T}} (m_k(t,t'') + m_k(t'',t'))$. Since $m(t,t') < 1$, there exist terms $t'' \in \mathcal{T}$ with $m_k(t,t'') + m_k(t'',t') < 1$, thus implying both $m_k(t,t'') < 1$ and $m_k(t'',t') < 1$. By Lemma 3, from $m_k(t,t'') < 1$ and $t \xrightarrow{\alpha} \Delta$ we infer
$$\inf_{t'' \xRightarrow{\hat\alpha} \Phi} \mathbf{K}(m_k)(\Delta,\ \Phi + (1-|\Phi|)Dead) \le m_k(t,t'')$$
Again by Lemma 3, from $m_k(t'',t') < 1$, for all $t'' \xRightarrow{\hat\alpha} \Phi$ we have
$$\inf_{t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m_k)(\Phi + (1-|\Phi|)Dead,\ \Theta + (1-|\Theta|)Dead) \le m_k(t'',t')$$
By the definition of $m$ and Lemma 1 we have $\mathbf{K}(m_k)(\Delta,\ \Phi + (1-|\Phi|)Dead) + \mathbf{K}(m_k)(\Phi + (1-|\Phi|)Dead,\ \Theta + (1-|\Theta|)Dead) \ge \mathbf{K}(m)(\Delta,\ \Theta + (1-|\Theta|)Dead)$. We derive
$$\inf_{t'' \xRightarrow{\hat\alpha} \Phi,\ t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m)(\Delta,\ \Theta + (1-|\Theta|)Dead) \le m_k(t,t'') + m_k(t'',t')$$
and, by the definition of infimum,
$$\inf_{t'' \xRightarrow{\hat\alpha} \Phi,\ t' \xRightarrow{\hat\alpha} \Theta} \mathbf{K}(m)(\Delta,\ \Theta + (1-|\Theta|)Dead) \le m(t,t')$$
which gives Equation (8) and concludes the proof. ⊓⊔

We now prove Proposition 2.

Proof (of Proposition 2). We prove the first item; the second item then follows from the first and the result that $t \simeq_0 t'$ iff $t \approx t'$, given in [8]. First we recall that $t \simeq_p t'$ iff $m(t,t') = p$, where $m$ is the least fixed point (and also least prefixed point) in the lattice $([0,1]^{\mathcal{T}\times\mathcal{T}}, \sqsubseteq)$ of a functional $B'$ such that $B'(d)(t,t') = \max(B(d)(t,t'),\ B_{tick}(d)(t,t'))$ for all $t,t' \in \mathcal{T}$ and $d \in [0,1]^{\mathcal{T}\times\mathcal{T}}$. Therefore, we have to prove that $m_\infty = m$. Let us start with $m_\infty \sqsubseteq m$.
Since $m_\infty$ is the supremum of all $m_k$, it is enough to show $m_k \sqsubseteq m$ for all $k \in \mathbb{N}$. This property can be shown by induction over $k$. The base case is immediate, since $m_0 = \mathbf{0}$. Consider the inductive step $k+1$. The function $m_{k+1}$ is obtained as $\sup_{n \to \infty} B^n(B_{tick}(m_k))$. Take any $n \in \mathbb{N}$. By $B' \ge B, B_{tick}$ we get $B^n(B_{tick}(m_k)) \sqsubseteq (B')^{n+1}(m_k)$ for all $n \in \mathbb{N}$. By the monotonicity of $B'$ and the inductive hypothesis $m_k \sqsubseteq m$, we get $(B')^{n+1}(m_k) \sqsubseteq (B')^{n+1}(m)$. Finally, since $m$ is a fixed point of $B'$, we infer $(B')^{n+1}(m) = m$. Summarising, $B^n(B_{tick}(m_k)) \sqsubseteq m$. By the arbitrariness of $n$ we infer $m_\infty \sqsubseteq m$.

Let us now show $m \sqsubseteq m_\infty$. Since $m$ is the least prefixed point of $B'$, it is enough to show that $m_\infty$ is a prefixed point of $B'$. We have both $m_\infty \sqsupseteq B(m_\infty)$ and $m_\infty \sqsupseteq B_{tick}(m_\infty)$, thus giving $m_\infty \sqsupseteq B'(m_\infty)$ and confirming that $m_\infty$ is a prefixed point of $B'$. ⊓⊔

Now we prove Theorem 2.

Proof (of Theorem 2). We prove the second item. The proof of the third item is analogous, and the first item is a consequence of the others. To prove the thesis, we can prove that for all $k \in \mathbb{N}$ we have
$$m_k(\xi \bowtie P_1 \parallel P_2 \parallel A,\ \xi \bowtie P_1 \parallel P_2) \le m_k(\xi \bowtie P_1 \parallel A,\ \xi \bowtie P_1) + m_k(\xi \bowtie P_2 \parallel A,\ \xi \bowtie P_2) - \big( m_k(\xi \bowtie P_1 \parallel A,\ \xi \bowtie P_1) \cdot m_k(\xi \bowtie P_2 \parallel A,\ \xi \bowtie P_2) \big)$$
Since $\xi \bowtie P_1 \parallel P_2 \parallel A$ can mimic all the behaviours of $\xi \bowtie P_1 \parallel P_2$, the distance $m_k(\xi \bowtie P_1 \parallel P_2 \parallel A,\ \xi \bowtie P_1 \parallel P_2)$ is determined by the behaviours of $\xi \bowtie P_1 \parallel P_2 \parallel A$ that are not mimicked by $\xi \bowtie P_1 \parallel P_2$.
Then, since $\xi \bowtie P_1 \parallel A \parallel P_2 \parallel A$ can mimic all the behaviours of $\xi \bowtie P_1 \parallel P_2 \parallel A$, we have that
$$m_k(\xi \bowtie P_1 \parallel P_2 \parallel A,\ \xi \bowtie P_1 \parallel P_2) \le m_k(\xi \bowtie P_1 \parallel A \parallel P_2 \parallel A,\ \xi \bowtie P_1 \parallel P_2)$$
thus implying that, to obtain the proof obligation, we can prove the stronger property
$$m_k(\xi \bowtie P_1 \parallel A \parallel P_2 \parallel A,\ \xi \bowtie P_1 \parallel P_2) \le m_k(\xi \bowtie P_1 \parallel A,\ \xi \bowtie P_1) + m_k(\xi \bowtie P_2 \parallel A,\ \xi \bowtie P_2) - \big( m_k(\xi \bowtie P_1 \parallel A,\ \xi \bowtie P_1) \cdot m_k(\xi \bowtie P_2 \parallel A,\ \xi \bowtie P_2) \big)$$
More generally, we prove
$$m_k(\xi \bowtie Q_1 \parallel Q_2,\ \xi \bowtie P_1 \parallel P_2) \le m_k(\xi \bowtie Q_1,\ \xi \bowtie P_1) + m_k(\xi \bowtie Q_2,\ \xi \bowtie P_2) - \big( m_k(\xi \bowtie Q_1,\ \xi \bowtie P_1) \cdot m_k(\xi \bowtie Q_2,\ \xi \bowtie P_2) \big)$$
for arbitrary $Q_1$ and $Q_2$, written also
$$m_k(M_1 \parallel M_2,\ N_1 \parallel N_2) \le m_k(M_1,N_1) + m_k(M_2,N_2) - \big( m_k(M_1,N_1) \cdot m_k(M_2,N_2) \big) \quad (9)$$
To this purpose, we first need to introduce the notion of congruence closure for $m_k$, as the quantitative analogue of the well-known concept of congruence closure of a process equivalence. We define the metric congruence closure of $m_k$ for the operator $\parallel$ w.r.t. the bound provided in Equation (9) as the function $m$ assigning to each pair of systems a distance in $[0,1]$ given by
$$m(M,N) = \begin{cases}
\min\big( m(M_1,N_1) + m(M_2,N_2) - m(M_1,N_1)\,m(M_2,N_2),\ m_k(M,N) \big) & \text{if } M = M_1 \parallel M_2 \ \wedge\ N = N_1 \parallel N_2 \\
& \ \wedge\ m_k(M_1,N_1) < 1 \ \wedge\ m_k(M_2,N_2) < 1 \\[1ex]
m_k(M,N) & \text{otherwise}
\end{cases}$$
We note that $m$ satisfies by construction $m(M_1 \parallel M_2,\ N_1 \parallel N_2) \le m(M_1,N_1) + m(M_2,N_2) - (m(M_1,N_1) \cdot m(M_2,N_2))$. We note also that $m$ satisfies by construction $m \sqsubseteq m_k$. It remains to show that $m_k \sqsubseteq m$, thus giving $m_k = m$, so that Equation (9) holds. Since $m_k$ is the least prefixed point of $B$ over the lattice $(\{d \colon \mathcal{T} \times \mathcal{T} \to [0,1] \mid B_{tick}(m_{k-1}) \sqsubseteq d\}, \sqsubseteq)$, to show $m_k \sqsubseteq m$ it is enough to prove that $m$ is a prefixed point of the same functional on the same lattice.
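As a side remark, the bound of Equation (9) has a natural probabilistic reading: writing $d_h = m_k(M_h,N_h)$, the right-hand side $d_1 + d_2 - d_1 d_2$ equals $1 - (1-d_1)(1-d_2)$, i.e. the probability that at least one of two independent components is told apart. A quick numerical check of this identity, with toy values, also confirms that the bound never leaves $[0,1]$ and never falls below either component's distance:

```python
# Check: d1 + d2 - d1*d2 == 1 - (1-d1)*(1-d2), and the bound stays in [0,1].
vals = [0.0, 0.1, 0.25, 0.5, 0.9, 1.0]  # arbitrary sample distances
for d1 in vals:
    for d2 in vals:
        bound = d1 + d2 - d1 * d2
        assert abs(bound - (1 - (1 - d1) * (1 - d2))) < 1e-12
        assert -1e-12 <= bound <= 1 + 1e-12
        assert bound >= max(d1, d2) - 1e-12  # never below each component
print("ok")
```

The last assertion holds because $d_1 + d_2 - d_1 d_2 - d_1 = d_2(1-d_1) \ge 0$, matching the intuition that composing with a second attacked component can only increase the distance.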
To prove that $B(m) \sqsubseteq m$, we need to show that $m$ satisfies the transfer condition of the bisimulation metrics, namely
$$\forall\, M \xrightarrow{\alpha} \gamma .\ \exists\, M' \xRightarrow{\alpha} \gamma' .\ \mathbf{K}(m)(\gamma,\ \gamma' + (1-|\gamma'|)Dead) \le m(M,M') \quad (10)$$
for all systems $M, M'$ with $m(M,M') < 1$ and $\alpha \ne tick$. This can be proved by applying the same arguments used to prove Proposition 3.2 in [11]. ⊓⊔

Proof of Proposition 3. First we observe that, in the evolution of both systems $\xi \bowtie Ctrl_i$ and $\xi \bowtie Ctrl_i \parallel A_{fp}\langle i,m,n \rangle$, it never happens that there are more than two instantaneous actions in between any two tick actions. This implies that, for all $j \in \mathbb{N}$, for any system $M$ reachable from $\xi \bowtie Ctrl_i$ and any system $N$ reachable from $\xi \bowtie Ctrl_i \parallel A_{fp}\langle i,m,n \rangle$, we have $m_j(M,N) = \sup_{h \in \mathbb{N}} m_{j,h}(M,N) = m_{j,2}(M,N)$. Then, the proof follows from the seven properties below, by observing that the first item of the thesis follows from the property expressed by item 1, and the second and third items of the thesis follow from the property expressed by item 7, when, respectively, $j_1 = j - m + 1$ and $j_2 = m - 1$. For any $j \in \mathbb{N}$, it holds that:

1. $m_{j,l}(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 0$ for any $P$ whenever process $Q$ has the form $Q = tick^{j'}.B\langle i,\ n-m+1 \rangle$ for some $j < j'$.
2. $m_{j,0}(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 1 - (p_i^+)^{j-1}$ whenever $0 < j \le n-m+1$, $\xi(r_i) = absence$, and the processes $P$ and $Q$ have the form $P = tick.Ctrl_i$ and $Q = B\langle i,\ n-m+1-j \rangle$.
3. $m_{j,1}(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 1 - (p_i^+)^{j-1}$ whenever $0 < j \le n-m+1$, $\xi(r_i) = absence$, and the processes $P$ and $Q$ have the form $P = c_i!on.tick.Ctrl_i$ and $Q = B\langle i,\ 0,\ n-m+1-j \rangle$.
4. $m_j(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 1 - (p_i^+)^j$ whenever $0 < j \le n-m+1$, $\xi(r_i) = absence$, and the processes $P$ and $Q$ have the form $P = Ctrl_i$ and $Q = B\langle i,\ n-m+1 \rangle$.
5.
$m_{j,0}(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 1 - (p_i^+)^{j_1}$ whenever the processes $P$ and $Q$ have the form $P = tick.Ctrl_i$ and $Q = tick^{j_2}.B\langle i,\ n-m+1 \rangle$, for some $0 < j_2 \le j$ such that $j_1 = \min(j - j_2 + 1,\ n-m+1)$.
6. $m_{j,1}(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 1 - (p_i^+)^{j_1}$ whenever process $P$ has either the form $P = c_i!on.tick.Ctrl_i$ or $P = c_i!off.tick.Ctrl_i$, and process $Q$ has the form $Q = tick^{j_2}.B\langle i,\ n-m+1 \rangle$, for some $0 < j_2 \le j$ such that $j_1 = \min(j - j_2 + 1,\ n-m+1)$.
7. $m_j(\xi \bowtie P,\ \xi \bowtie P \parallel Q) = 1 - (p_i^+)^{j_1}$ whenever the processes $P$ and $Q$ have the form $P = Ctrl_i$ and $Q = tick^{j_2}.B\langle i,\ n-m+1 \rangle$, for some $0 < j_2 \le j$ such that $j_1 = \min(j - j_2 + 1,\ n-m+1)$.

The seven properties above can be proved, for all $m_j$ and $m_{j,l}$, by well-founded induction over the relation $\prec$ defined as follows:
- $m_j \prec m$ if $m \in \{m_{j'}, m_{j',l'}\}$ with $j < j'$ (for any $l'$);
- $m_{j,l} \prec m$ if either $m \in \{m_{j'}, m_{j',l'}\}$ with $j < j'$, or $m = m_{j',l'}$ with $j' = j$ and $l < l'$.

Obviously, $\prec$ is irreflexive and there does not exist any infinite descending chain (the base case is $m_0$). The base case $j = 0$ is immediate, since $m_0$ is the constant zero function $\mathbf{0}$ and $1 - (p_i^+)^0 = 0$. We consider the inductive step.

1. The thesis can be easily proved since $Q$ can perform only tick actions and, intuitively, it does not affect the behaviour of $P$. In detail, for $j = 1$ and $l = 0$, whenever $\xi \bowtie P \xrightarrow{tick} \sum_{i \in I} \xi_i \bowtie P_i$, then $\xi \bowtie P \parallel Q \xrightarrow{tick} \sum_{i \in I} \xi_i \bowtie P_i \parallel Q'$ with $Q' = tick^{j'-1}.B\langle i,\ n-m+1 \rangle$. Hence the thesis follows, since $m_0(\xi_i \bowtie P_i,\ \xi_i \bowtie P_i \parallel Q') = 0$ by the definition of $m_0$. Assume now $l > 0$. In this case, whenever $\xi \bowtie P \xrightarrow{\alpha} \sum_{i \in I} \xi \bowtie P_i$ with $\alpha \ne tick$, then $\xi \bowtie P \parallel Q \xrightarrow{\alpha} \sum_{i \in I} \xi \bowtie P_i \parallel Q$. The thesis holds since, by induction on case item 1, we have $m_{j,l-1}(\xi \bowtie P_i,\ \xi \bowtie P_i \parallel Q) = 0$.
Similarly, for $l = 0$ and $j > 1$, whenever $\xi \bowtie P \xrightarrow{tick} \sum_{i \in I} \xi_i \bowtie P_i$, then $\xi \bowtie P \parallel Q \xrightarrow{tick} \sum_{i \in I} \xi_i \bowtie P_i \parallel Q'$ with $Q' = tick^{j'-1}.B\langle i,\ n-m+1 \rangle$. Hence the thesis holds since, by induction on case item 1, for any $h$ it holds that $m_{j-1,h}(\xi_i \bowtie P_i,\ \xi_i \bowtie P_i \parallel Q') = 0$, thus implying that $m_{j-1}(\xi_i \bowtie P_i,\ \xi_i \bowtie P_i \parallel Q') = \sup_{h \in \mathbb{N}} m_{j-1,h}(\xi_i \bowtie P_i,\ \xi_i \bowtie P_i \parallel Q') = 0$.

2. Define $M = \xi \bowtie P$ and $N = \xi \bowtie P \parallel Q$. We have that $m_{j,0}(M,N) = B_{tick}(m_{j-1})(M,N) = B_{tick}(m_{j-1,2})(M,N)$. Hence we have to prove that $B_{tick}(m_{j-1,2})(M,N) = 1 - (p_i^+)^{j-1}$. This property follows from the following two facts:
- $\max_{M \xrightarrow{tick} \Delta}\ \min_{N \xRightarrow{tick} \Theta} \mathbf{K}(m_{j-1,2})(\Delta,\ \Theta + (1-|\Theta|)Dead) = 1 - (p_i^+)^{j-1}$
- $\max_{N \xrightarrow{tick} \Theta}\ \min_{M \xRightarrow{tick} \Delta} \mathbf{K}(m_{j-1,2})(\Delta + (1-|\Delta|)Dead,\ \Theta) = 1 - (p_i^+)^{j-1}$.

We prove the first case; the second one is similar. The only transitions by $M$ are of the form $M \xrightarrow{tick} \xi' \bowtie Ctrl_i$ with $\xi' \in next(\xi)$. The environments $\xi' \in next(\xi)$ maximising $\min_{N \xRightarrow{tick} \Theta} \mathbf{K}(m_{j-1,2})(\xi' \bowtie Ctrl_i,\ \Theta)$ are those with $\xi'(r_i) = absence$. Indeed, the attacker can force $N$ to perform $c_i!on$ with probability equal to $1$. If $\xi'(r_i) = absence$, then $M$ will perform $c_i!on$ with probability $p_i^+$; hence $M$ fails to simulate $N$ with probability $1 - p_i^+$. Otherwise, if $\xi'(r_i) = presence$, then $M$ will perform $c_i!on$ with probability $1 - p_i^-$; hence $M$ fails to simulate $N$ with probability $p_i^-$. Since $0 \le p_i^+, p_i^- < \frac{1}{2}$, we have $1 - p_i^+ > p_i^-$. The system $N = \xi \bowtie P \parallel Q$ minimises $\min_{N \xRightarrow{tick} \Theta} \mathbf{K}(m_{j-1,2})(\xi' \bowtie Ctrl_i,\ \Theta)$ by simulating $M$ with the transition $N \xrightarrow{tick} \xi' \bowtie Ctrl_i \parallel Q'$ with $Q' = B\langle i,\ \max(0,\ n-m+1-j-1) \rangle$.
The only admissible matching $\omega$ for $\mathbf{K}(m_{j-1,2})(\xi' \bowtie Ctrl_i,\ \xi' \bowtie Ctrl_i \parallel Q')$ is such that $\omega(\xi' \bowtie Ctrl_i,\ \xi' \bowtie Ctrl_i \parallel Q') = 1$. Summarising, we have:
$$\begin{aligned}
\max_{M \xrightarrow{tick} \Delta}\ \min_{N \xRightarrow{tick} \Theta} \mathbf{K}(m_{j-1,2})(\Delta,\ \Theta + (1-|\Theta|)Dead)
&= \min_{N \xRightarrow{tick} \Theta} \mathbf{K}(m_{j-1,2})(\xi' \bowtie Ctrl_i,\ \Theta) \quad \text{with } \xi'(r_i) = absence \\
&= \mathbf{K}(m_{j-1,2})(\xi' \bowtie Ctrl_i,\ \xi' \bowtie Ctrl_i \parallel Q') \\
&= m_{j-1,2}(\xi' \bowtie Ctrl_i,\ \xi' \bowtie Ctrl_i \parallel Q') \\
&= 1 - (p_i^+)^{j-1} \quad \text{(by induction on case item 4)}
\end{aligned}$$
which completes the proof.

3. Define $M = \xi \bowtie P$ and $N = \xi \bowtie P \parallel Q$. Analogously to item 2, to prove $B(m_{j,0})(M,N) = 1 - (p_i^+)^{j-1}$ it is sufficient to prove the following two facts:
- $\max_{M \xrightarrow{c_i!on} \Delta}\ \min_{N \xRightarrow{c_i!on} \Theta} \mathbf{K}(m_{j,0})(\Delta,\ \Theta + (1-|\Theta|)Dead) = 1 - (p_i^+)^{j-1}$
- $\max_{N \xrightarrow{c_i!on} \Theta}\ \min_{M \xRightarrow{c_i!on} \Delta} \mathbf{K}(m_{j,0})(\Delta + (1-|\Delta|)Dead,\ \Theta) = 1 - (p_i^+)^{j-1}$.

We prove the first case; the second one is similar. The only transition by $M = \xi \bowtie P$ is $M \xrightarrow{c_i!on} \xi \bowtie tick.Ctrl_i$. The only transition by $N = \xi \bowtie P \parallel Q$ is $N \xrightarrow{c_i!on} \xi \bowtie tick.Ctrl_i \parallel Q$. The only admissible matching $\omega$ for $\mathbf{K}(m_{j,0})(\xi \bowtie tick.Ctrl_i,\ \xi \bowtie tick.Ctrl_i \parallel Q)$ is such that $\omega(\xi \bowtie tick.Ctrl_i,\ \xi \bowtie tick.Ctrl_i \parallel Q) = 1$. Summarising, we have:
$$\begin{aligned}
\max_{M \xrightarrow{c_i!on} \Delta}\ \min_{N \xRightarrow{c_i!on} \Theta} \mathbf{K}(m_{j,0})(\Delta,\ \Theta + (1-|\Theta|)Dead)
&= \min_{N \xRightarrow{c_i!on} \Theta} \mathbf{K}(m_{j,0})(\xi \bowtie tick.Ctrl_i,\ \Theta) \\
&= \mathbf{K}(m_{j,0})(\xi \bowtie tick.Ctrl_i,\ \xi \bowtie tick.Ctrl_i \parallel Q) \\
&= m_{j,0}(\xi \bowtie tick.Ctrl_i,\ \xi \bowtie tick.Ctrl_i \parallel Q) \\
&= 1 - (p_i^+)^{j-1} \quad \text{(by induction on case item 2)}
\end{aligned}$$
which completes the proof.

4. Define $M = \xi \bowtie P$ and $N = \xi \bowtie P \parallel Q$.
Since $m_j = m_{j,2}$, analogously to item 2, to prove $B(m_{j,1})(M,N) = 1 - (p_i^+)^j$ it is sufficient to prove the following two facts:
- $\max_{M \xrightarrow{\tau} \Delta}\ \min_{N \xRightarrow{\tau} \Theta} \mathbf{K}(m_{j,1})(\Delta,\ \Theta + (1-|\Theta|)Dead) \le 1 - (p_i^+)^j$
- $\max_{N \xrightarrow{\tau} \Theta}\ \min_{M \xRightarrow{\tau} \Delta} \mathbf{K}(m_{j,1})(\Delta + (1-|\Delta|)Dead,\ \Theta) = 1 - (p_i^+)^j$.

The interesting case is the second. Indeed, $N$ is always able to simulate $M$ by considering the case in which the controller reads the right value of the sensor and does not take the value provided by the attacker. The system $N = \xi \bowtie P \parallel Q$ can perform two transitions, depending on whether or not the controller reads the fake value provided by the attacker. But, obviously, the system $N = \xi \bowtie P \parallel Q$ maximises $\max_{N \xrightarrow{\tau} \Theta} \min_{M \xRightarrow{\tau} \Delta} \mathbf{K}(m_{j,1})(\Delta + (1-|\Delta|)Dead,\ \Theta)$ when the controller reads the fake value, namely by the transition $N \xRightarrow{\hat\tau} \gamma_N = \delta(N')$, where $N' = \xi \bowtie c_i!on.tick.Ctrl_i$. The system $M = \xi \bowtie P$ minimises $\min_{M \xRightarrow{\tau} \Delta} \mathbf{K}(m_{j,1})(\Delta + (1-|\Delta|)Dead,\ \gamma_N)$ by simulating $N$ with the transition $M \xrightarrow{\tau} \gamma_M = p_i^+ \cdot \delta(M_1) + (1-p_i^+) \cdot \delta(M_2)$, where $M_1 = \xi \bowtie c_i!on.tick.Ctrl_i$ and $M_2 = \xi \bowtie c_i!off.tick.Ctrl_i$. Moreover, the only admissible matching $\omega$ for $\mathbf{K}(m_{j,1})(\gamma_M,\gamma_N)$ is such that $\omega(M_1,N') = p_i^+$ and $\omega(M_2,N') = 1 - p_i^+$. Summarising:
$$\begin{aligned}
\max_{N \xrightarrow{\tau} \Theta}\ \min_{M \xRightarrow{\tau} \Delta} \mathbf{K}(m_{j,1})(\Delta + (1-|\Delta|)Dead,\ \Theta)
&= \min_{M \xRightarrow{\tau} \Delta} \mathbf{K}(m_{j,1})(\Delta + (1-|\Delta|)Dead,\ \gamma_N) \\
&= \mathbf{K}(m_{j,1})(\gamma_M,\gamma_N) \\
&= p_i^+ \cdot m_{j,1}(M_1,N') + (1-p_i^+) \cdot m_{j,1}(M_2,N') \\
&= p_i^+ \cdot (1 - (p_i^+)^{j-1}) + (1-p_i^+) \cdot 1 \quad \text{(by induction on case item 3)} \\
&= 1 - (p_i^+)^j
\end{aligned}$$
which completes the proof.

5. The proof is similar to the proof of item 2. Indeed, this case can be proved by induction on case item 4 if $j = j_1$ and $j_2 = 1$, and on case item 7 if $j_2 > 1$.
6. The proof is similar to the proof of item 3.
Indeed, this case can be proved by induction on case item 5.
7. The proof is similar to the proof of item 4. Indeed, this case can be proved by induction on case item 6. ⊓⊔

Proof of Proposition 5. The proof is similar to that of Proposition 3, by considering $p_i^-$ instead of $p_i^+$, and $C\langle \ldots \rangle$ instead of $B\langle \ldots \rangle$. ⊓⊔
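Finally, the closed form $1 - (p_i^+)^j$ driving Propositions 3 and 5 matches the one-step computation carried out in item 4 above: if the distance after $j-1$ ticks is $1 - p^{j-1}$, the controller matches the attacked system only through the branch taken with probability $p$, giving $p \cdot (1 - p^{j-1}) + (1-p) \cdot 1 = 1 - p^j$. A quick numerical check of this recurrence (a hedged sketch; `p` stands for $p_i^+$ and its value is an invented example below $\frac{1}{2}$):

```python
# Recurrence check for the distance growth of Proposition 3:
# d_j = p * d_{j-1} + (1 - p), with d_0 = 0, has closed form d_j = 1 - p**j.
p = 0.3  # hypothetical sensor-error probability p_i^+ (must be < 1/2)

d = 0.0
for j in range(1, 11):
    d = p * d + (1.0 - p)  # item-4 step: p*(1 - p**(j-1)) + (1-p)*1
    assert abs(d - (1.0 - p ** j)) < 1e-12

print(d)  # distance after 10 ticks
```

Note how the distance tends to $1$ as $j$ grows: the longer the attack window, the more likely the attacked controller is eventually told apart from the genuine one.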
