Attacker Control and Impact for Confidentiality and Integrity

Language-based information flow methods offer a principled way to enforce strong security properties, but enforcing noninterference is too inflexible for realistic applications. Security-typed languages have therefore introduced declassification mechanisms for relaxing confidentiality policies, and endorsement mechanisms for relaxing integrity policies. However, a continuing challenge has been to define what security is guaranteed when such mechanisms are used. This paper presents a new semantic framework for expressing security policies for declassification and endorsement in a language-based setting. The key insight is that security can be characterized in terms of the influence that declassification and endorsement allow to the attacker. The new framework introduces two notions of security to describe the influence of the attacker. Attacker control defines what the attacker is able to learn from observable effects of this code; attacker impact captures the attacker's influence on trusted locations. This approach yields novel security conditions for checked endorsements and robust integrity. The framework is flexible enough to recover and to improve on the previously introduced notions of robustness and qualified robustness. Further, the new security conditions can be soundly enforced by a security type system. The applicability and enforcement of the new policies is illustrated through various examples, including data sanitization and authentication.

Authors: Aslan Askarov (Cornell University), Andrew Myers (Cornell University)

Logical Methods in Computer Science, Vol. 7 (3:17) 2011, pp. 1–33. www.lmcs-online.org
Submitted Jun. 14, 2010; published Sep. 26, 2011.
Department of Computer Science, Cornell University
e-mail: aslan@cs.cornell.edu, andru@cs.cornell.edu
1. Introduction

Many common security vulnerabilities can be seen as violations of either confidentiality or integrity. As a general way to prevent these information security vulnerabilities, information flow control has become a popular subject of study, both at the language level [23] and at the operating-system level (e.g., [14, 12, 30]). The language-based approach holds the appeal that the security property of noninterference [13] can be provably enforced using a type system [27]. In practice, however, noninterference is too rigid: many programs considered secure need to violate noninterference in limited ways.

Using language-based downgrading mechanisms such as declassification [17, 21] and endorsement [20, 29], programs can be written in which information is intentionally released, and in which untrusted information is intentionally used to affect trusted information or decisions. Declassification relaxes confidentiality policies, and endorsement relaxes integrity policies. Both endorsement and declassification have been essential for building realistic applications, such as various applications built with Jif [15, 18]: games [5], a voting system [11], and web applications [9].

A continuing challenge is to understand what security is obtained when code uses downgrading. This paper contributes a more precise and satisfactory answer to this question, particularly clarifying how the use of endorsement weakens confidentiality.

1998 ACM Subject Classification: D.3.3, D.4.6.
Key words and phrases: security type system, information flow, noninterference, confidentiality, integrity, robustness, downgrading, declassification, endorsement, security policies.
DOI: 10.2168/LMCS-7(3:17)2011. © A. Askarov and A. C. Myers. Licensed under Creative Commons.
While much work has been done on declassification (usefully summarized by Sands and Sabelfeld [24]), there is comparatively little work on the interaction between confidentiality and endorsement. To see such an interaction, consider the following notional code example, in which a service holds both old data (old_data) and new data (new_data), but the new data is not to be released until time embargo_time. The variable new_data is considered confidential, and must be declassified to be released:

  if request_time >= embargo_time
    then return declassify(new_data)
    else return old_data

Because the requester is not trusted, the requester must be treated as a possible attacker. Suppose the requester has control over the variable request_time, which we can model by considering that variable to be low-integrity. Because the intended security policy depends on request_time, the attacker controls the policy that is being enforced, and can obtain the confidential new data earlier than intended. This example shows that the integrity of request_time affects the confidentiality of new_data. Therefore, the program should be considered secure only when the guard expression, request_time >= embargo_time, is high-integrity.

A different but reasonable security policy is that the requester may specify the request time as long as the request time is in the past. This policy could be enforced in a language with endorsement by first checking the low-integrity request time to ensure it is in the past; then, if the check succeeds, endorsing it to be high-integrity and proceeding with the information release. The explicit endorsement is justifiable because the attacker's actions are permitted to affect the release of confidential information as long as adversarial inputs have been properly sanitized. This is a common pattern in servers that process possibly adversarial inputs.
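The unsafe variant and the sanitize-then-endorse variant of this example can be sketched in Python. This is only a modelling sketch: the function names, the EMBARGO_TIME constant, and the now parameter are hypothetical, and endorsement and declassification appear as comments, since Python has no security labels.

```python
# Hypothetical embargo timestamp; stands in for embargo_time in the text.
EMBARGO_TIME = 1_700_000_000

def release(request_time, old_data, new_data):
    # Unsafe version: request_time is low-integrity (attacker-chosen),
    # yet it guards the declassification of new_data.
    if request_time >= EMBARGO_TIME:
        return new_data  # declassify(new_data)
    return old_data

def release_checked(request_time, old_data, new_data, now):
    # Sanitize-then-endorse pattern: endorse request_time only after
    # checking that it does not lie in the future.
    if request_time > now:
        raise ValueError("request time is in the future")
    endorsed_time = request_time  # endorse(request_time)
    if endorsed_time >= EMBARGO_TIME:
        return new_data  # declassify(new_data) under an endorsed guard
    return old_data
```

In release_checked, the attacker still influences whether the release happens, but only through a value that has passed the sanitization check, which is exactly the justification for the endorsement.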
Robust declassification has been introduced in prior work [28, 16, 10] as a semantic condition for secure interactions between integrity and confidentiality. The prior work also develops type systems for enforcing robust declassification, which are implemented as part of Jif [18]. However, prior security conditions for robustness are not satisfactory, for two reasons. First, these prior conditions characterize information security only for terminating programs. A program that does not terminate is automatically considered to satisfy robust declassification, even if it releases information improperly during execution. Therefore the security of programs that do not terminate, such as servers, cannot be described. A second and perhaps even more serious limitation is that prior security conditions largely ignore the possibility of endorsement, with the exception of qualified robustness [16]. Qualified robustness gives the endorse operation a somewhat ad-hoc, nondeterministic semantics, to reflect the attacker's ability to choose the endorsed value. This approach operationally models what the attacker can do, but does not directly describe the attacker's control over confidentiality. The introduction of nondeterminism also makes the security property possibilistic. However, possibilistic security properties have been criticized because they can weaken under refinement [22, 25].

The main contribution of this paper is a general, language-based semantic framework for expressing information flow security and semantically capturing the ability of the attacker to influence both the confidentiality and integrity of information. The key building blocks for this semantics are attacker knowledge [1] and its (novel) dual, attacker impact, which respectively describe what attackers can know and what they can affect.
Building upon attacker knowledge, the interaction of confidentiality and integrity, which we term attacker control, can be characterized formally. The robust interaction of confidentiality and integrity can then be captured cleanly as a constraint on attacker control. Further, endorsement is naturally represented in this framework as a form of attacker control, and a more satisfactory version of qualified robustness can be defined. All these security conditions can be formalized in both progress-sensitive and progress-insensitive variants, allowing us to describe the security of both terminating and nonterminating systems. We show that the progress-insensitive variants of these improved security conditions are enforced soundly by a simple security type system.

Recent versions of Jif have added a checked endorsement construct that is useful for expressing complex security policies [9], but whose semantics were not precisely defined; this paper gives semantics, typing rules, and a semantic security condition for checked endorsement, and shows that checked endorsement can be translated faithfully into simple endorsement at both the language and the semantic level. Our type system can easily be adjusted to enforce the progress-sensitive variants of the security conditions, as has been shown in the literature [26, 19].

The rest of this paper is structured as follows. Section 2 shows how to define information security in terms of attacker knowledge. Section 3 introduces attacker control. Section 4 defines progress-sensitive and progress-insensitive robustness using the new framework. Section 5 extends this to improved definitions of robustness that allow endorsements, generalizing qualified robustness. A type system for enforcing these robustness conditions is presented in Section 6.
The checked endorsement construct appears in Section 7, which introduces a new notion of robustness that allows checked endorsements, and shows that it can be understood in terms of robustness extended with simple endorsements. Section 8 introduces attacker impact. Additional examples are presented in Section 9, related work is discussed in Section 10, and Section 11 concludes.

This paper is an extended version of a previous paper by the same authors [4]. The significant changes include proofs of all the main theorems, a semantic rather than syntactic definition of fair attacks, and a renaming of "attacker power" to "attacker impact".

2. Semantics

Information flow levels. We assume two security levels for confidentiality, public and secret, and two security levels for integrity, trusted and untrusted. These levels are denoted respectively P, S and T, U. We define the information flow ordering ⊑ between these levels: P ⊑ S, and T ⊑ U. The four levels define a security lattice, as shown in Figure 1. Every point on this lattice has two security components: one for confidentiality, and one for integrity. We extend the information flow ordering to elements of this lattice: ℓ1 ⊑ ℓ2 if the ordering holds between the corresponding components. As is standard, we define the join ℓ1 ⊔ ℓ2 as the least upper bound of ℓ1 and ℓ2, and the meet ℓ1 ⊓ ℓ2 as the greatest lower bound of ℓ1 and ℓ2.

Figure 1. Information flow lattice (bottom (P, T); top (S, U); (P, U) and (S, T) incomparable).

  e ::= n | x | e op e
  c ::= skip | x := e | c; c | if e then c1 else c2 | while e do c

Figure 2. Syntax of the language.

  ⟨n, m⟩ ↓ n        ⟨x, m⟩ ↓ m(x)

  ⟨e1, m⟩ ↓ v1    ⟨e2, m⟩ ↓ v2    v = v1 op v2
  ─────────────────────────────────────────────
                ⟨e1 op e2, m⟩ ↓ v

Figure 3. Semantics of expressions.
  ⟨skip, m⟩ → ⟨stop, m⟩

  ⟨e, m⟩ ↓ v
  ─────────────────────────────────────
  ⟨x := e, m⟩ →(x,v) ⟨stop, m[x ↦ v]⟩

  ⟨c1, m⟩ →t ⟨c1', m'⟩                  ⟨c1, m⟩ →t ⟨stop, m'⟩
  ─────────────────────────────         ────────────────────────
  ⟨c1; c2, m⟩ →t ⟨c1'; c2, m'⟩          ⟨c1; c2, m⟩ →t ⟨c2, m'⟩

  ⟨e, m⟩ ↓ n    n ≠ 0                   ⟨e, m⟩ ↓ n    n = 0
  ────────────────────────────────────  ────────────────────────────────────
  ⟨if e then c1 else c2, m⟩ → ⟨c1, m⟩   ⟨if e then c1 else c2, m⟩ → ⟨c2, m⟩

  ⟨e, m⟩ ↓ n    n ≠ 0                        ⟨e, m⟩ ↓ n    n = 0
  ─────────────────────────────────────────  ──────────────────────────────
  ⟨while e do c, m⟩ → ⟨c; while e do c, m⟩   ⟨while e do c, m⟩ → ⟨stop, m⟩

Figure 4. Semantics of commands.

All four lattice elements are meaningful; for example, it is possible for information to be both secret and untrusted when it depends on both secret and untrusted (i.e., attacker-controlled) values. This lattice is the simplest possible choice for exploring the topics of this paper; however, the results of this paper straightforwardly generalize to the richer security lattices used in other work on robustness [10].

Language and semantics. We consider a simple imperative language with syntax presented in Figure 2. The semantics of the language is fairly standard and is given in Figures 3 and 4. For expressions, we define big-step evaluation of the form ⟨e, m⟩ ↓ v, where v is the result of evaluating expression e in memory m. For commands, we define a small-step operational semantics, in which a single transition is written as ⟨c, m⟩ →t ⟨c', m'⟩, where c and m are the initial command and memory, and c' and m' are the resulting command and memory. The only unusual feature is the annotation t on each transition, which we call an event. Events record assignments: an assignment of value v to variable x is recorded by an event (x, v).
This corresponds to our attacker model, in which the attacker may only observe assignments to public variables. We write ⟨c, m⟩ →* ~t to mean that trace ~t is produced starting from ⟨c, m⟩ using zero or more transitions. Each trace ~t is composed of individual events t1 · t2 · · · tk · · ·, and the prefix of ~t up to the i-th event is denoted by ~t_i; we use the operator · to denote the concatenation of two traces or events. If a transition does not affect memory, its event is empty, which is either written as ε or is omitted, e.g., ⟨c, m⟩ → ⟨c', m'⟩. Finally, we assume that the security environment Γ maps program variables to their security levels. Given a memory m, we write m_P for the public part of the memory; similarly, m_T is the trusted part of m. We write m =T m' when memories m and m' agree on their trusted parts, and m =P m' when m and m' agree on their public parts.

2.1. Attacker knowledge

This section provides background on the attacker-centric model for information flow security [1]. We recall the definitions of attacker knowledge, progress knowledge, and divergence knowledge, and introduce progress-(in)sensitive release events.

Low events. Among the events that are generated during a trace, we distinguish the sequence of low (or public) events. Low events correspond to observations that an attacker can make during a run of the program. We assume that the attacker may observe individual assignments to public variables. Furthermore, if the program terminates, we assume that a termination event ⇓ may also be observed by the attacker. If the attacker can detect divergence of programs (cf. Definition 2.3), then divergence ⇑ is also a low event. Given a trace ~t, the low events in that trace are denoted by ~t_P.
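The semantics of Figures 2–4 can be sketched as a small interpreter. This is a modelling aid, not the paper's formalism: commands are nested Python tuples, and the fuel bound (so that divergence raises instead of looping forever) is an added convenience.

```python
def eval_expr(e, m):
    """Big-step evaluation <e, m> ↓ v for the expression forms of Figure 2."""
    kind = e[0]
    if kind == "lit":
        return e[1]
    if kind == "var":
        return m[e[1]]
    if kind == "op":  # ("op", f, e1, e2) with f a binary Python function
        _, f, e1, e2 = e
        return f(eval_expr(e1, m), eval_expr(e2, m))
    raise ValueError(kind)

def run(c, m, fuel=10_000):
    """Run command c in memory m, returning the trace of assignment
    events (x, v).  Diverging programs exhaust `fuel` and raise."""
    trace = []
    stack = [c]
    while stack:
        if fuel <= 0:
            raise TimeoutError("divergence (fuel exhausted)")
        fuel -= 1
        cmd = stack.pop()
        kind = cmd[0]
        if kind == "skip":
            continue
        if kind == "assign":
            _, x, e = cmd
            v = eval_expr(e, m)
            m[x] = v
            trace.append((x, v))  # the event annotation of Figure 4
        elif kind == "seq":
            _, c1, c2 = cmd
            stack.append(c2)
            stack.append(c1)
        elif kind == "if":
            _, e, c1, c2 = cmd
            stack.append(c1 if eval_expr(e, m) != 0 else c2)
        elif kind == "while":
            _, e, body = cmd
            if eval_expr(e, m) != 0:
                stack.append(cmd)
                stack.append(body)
    return trace
```

For instance, the program l := 0; (while h = 0 do skip); l := h run in a memory with h = 7 produces the trace (l, 0) · (l, 7), as discussed below.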
A single low event is often denoted by ℓ, and a sequence of low events by ~ℓ. We overload the notation for semantic transitions, writing ⟨c, m⟩ →* ~ℓ when only the low events produced from configuration ⟨c, m⟩ are relevant; that is, there is a trace ~t such that ⟨c, m⟩ →* ~t and ~t_P = ~ℓ.

Low events are the key element in the definition of attacker knowledge [1]. The knowledge of the attacker is described by the set of initial memories compatible with the low observations. Any reduction in this set means the attacker has learned something about the secret parts of the initial memory.

Definition 2.1 (Attacker knowledge). Given a sequence of low events ~ℓ, initial low memory m_P, and program c, attacker knowledge is

  k(c, m_P, ~ℓ) ≜ { m' | m_P = m'_P ∧ ⟨c, m'⟩ →* ~ℓ }

Attacker knowledge gives a handle on what information the attacker learns with every low event. The smaller the knowledge set, the more precise is the attacker's information about secrets. Knowledge is monotonic in the number of low events: as the program produces low events, the attacker may learn more about secrets.

Two extensions of attacker knowledge are useful: progress knowledge [3, 2] and divergence knowledge [3].

Definition 2.2 (Progress knowledge). Given a sequence of low events ~ℓ, initial low memory m_P, and a program c, progress knowledge k→(c, m_P, ~ℓ) is defined as

  k→(c, m_P, ~ℓ) ≜ { m' | m'_P = m_P ∧ ∃ℓ'. ⟨c, m'⟩ →* ~ℓ ⟨c'', m''⟩ →* ℓ' }

Progress knowledge represents the information the attacker obtains by seeing the public events ~ℓ followed by some other public event.
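For a finite model of the secret, Definitions 2.1 and 2.2 can be computed by brute-force enumeration. In this sketch a program is abstracted as a function from the secret h to the finite list of low events its run produces (a diverging run contributes only the events emitted before divergence); the example program is the one from the text.

```python
def knowledge(program, low_events, dom):
    """k(c, m_P, ~ℓ): initial secrets compatible with the observed low
    events (Definition 2.1), over a finite secret domain `dom`."""
    n = len(low_events)
    return {h for h in dom if program(h)[:n] == list(low_events)}

def progress_knowledge(program, low_events, dom):
    """k→(c, m_P, ~ℓ): secrets that produce ~ℓ followed by at least one
    more low event (Definition 2.2)."""
    n = len(low_events)
    return {h for h in dom
            if program(h)[:n] == list(low_events) and len(program(h)) > n}

# l := 0; (while h = 0 do skip); l := h  -- diverges when h = 0
def example(h):
    return [("l", 0)] if h == 0 else [("l", 0), ("l", h)]

dom = range(-10, 11)
k1 = knowledge(example, [("l", 0)], dom)            # every h in dom
kp = progress_knowledge(example, [("l", 0)], dom)   # h != 0
k2 = knowledge(example, [("l", 0), ("l", 7)], dom)  # h = 7
```

The computed sets satisfy k1 ⊇ kp ⊇ k2, matching the inclusion chain relating attacker knowledge and progress knowledge.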
Progress knowledge and attacker knowledge are related as follows: given a program c, memory m, and a sequence of low events ℓ1 · · · ℓn obtained from ⟨c, m⟩, we have for all i < n

  k(c, m_P, ~ℓ_i) ⊇ k→(c, m_P, ~ℓ_i) ⊇ k(c, m_P, ~ℓ_{i+1})

To illustrate this with an example, consider the program l := 0; (while h = 0 do skip); l := h with initial memory m(h) = 7. This program produces a sequence of two low events, (l, 0) · (l, 7). The knowledge after the first event, k(c, m_P, (l, 0)), is the set of all possible memories that agree with m on the public parts and can produce the low event (l, 0). Note that no low events are possible after the first assignment unless h is non-zero. Progress knowledge reflects this: k→(c, m_P, (l, 0)) is the set of memories such that h ≠ 0. Finally, the knowledge after two events, k(c, m_P, (l, 0) · (l, 7)), is the set of memories where h = 7.

Using attacker knowledge, one can express many confidentiality policies [7, 2, 8]. For example, a strong notion of progress-sensitive noninterference [13] can be expressed by demanding that knowledge between low events does not change:

  k(c, m_P, ~ℓ_i) = k(c, m_P, ~ℓ_{i+1})

Progress knowledge enables expressing more permissive policies, such as progress-insensitive noninterference, which allows leakage of information, but only via termination channels (in [3] it is called termination-insensitive). This is expressed by requiring that the progress knowledge after seeing i events equal the knowledge obtained after the (i+1)-th event:

  k→(c, m_P, ~ℓ_i) = k(c, m_P, ~ℓ_{i+1})

In the example l := 0; (while h = 0 do skip); l := 1, the knowledge inclusion between the two events is strict: k(c, m_P, (l, 0)) ⊃ k(c, m_P, (l, 0) · (l, 1)). Therefore, the example does not satisfy progress-sensitive noninterference.
On the other hand, the low event that follows the while loop does not reveal more information than the knowledge that the event exists. Formally, k→(c, m_P, (l, 0)) = k(c, m_P, (l, 0) · (l, 1)); hence the program satisfies progress-insensitive noninterference.

These definitions also allow us to reason about knowledge changes along parts of traces. We say that knowledge is preserved in a progress-(in)sensitive way along a part of a trace if the respective knowledge equality holds for the low events that correspond to that part.

Next, we extend the possible observations with a divergence event ⇑ (we write ⟨c, m⟩ ⇑ to mean that configuration ⟨c, m⟩ diverges). For attackers that can observe program divergence ⇑, we define knowledge on a sequence of low events that includes divergence:

Definition 2.3 (Divergence knowledge).

  k(c, m_P, ~ℓ ⇑) ≜ { m' | m'_P = m_P ∧ ⟨c, m'⟩ →* ~ℓ ⟨c'', m''⟩ ∧ ⟨c'', m''⟩ ⇑ }

Note that the above definition does not require divergence immediately after ~ℓ: it allows more low events to be produced after ~ℓ. Divergence knowledge is used in Section 4.

Let us consider the events at which knowledge preservation is broken. We call these events release events.

Definition 2.4 (Release events). Given a program c and a memory m such that ⟨c, m⟩ →* ~ℓ ⟨c', m'⟩ →* r,
• r is a progress-sensitive release event if k(c, m_P, ~ℓ) ⊃ k(c, m_P, ~ℓ · r)
• r is a progress-insensitive release event if k→(c, m_P, ~ℓ) ⊃ k(c, m_P, ~ℓ · r)

It is easy to validate that a progress-insensitive release event is also a progress-sensitive release event. For example, in the program low := 1; low' := h, the second assignment is both a progress-sensitive and a progress-insensitive release event.
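Definition 2.4 can likewise be checked by enumeration over a finite secret domain. The sketch below classifies an event as a progress-sensitive and/or progress-insensitive release event for the two example programs from the text; as before, a program is modelled as a function from the secret to its (finite) list of low events.

```python
def knowledge(program, low_events, dom):
    """k(c, m_P, ~ℓ) over a finite secret domain (Definition 2.1)."""
    n = len(low_events)
    return {h for h in dom if program(h)[:n] == list(low_events)}

def progress_knowledge(program, low_events, dom):
    """k→(c, m_P, ~ℓ): secrets producing ~ℓ plus one more low event."""
    n = len(low_events)
    return {h for h in dom
            if program(h)[:n] == list(low_events) and len(program(h)) > n}

def classify(program, low_events, r, dom):
    """Classify event r after prefix low_events (Definition 2.4):
    returns (progress_sensitive, progress_insensitive)."""
    k_after = knowledge(program, list(low_events) + [r], dom)
    sensitive = knowledge(program, low_events, dom) > k_after
    insensitive = progress_knowledge(program, low_events, dom) > k_after
    return sensitive, insensitive

# low := 1; low' := h  -- second event releases in both senses
def c1(h):
    return [("low", 1), ("low'", h)]

# (while h = 0 do skip); low := 1  -- releases only via progress
def c2(h):
    return [] if h == 0 else [("low", 1)]

dom = range(-5, 6)
both = classify(c1, [("low", 1)], ("low'", 3), dom)
only_sensitive = classify(c2, [], ("low", 1), dom)
```

Here `both` comes out (True, True) and `only_sensitive` comes out (True, False), matching the classification given in the text.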
The converse does not hold: in the program while h = 0 do skip; low := 1, the assignment to low is a progress-sensitive release event, but it is not a progress-insensitive release event.

3. Attacks

To reason about program security in the presence of active attacks, we introduce a formal model of the attacker. Our formalization follows that of [16], where attacker-provided code can be injected into the program. This section provides examples of how attacker-injected code may affect attacker knowledge, followed by a semantic characterization of the attacker's influence on knowledge.

First, we extend the syntax to allow execution of attacker-controlled code:

  c[~•] ::= . . . | [•]

Next, we introduce the notation [~t] to highlight that the trace ~t is produced by attacker-injected code. The semantics of the language is extended accordingly:

  ⟨a, m⟩ →t ⟨a', m'⟩
  ─────────────────────────        ⟨[stop], m⟩ → ⟨stop, m⟩
  ⟨[a], m⟩ →[t] ⟨[a'], m'⟩

We limit the attacks that can be substituted into holes to so-called fair attacks, which represent reasonable limitations on the impact of the attacker. Unlike earlier approaches, where fair attacks are defined syntactically [16, 10], we define them semantically. This allows us to include a larger set of attacks. To ensure that we include all syntactic attacks, we make use of a reachability translation, explained below. Roughly, we require a fair attack not to give new knowledge and not to modify trusted variables. A refinement of this idea is that an attack is also fair if it gives new knowledge only because the reachability of the attack depends on a secret. To capture this refinement, we define an auxiliary translation that makes the reachability of attacks explicit. We assume a trusted, public variable reach that does not appear in the source of c[~•]. Let the operator T be a source-to-source transformation of c[~•] that makes the reachability of attacks explicit.
Definition 3.1 (Explicit reachability translation). Given a program c[~•], define T(c[~•]) as follows:
• T([•]) ⟹ reach := reach + 1; [•]
• T(c1; c2) ⟹ T(c1); T(c2)
• T(if e then c1 else c2) ⟹ if e then T(c1) else T(c2)
• T(while e do c) ⟹ while e do T(c)
• T(c) ⟹ c for all other commands c

The formal definition uses the fact that any trace ~t can be represented as a sequence of subtraces ~t1 · [~t2] · · · ~t_{2n−1} · [~t_{2n}], where the even-numbered subtraces correspond to the events produced by attacker-controlled code. Given a trace ~t, we denote the trusted events in the trace by ~t_T. We write t° for a single trusted event, and ~t° for a sequence of trusted events.

Definition 3.2 (Fair attack). Given a program c[~•] in the image of the translation T, say that ~a is a fair attack on c[~•] if for all memories m such that ⟨c[~a], m⟩ →* ~t and ~t = ~t1 · [~t2] · · · ~t_{2n−1} · [~t_{2n}], i.e., there are 2n intermediate configurations ⟨cj, mj⟩, 1 ≤ j ≤ 2n, for which

  ⟨c[~a], m⟩ →* ~t1 ⟨c1, m1⟩ →* [~t2] ⟨c2, m2⟩ →* ~t3 . . . →* [~t_{2n}] ⟨c_{2n}, m_{2n}⟩ . . .

then for all i, 1 ≤ i ≤ n, it holds that

  k(c[~a], m, ~t1 · · · ~t_{2i−1}) = k(c[~a], m, ~t1 · · · [~t_{2i}])    and    ~t_{2i}° = ε

(that is, the attack neither changes knowledge nor produces trusted events). For example, in the program if h > 0 then [•] else skip, the attacks a1 = [low := 1] and a2 = [low := h > 0] are fair, but the attack a3 = [low := h] is not.

3.1. Examples of attacker influence

This section presents a few examples of attacker influence on knowledge. We also introduce pure availability attacks and progress attacks, to which we refer later in this section. In the examples below, we use the notation [(u, v)] when a low event (u, v) is generated by attacker-injected code.
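The translation of Definition 3.1 is purely syntactic and easy to state over an abstract syntax tree. In this sketch commands are nested Python tuples; the constructor names (hole, seq, if, while, assign) are a modelling choice, not the paper's notation.

```python
def T(c):
    """Explicit-reachability translation (Definition 3.1): each attacker
    hole is prefixed with an increment of the fresh trusted, public
    counter `reach`; all other commands are translated structurally."""
    kind = c[0]
    if kind == "hole":
        inc = ("assign", "reach",
               ("op", "+", ("var", "reach"), ("lit", 1)))
        return ("seq", inc, ("hole",))
    if kind == "seq":
        return ("seq", T(c[1]), T(c[2]))
    if kind == "if":
        return ("if", c[1], T(c[2]), T(c[3]))
    if kind == "while":
        return ("while", c[1], T(c[2]))
    return c  # skip and assignments are left unchanged

prog = ("seq", ("hole",), ("assign", "low", ("var", "u")))
translated = T(prog)
```

After translation, every execution of a hole bumps `reach`, so whether (and how often) an attack was reached is recorded in a trusted, public variable.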
Consider the program [•]; low := u > h, where h is a secret variable and u is an untrusted public variable. The attacker's code executes before the low assignment and may change the value of u. Consider a memory m where m(h) = 7, and the two attacks a1 = u := 0 and a2 = u := 10. These attacks result in different values being assigned to the variable low. The first trace results in low events [(u, 0)] · (low, 0), while the second results in low events [(u, 10)] · (low, 1). Therefore, the knowledge about the secret is different in each trace. We have

  k(c[a1], m_P, [(u, 0)] · (low, 0)) = { m' | m'(h) ≥ 0 }
  k(c[a2], m_P, [(u, 10)] · (low, 1)) = { m' | m'(h) < 10 }

Clearly, this program gives the attacker some control over what information about secrets he learns. Observe that it is not necessary for the last assignment to differ in order for the knowledge to be different. For example, consider the attack a3 = u := 5. This attack results in low events [(u, 5)] · (low, 0), which make the same assignment to low as a1 does. Attacker knowledge, however, is different from that obtained by a1:

  k(c[a3], m_P, [(u, 5)] · (low, 0)) = { m' | m'(h) ≥ 5 }

Next, consider the program [•]; low := h. This program gives away knowledge about the value of h independently of untrusted variables. The only way for the attacker to influence what information he learns is to prevent the assignment from happening at all, which, as a result, will prevent him from learning that information. This can be done by an attack such as a = while true do skip, which makes the program diverge before the assignment is reached. We call attacks like this pure availability attacks. Another example of a pure availability attack occurs in the program [•]; (while u = 0 do skip); low := h. In this program, any attack that sets u to 0 prevents the assignment from happening.
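The knowledge sets computed above for the attacks a1, a2 and a3 can be reproduced by enumeration over a finite secret domain. This is a modelling sketch: `run` abstracts the program [•]; low := u > h as a function of the value the attack writes into u.

```python
def run(u, h):
    """Low events of [•]; low := u > h when the attack sets u."""
    return [("u", u), ("low", int(u > h))]

def knowledge(u, observed, dom):
    """Initial secrets h compatible with the observed low events."""
    return {h for h in dom if run(u, h) == list(observed)}

dom = range(-20, 20)
k_a1 = knowledge(0,  [("u", 0),  ("low", 0)], dom)  # h >= 0
k_a2 = knowledge(10, [("u", 10), ("low", 1)], dom)  # h < 10
k_a3 = knowledge(5,  [("u", 5),  ("low", 0)], dom)  # h >= 5
```

The three resulting sets are exactly the (finite restrictions of the) sets { h ≥ 0 }, { h < 10 }, and { h ≥ 5 } derived in the text; in particular a1 and a3 produce the same final assignment yet different knowledge.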
Consider another example: [•]; while u < h' do skip; low := 1. As in the previous example, the value of u may change the reachability of low := 1. Assuming the attacker can observe divergence, this is not a pure availability attack, because diverging before the last assignment gives the attacker additional secret information, namely that u < h'. New information is also obtained if the attacker sees the low assignment. We call attacks like this progress attacks. In general, a progress attack is an attack that leads to program divergence in such a way that observing that divergence (i.e., detecting that there is no progress) gives new knowledge to the attacker.

3.2. Attacker control

We represent attacker control as the set of attacks that are similar in their influence on knowledge. Intuitively, if a program leaks no information to the attacker, the control corresponds to all possible attacks. In general, the more attacks are similar, the less influence the attacker has. Moreover, control is a temporal property and depends on the trace produced so far: the longer a trace is, the more influence an attack may have, and the smaller the control set is.

Figure 5. Similar attacks and traces.

Similar attacks. The key element in the definition of control is specifying when two attacks are similar. Given a program c[~•] and a memory m, consider two attacks ~a and ~b that produce traces ~t and ~q respectively:

  ⟨c[~a], m⟩ →* ~t    and    ⟨c[~b], m⟩ →* ~q

We compare ~a and ~b based on how they change attacker knowledge along their respective traces. First, if knowledge is preserved along a subtrace of one of the traces, say ~t, it must be preserved along a subtrace of ~q as well.
Second, if at some point in ~t there is a release event (x, v), there must be a matching low event (x, v) in ~q, and the attacks must be similar along the rest of the traces. Visually, this requirement is described by the two diagrams in Figure 5. Each diagram shows the change of knowledge as more low events are produced. Here the x-axis corresponds to low events, and the y-axis reflects the attacker's uncertainty about the initial secrets. Whenever one of the traces reaches a release event, depicted by a vertical drop, there must be a corresponding low event in the other trace such that the two events agree. This is depicted by the dashed lines between the two diagrams.

Formally, these requirements are stated using the following definitions.

Definition 3.3 (Knowledge segmentation). Given a program c, memory m, and a trace ~t, a sequence of indices p1 . . . pN such that p1 < p2 < · · · < pN and ~t_P = ~ℓ_{1...p1} · ~ℓ_{p1+1...p2} · · · ~ℓ_{p_{N−1}+1...pN} is called
• a progress-sensitive knowledge segmentation of size N, if ∀j ≤ N. ∀i. p_{j−1}+1 ≤ i < p_j. k(c, m_P, ~ℓ_i) = k(c, m_P, ~ℓ_{i+1}); this is denoted by Seg(c, m, ~t, p1 . . . pN).
• a progress-insensitive knowledge segmentation of size N, if ∀j ≤ N. ∀i. p_{j−1}+1 ≤ i < p_j. k→(c, m_P, ~ℓ_i) = k(c, m_P, ~ℓ_{i+1}); this is denoted by Seg→(c, m, ~t, p1 . . . pN).

The low events ℓ_{p_i+1} for 1 ≤ i < N are called segmentation events. Note that, given a trace, there can be more than one way to segment it; for every trace consisting of n low events, a segmentation of size n is trivially possible. We use knowledge segmentation to define attack similarity:

Definition 3.4 (Similar attacks and traces ∼_{c[~•],m}).
Given a program c[~•], memory m, and two attacks ~a and ~b that produce traces ~t and ~q, we say that ~a and ~b are similar along ~t and ~q for the progress-sensitive attacker if there are two segmentations p1 . . . pN and p'1 . . . p'N (for some N) such that
• Seg(c[~a], m, ~t, p1 . . . pN),
• Seg(c[~b], m, ~q, p'1 . . . p'N), and
• ∀i. 1 ≤ i < N. (~t_P)_{p_i+1} = (~q_P)_{p'_i+1}.

For the progress-insensitive attacker, the definition is the same except that it uses the progress-insensitive segmentation Seg→. If two attack–trace pairs are similar, we write (~a, ~t) ∼_{c[~•],m} (~b, ~q) (for progress-insensitive similarity, (~a, ~t) ∼→_{c[~•],m} (~b, ~q)).

The construction of Definitions 3.3 and 3.4 can be illustrated by the program

  [•]; if u then (while h ≤ 100 do skip) else skip; low1 := 0; low2 := h > 100

Consider a memory with m(h) = 555 and two attacks a1 = u := 1 and a2 = u := 0. Both attacks reach the assignments to the low variables. However, for a2 the assignment to low2 is a progress-insensitive release event, while for a1 the knowledge changes at an earlier assignment.

Attacker control. We define attacker control with respect to an attack ~a and a trace ~t as the set of attacks that are similar to the given attack in their influence on knowledge.

Definition 3.5 (Attacker control (progress-sensitive)).

  R(c[~•], m, ~a, ~t) ≜ { ~b | ∃~q. (~a, ~t) ∼_{c[~•],m} (~b, ~q) }

To illustrate how attacker control changes, consider the program

  [•]; low := u < h; low' := h

where u is an untrusted variable and h is a secret trusted variable. To understand the attacker control of this program, we consider an initial memory with m(h) = 7 and the attack a = u := 5. The low event (low, 1) in this trace is a release event.
The attacker control is the set of all attacks that are similar to $a$ and the trace $[(u, 5)] \cdot (low, 1)$ in their influence on knowledge. This corresponds to the attacks that set $u$ to values such that $u < 7$. The assignment to low′ changes attacker knowledge as well, but the information that the attacker gets does not depend on the attack: any trace starting in $m$ and reaching the second assignment produces the low event $(low', 7)$; hence, the attacker control does not change at that event.

Consider the same example but with the two assignments swapped: [•]; low′ := h; low := u < h. The assignment to low′ is a release event that the attacker cannot affect. Hence the control includes all attacks that reach this assignment. The result of the assignment to low depends on $u$. However, this result does not change attacker knowledge. Indeed, in this program, the second assignment is not a release event at all. Therefore, the attacker control is simply all attacks that reach the first assignment.

[Figure 6. Release control and robustness: (a) release control, (b) robustness.]

Progress-insensitive control. For progress-insensitive security, attacker control is defined similarly, using the progress-insensitive comparison of attacks.

Definition 3.6 (Attacker control (progress-insensitive)).

$R_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \triangleq \{\, \vec{b} \mid \exists \vec{q}.\ (\vec{a}, \vec{t}) \sim^{\to}_{c[\vec{\bullet}],m} (\vec{b}, \vec{q}) \,\}$

Consider the program [•]; while u < h do skip; low := 1. Here, any attack produces a trace that preserves progress-insensitive noninterference. If the loop is taken, the program produces no low events and hence gives no new knowledge to the attacker. If the loop is not taken and the low assignment is reached, this assignment preserves attacker knowledge in a progress-insensitive way.
Therefore, the attacker control is all attacks.

4. Robustness

Release control. This section defines release control $R^{\rhd}$, which captures the attacker's influence on release events. Intuitively, release control expresses the extent to which an attacker can affect the decision to produce some release event.

Definition 4.1 (Progress-sensitive release control).

$R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \triangleq \{\, \vec{b} \mid \exists \vec{q}.\ (\vec{a}, \vec{t}) \sim_{c[\vec{\bullet}],m} (\vec{b}, \vec{q}) \:\wedge\: (\exists \vec{r}\,'.\ k(c[\vec{b}], m_P, \vec{q}_P) \supset k(c[\vec{b}], m_P, \vec{q}_P \cdot \vec{r}\,'_P) \:\vee\: k(c[\vec{b}], m_P, \vec{q}_P) \supset k(c[\vec{b}], m_P, \vec{q}_P\!\Uparrow) \:\vee\: \langle c[\vec{b}], m \rangle\!\Downarrow) \,\}$

The definition of release control is based on that of attacker control, with three additional clauses, explained below. These clauses restrict the set of attacks to those that either terminate or produce a release event. Because the progress-sensitive attacker can also learn new information by observing divergence, the definition contains a clause using divergence knowledge $k(c[\vec{b}], m_P, \vec{q}_P\!\Uparrow)$ to reflect that.

Figure 6a depicts the relationship between release control and attacker control, where the $x$-axis corresponds to low events and the $y$-axis corresponds to attacks. The solid line depicts attacker control $R$, where vertical drops correspond to release events. The gray area denotes release control $R^{\rhd}$. In general, for a given attack $\vec{a}$ and a corresponding trace $\vec{t} \cdot \vec{r}$, where $\vec{r}$ contains a release event, we have the following relation between release control and attacker control:

$R(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \;\supseteq\; R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \;\supseteq\; R(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$   (4.1)

Note the white gaps and the gray release control above the dotted lines in Figure 6a. The white gaps correspond to the difference $R(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \setminus R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t})$.
This is the set of attacks that produce no further release events and that diverge without giving any new information to the attacker: pure availability attacks. The gray zones above the dotted lines are more interesting. Every such zone corresponds to the difference $R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \setminus R(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$. In particular, when this set is non-empty, the attacker can launch attacks corresponding to each of the last three clauses of Definition 4.1:
(1) trigger a different release event $\vec{r}\,'$, or
(2) cause the program to diverge in a way that also releases information, or
(3) prevent a release event from happening in a way that leads to program termination.

The absence of such attacks constitutes the basis for our security conditions in Definitions 4.3 and 4.4. Before moving on to these definitions, we introduce the progress-insensitive variant of release control.

Definition 4.2 (Release control (progress-insensitive)).

$R^{\rhd}_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \triangleq \{\, \vec{b} \mid \exists \vec{q}.\ (\vec{a}, \vec{t}) \sim^{\to}_{c[\vec{\bullet}],m} (\vec{b}, \vec{q}) \:\wedge\: (\exists \vec{r}\,'.\ k_{\to}(c[\vec{b}], m_P, \vec{q}_P) \supset k(c[\vec{b}], m_P, \vec{q}_P \cdot \vec{r}\,'_P) \:\vee\: \langle c[\vec{b}], m \rangle\!\Downarrow) \,\}$

This definition uses the progress-insensitive variants of similar attacks and release events. It also does not account for knowledge obtained from divergence. With the definition of release control at hand, we can now define semantic conditions for robustness. The intuition is that all attacks leading to release events should lead to the same release event. Formally, this is defined as inclusion of release control in attacker control, where release control is computed on the prefix of the trace without the release event.

Definition 4.3 (Progress-sensitive robustness).
Program $c[\vec{\bullet}]$ satisfies progress-sensitive robustness if for all memories $m$, attacks $\vec{a}$, and traces $\vec{t} \cdot \vec{r}$ such that $\langle c[\vec{a}], m \rangle \longrightarrow^*_{\vec{t}} \langle c', m' \rangle \longrightarrow^*_{\vec{r}}$ and $\vec{r}$ contains a release event, i.e., $k(c[\vec{a}], m_P, \vec{t}_P) \supset k(c[\vec{a}], m_P, \vec{t}_P \cdot \vec{r}_P)$, we have

$R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \subseteq R(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$

Note that because of Equation 4.1, the set inclusion in the above definition could be replaced with equality, but we use $\subseteq$ for compatibility with later definitions. Figure 6b illustrates the relation between release control and attacker control for robust programs. Note how release control is bounded by the attacker control at the next release event.

Examples. We illustrate the definition of robustness with a few examples. Consider the program [•]; low := u < h and a memory such that $m(h) = 7$. This program is rejected by Definition 4.3. To see this, pick the attack $a = (u := 5)$ and consider the part of the trace preceding the low assignment. The release control $R^{\rhd}(c[\vec{\bullet}], m, a, [(u, 5)])$ is the set of all attacks that reach the assignment to low. On the other hand, the attacker control $R(c[\vec{\bullet}], m, a, [(u, 5)] \cdot (low, 1))$ is the set of all attacks for which $u < 7$, which is smaller than $R^{\rhd}$. Therefore this program does not satisfy the condition.

Program [•]; low := h; low′ := u < h satisfies robustness. The only release event here corresponds to the first assignment. However, because the knowledge given by that assignment does not depend on untrusted variables, the release control includes all attacks that reach the assignment.

Program [•]; if u > 0 then low := h else skip is rejected. Consider a memory with $m(h) = 7$ and the attack $a = (u := 1)$, which leads to the low trace $[(u, 1)] \cdot (low, 7)$. The attacker control for this attack and trace is the set of all attacks such that $u > 0$.
On the other hand, the release control $R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, [(u, 1)])$ is the set of all attacks that lead to termination, which includes attacks with $u \le 0$. Therefore, the release control is a larger set than the attacker control.

Program [•]; while u > 0 do skip; low := h is accepted. Whether the release event is reached depends on the attacker-controlled variable. However, this is an example of an availability attack, which is ignored by Definition 4.3.

Program [•]; while u > h do skip; low := 1 is rejected. Any attack leading to the low assignment restricts the control to attacks with $u \le h$. However, the release control includes attacks with $u > h$, because the attacker learns information from divergence.

The definition of progress-insensitive robustness is similar to Definition 4.3 but uses the progress-insensitive variants of release events, control, and release control. As a result, the program [•]; while u > h do skip; low := 1 is accepted: the attacker control is all attacks.

Definition 4.4 (Progress-insensitive robustness). Program $c[\vec{\bullet}]$ satisfies progress-insensitive robustness if for all memories $m$, attacks $\vec{a}$, and traces $\vec{t} \cdot \vec{r}$ such that $\langle c[\vec{a}], m \rangle \longrightarrow^*_{\vec{t}} \langle c', m' \rangle \longrightarrow^*_{\vec{r}}$ and $\vec{r}$ contains a release event, i.e., $k_{\to}(c[\vec{a}], m_P, \vec{t}_P) \supset k(c[\vec{a}], m_P, \vec{t}_P \cdot \vec{r}_P)$, we have

$R^{\rhd}_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \subseteq R_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$

5. Endorsement

This section extends the semantic policies for robustness in a way that allows endorsing attacker-provided values.

Syntax and semantics. We add endorsement to the language:

$c[\vec{\bullet}] ::= \ldots \mid x := endorse_\eta(e)$

We assume that every endorsement in the program source has a unique endorsement label $\eta$. Semantically, endorsements produce endorsement events, denoted by $endorse(\eta, v)$, which record the label $\eta$ of the endorsement statement together with the value $v$ that is endorsed.
$\dfrac{\langle e, m \rangle \downarrow v}{\langle x := endorse_\eta(e),\, m \rangle \longrightarrow_{endorse(\eta, v)} \langle stop,\, m[x \mapsto v] \rangle}$

Whenever the endorsement label is unimportant, we omit it from the examples. Note that $endorse(\eta, v)$ events need not mention the variable name $x$, since that information is implied by the unique label $\eta$.

Consider the example program [•]; low := endorse$_{\eta_1}$(u < h). This program does not satisfy Definition 4.3; the reasoning is exactly the same as for the program [•]; low := u < h from Section 4.

Irrelevant attacks. Endorsement of certain values gives the attacker some control over knowledge. The key technical element of this section is the notion of irrelevant attacks, which defines the set of attacks that are endorsed and that are therefore excluded when comparing attacker control with release control. We define irrelevant attacks formally below, based on the trace produced by a program. Given a program $c[\vec{\bullet}]$, starting memory $m$, and a trace $\vec{t}$, the irrelevant attacks, denoted here by $\Phi(c[\vec{\bullet}], m, \vec{t})$, are the attacks that lead to the same sequence of endorsement events as in $\vec{t}$, until they necessarily disagree on one of the endorsements. Because the influence of these attacks is reflected at endorsement events, we exclude them from consideration when comparing with attacker control.

We start by defining irrelevant traces. Given a trace $\vec{t}$, the irrelevant traces for $\vec{t}$ are all traces $\vec{t}\,'$ that agree with $\vec{t}$ on some prefix of endorsement events until they necessarily disagree on some endorsement. We define this set as follows.

Definition 5.1 (Irrelevant traces).
Given a trace $\vec{t}$ whose endorsements are marked as $endorse(\eta_j, v_j)$, define a family of sets of irrelevant traces, indexed by the number of endorsements in $\vec{t}$, as $\phi_i(\vec{t})$: $\phi_0(\vec{t}) = \emptyset$, and

$\phi_i(\vec{t}) = \{\, \vec{t}\,' \mid \vec{t}\,' = \vec{q} \cdot endorse(\eta_i, v'_i) \cdot \vec{q}\,' \,\}$

where $\vec{q}$ is a prefix of $\vec{t}\,'$ containing $i-1$ endorsement events, all of which agree with the endorsement events in $\vec{t}$, and $v_i \ne v'_i$. Define $\phi(\vec{t}) \triangleq \bigcup_i \phi_i(\vec{t})$ as the set of irrelevant traces w.r.t. $\vec{t}$.

With the definition of irrelevant traces at hand, we can define irrelevant attacks: irrelevant attacks are attacks that lead to irrelevant traces.

Definition 5.2 (Irrelevant attacks). Given a program $c[\vec{\bullet}]$, initial memory $m$, and a trace $\vec{t}$ such that $\langle c[\vec{\bullet}], m \rangle \longrightarrow^*_{\vec{t}}$, define the irrelevant attacks $\Phi(c[\vec{\bullet}], m, \vec{t})$ as

$\Phi(c[\vec{\bullet}], m, \vec{t}) \triangleq \{\, \vec{a} \mid \langle c[\vec{a}], m \rangle \longrightarrow^*_{\vec{t}\,'} \wedge\ \vec{t}\,' \in \phi(\vec{t}) \,\}$

Security. The security conditions for robustness can now be adjusted to accommodate endorsements that happen along traces. The idea is to exclude irrelevant attacks from the left-hand side of Definitions 4.3 and 4.4. This security condition, which has both progress-sensitive and progress-insensitive versions, expresses roughly the same idea as qualified robustness [16], but in a more natural and direct way.

[Figure 7. Irrelevant attacks and robustness with endorsements: (a) irrelevant attacks, (b) robustness without endorsements (unsatisfied), (c) robustness with endorsements (satisfied).]

Definition 5.3 (Progress-sensitive robustness with endorsements). Program $c[\vec{\bullet}]$ satisfies progress-sensitive robustness with endorsement if for all memories $m$, attacks $\vec{a}$, and traces
$\vec{t} \cdot \vec{r}$ such that $\langle c[\vec{a}], m \rangle \longrightarrow^*_{\vec{t}} \langle c', m' \rangle \longrightarrow^*_{\vec{r}}$ and $\vec{r}$ contains a release event, i.e., $k(c[\vec{a}], m_P, \vec{t}_P) \supset k(c[\vec{a}], m_P, \vec{t}_P \cdot \vec{r}_P)$, we have

$R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \setminus \Phi(c[\vec{\bullet}], m, \vec{t} \cdot \vec{r}) \subseteq R(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$

We refer to the set $R^{\rhd}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \setminus \Phi(c[\vec{\bullet}], m, \vec{t} \cdot \vec{r})$ as the set of relevant attacks.

Figures 7a to 7c visualize irrelevant attacks and the semantic condition of Definition 5.3. Figure 7a shows the set of irrelevant attacks, depicted by the shaded gray area; this set increases at endorsement events, marked by stars. Figure 7b shows an example trace where robustness is not satisfied: the gray area corresponding to release control $R^{\rhd}$ exceeds the attacker control (depicted by the solid line). Finally, in Figure 7c, we superimpose Figures 7a and 7b. This illustrates that when the set of irrelevant attacks is excluded from the release control (the area under the white dashed lines), the program is accepted by robustness with endorsements.

Examples. Program [•]; low := endorse$_{\eta_1}$(u < h) is accepted by Definition 5.3. Consider an initial memory with $m(h) = 7$ and the attack $u := 1$; this produces the trace $[(u, 1)] \cdot endorse(\eta_1, 1)$. The endorsed assignment also produces a release event. We have that
• the release control $R^{\rhd}$ is the set of all attacks that reach the low assignment;
• the irrelevant traces $\phi([(u, 1)] \cdot endorse(\eta_1, 1))$ are the traces that end in an endorsement event $endorse(\eta_1, v)$ with $v \ne 1$; thus the irrelevant attacks $\Phi([\bullet];\, low := endorse_{\eta_1}(u < h),\, m,\, [(u, 1)] \cdot endorse(\eta_1, 1))$ must consist of the attacks that reach the low assignment and set $u$ to values $u \ge 7$;
• the left-hand side of Definition 5.3 is therefore the set of attacks that reach the endorsement and set $u$ so that $u < 7$.
• As for the attacker control on the right-hand side, it consists of the attacks that set $u < 7$.

Hence the set inclusion of Definition 5.3 holds, and the program is accepted.

Program [•]; low := endorse$_{\eta_1}$(u); low′ := u < h′′ is accepted. The endorsement in the first assignment implies that all relevant attacks must agree on the value of $u$ and, consequently, they agree on the value of $u < h''$, which gets assigned to low′. This also means that the relevant attacks belong to the attacker control (which contains all attacks that agree on $u < h''$).

Program [•]; low := endorse$_{\eta_1}$(u < h); low′ := u < h′′ is rejected. Take an initial memory such that $m(h) \ne m(h'')$. The set of relevant attacks after the second assignment contains attacks that agree on $u < h$ (due to the endorsement), but not necessarily on $u < h''$. The latter, however, is the requirement for the attacks that belong to the attacker control.

Program [•]; if u > 0 then h′ := endorse(u) else skip; low := h′ < h is rejected. Assume an initial memory where $m(h) = m(h') = 7$. Consider the attack $a_1$ that sets $u := 1$, and the trace $\vec{t}_1$ that it produces. This trace endorses $u$ in the then branch, overwrites the value of $h'$ with $1$, and produces a release event $(low, 1)$. Consider another attack $a_2$ that sets $u := 0$, and the corresponding trace $\vec{t}_2$. This trace contains the release event $(low, 0)$ without any endorsements. Now, the attacker control $R(c[\vec{\bullet}], m, a_2, \vec{t}_2)$ excludes $a_1$ because of the disagreement at the release event. At the same time, $a_1$ is a relevant attack for $a_2$, because no endorsements happen along $\vec{t}_2$.

Consider a program $c[\vec{\bullet}]$ that contains no endorsements. In this case, for all possible traces $\vec{t}$, we have $\phi(\vec{t}) = \phi_0(\vec{t}) = \emptyset$. Therefore, by Definition 5.2 it must be that $\Phi(c[\vec{\bullet}], m, \vec{t}) = \emptyset$ for all memories $m$ and traces $\vec{t}$.
This indicates that for programs without endorsements, progress-sensitive robustness with endorsements (Definition 5.3) conservatively reduces to the earlier definition of progress-sensitive robustness (Definition 4.3).

Progress-insensitive robustness with endorsement is defined similarly. The intuition remains the same, but we use the progress-insensitive variants of release control and attacker control:

Definition 5.4 (Progress-insensitive robustness with endorsement). Program $c[\vec{\bullet}]$ satisfies progress-insensitive robustness with endorsement if for all memories $m$, attacks $\vec{a}$, and traces $\vec{t} \cdot \vec{r}$ such that $\langle c[\vec{a}], m \rangle \longrightarrow^*_{\vec{t}} \langle c', m' \rangle \longrightarrow^*_{\vec{r}}$ and $\vec{r}$ contains a release event, i.e., $k_{\to}(c[\vec{a}], m_P, \vec{t}_P) \supset k(c[\vec{a}], m_P, \vec{t}_P \cdot \vec{r}_P)$, we have

$R^{\rhd}_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \setminus \Phi(c[\vec{\bullet}], m, \vec{t} \cdot \vec{r}) \subseteq R_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$

As a final note in this section, observe that because of the particular use of irrelevant attacks in Definitions 5.3 and 5.4, it is sufficient for us to define irrelevant traces so that they only match at the endorsement events. A slightly more general notion of irrelevance would require $\vec{q}$ in Definition 5.1 to be similar to a prefix of $\vec{t}$.

6. Enforcement

We now explore how to enforce robustness using a security type system. While this section focuses on progress-insensitive enforcement, it is possible to refine the type system to deal with progress sensitivity (modulo availability attacks) [26, 19]. Figures 8 and 9 display the typing rules for expressions and commands. This type system is based on the one of [16] and is similar to many standard security type systems.

Declassification. We extend the language with a construct for declassification of expressions, declassify(e).
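To make the two event-producing constructs concrete, the following is a minimal interpreter sketch. It is our own encoding, not code from the paper: the tuple-based constructors are hypothetical, and for simplicity every assignment is recorded as an event, whereas the paper records low events only for low variables. It shows endorse$_\eta$(e) emitting an $endorse(\eta, v)$ event and declassify(e) having no additional run-time meaning.

```python
import operator

def eval_expr(e, m):
    """Evaluate an expression over memory m."""
    kind = e[0]
    if kind == "const":
        return e[1]
    if kind == "var":
        return m[e[1]]
    if kind == "op":                      # ("op", fn, e1, e2)
        return e[1](eval_expr(e[2], m), eval_expr(e[3], m))
    if kind == "declassify":              # no extra run-time meaning (Section 6)
        return eval_expr(e[1], m)
    raise ValueError(kind)

def run(c, m, trace):
    """Run command c over memory m, appending events to trace."""
    kind = c[0]
    if kind == "skip":
        pass
    elif kind == "seq":                   # ("seq", c1, c2)
        run(c[1], m, trace)
        run(c[2], m, trace)
    elif kind == "assign":                # ("assign", x, e) -> event (x, v)
        v = eval_expr(c[2], m)
        m[c[1]] = v
        trace.append((c[1], v))
    elif kind == "endorse":               # ("endorse", x, eta, e) -> endorse(eta, v)
        v = eval_expr(c[3], m)
        m[c[1]] = v
        trace.append(("endorse", c[2], v))
    elif kind == "if":                    # ("if", e, c_true, c_false)
        run(c[2] if eval_expr(c[1], m) else c[3], m, trace)
    elif kind == "while":                 # ("while", e, body)
        while eval_expr(c[1], m):
            run(c[2], m, trace)
    else:
        raise ValueError(kind)

# The example [.]; low := endorse_eta1(u < h) under attack u := 1, with m(h) = 7:
prog = ("seq", ("assign", "u", ("const", 1)),
               ("endorse", "low", "eta1",
                ("op", operator.lt, ("var", "u"), ("var", "h"))))
mem = {"h": 7, "u": 0, "low": 0}
trace = []
run(prog, mem, trace)
# trace is [("u", 1), ("endorse", "eta1", True)]
```

The endorsement event carries only the label and the value, matching the remark above that the assigned variable need not be mentioned.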
Whereas in earlier examples we considered an assignment l := h to be secure if it did not violate robustness, we now require information flows from secret to public to be mediated by declassification. We note that declassification has no additional semantics and, in the context of our simple language, could be inferred automatically. This may be achieved by placing declassifications in public assignments that appear in trusted code, i.e., in the non-$\bullet$ parts of the program. Moreover, making declassification explicit has the following motivations:
(1) At the enforcement level, the type system conveniently ensures that a non-progress release event may happen only at a declassification. All other assignments preserve progress-insensitive knowledge.
(2) Much of the related work on language-based declassification policies uses similar type systems. Showing that our security policies can be enforced using such systems makes the results more general.

Figure 8 (typing of expressions):

$\Gamma \vdash n : \ell, \emptyset$    $\Gamma \vdash x : \Gamma(x), \emptyset$    $\dfrac{\Gamma \vdash e_1 : \ell_1, D_1 \quad \Gamma \vdash e_2 : \ell_2, D_2}{\Gamma \vdash e_1\ op\ e_2 : \ell_1 \sqcup \ell_2,\, D_1 \cup D_2}$

(T-DECL) $\dfrac{\Gamma \vdash e : \ell, D}{\Gamma \vdash declassify(e) : \ell \sqcap (P, U),\, vars(e)}$

Figure 9 (typing of commands):

(T-SKIP) $\Gamma, pc \vdash skip$

(T-SEQ) $\dfrac{\Gamma, pc \vdash c_1 \quad \Gamma, pc \vdash c_2}{\Gamma, pc \vdash c_1; c_2}$

(T-ASGMT) $\dfrac{\Gamma \vdash e : \ell, D \quad \ell \sqcup pc \sqsubseteq \Gamma(x) \quad \forall y \in D.\ \Gamma(y) \sqsubseteq (S, T) \quad D \ne \emptyset \Rightarrow pc \sqsubseteq (P, T)}{\Gamma, pc \vdash x := e}$

(T-IF) $\dfrac{\Gamma \vdash e : \ell, \emptyset \quad \Gamma, pc \sqcup \ell \vdash c_1 \quad \Gamma, pc \sqcup \ell \vdash c_2}{\Gamma, pc \vdash if\ e\ then\ c_1\ else\ c_2}$

(T-WHILE) $\dfrac{\Gamma \vdash e : \ell, \emptyset \quad \Gamma, pc \sqcup \ell \vdash c}{\Gamma, pc \vdash while\ e\ do\ c}$

(T-HOLE) $\dfrac{pc \sqsubseteq (P, U)}{\Gamma, pc \vdash \bullet}$

(T-ENDORSE) $\dfrac{pc \sqcup \Gamma(x) \sqsubseteq (S, T) \quad pc \sqsubseteq \Gamma(x) \quad \Gamma \vdash e : \ell, \emptyset \quad \ell \sqcap (S, T) \sqsubseteq \Gamma(x)}{\Gamma, pc \vdash x := endorse(e)}$

Typing of expressions. The typing rules for expressions have the form $\Gamma \vdash e : \ell, D$, where $\ell$ is the level of the expression and $D$ is a set of variables that may be declassified.
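The expression rules of Figure 8 can be transcribed into a small checker. This is a sketch under our own encoding: representing a level as a (confidentiality, integrity) pair with $P \sqsubseteq S$ and $T \sqsubseteq U$, and giving constants the least level, are our choices (a constant can be typed at any level).

```python
# Orderings of the two two-point lattices: P below S, T below U.
CONF = {"P": 0, "S": 1}
INTEG = {"T": 0, "U": 1}

def join(l1, l2):
    """Least upper bound of two levels (componentwise)."""
    return (max(l1[0], l2[0], key=CONF.get), max(l1[1], l2[1], key=INTEG.get))

def meet(l1, l2):
    """Greatest lower bound of two levels (componentwise)."""
    return (min(l1[0], l2[0], key=CONF.get), min(l1[1], l2[1], key=INTEG.get))

def vars_of(e):
    """Free variables of an expression."""
    if e[0] == "const":
        return set()
    if e[0] == "var":
        return {e[1]}
    if e[0] == "op":
        return vars_of(e[2]) | vars_of(e[3])
    if e[0] == "declassify":
        return vars_of(e[1])
    raise ValueError(e[0])

def type_expr(gamma, e):
    """Gamma |- e : level, D.  Returns (level, set of declassified variables)."""
    if e[0] == "const":
        return ("P", "T"), set()          # our choice: the least level
    if e[0] == "var":
        return gamma[e[1]], set()
    if e[0] == "op":                      # join levels, union declassified sets
        l1, d1 = type_expr(gamma, e[2])
        l2, d2 = type_expr(gamma, e[3])
        return join(l1, l2), d1 | d2
    if e[0] == "declassify":              # (T-DECL): meet with (P, U), D = vars(e)
        lvl, _ = type_expr(gamma, e[1])
        return meet(lvl, ("P", "U")), vars_of(e[1])
    raise ValueError(e[0])

# Secret trusted h, public untrusted u:
gamma = {"h": ("S", "T"), "u": ("P", "U")}
decl = type_expr(gamma, ("declassify", ("var", "h")))
# decl == (("P", "T"), {"h"}): confidentiality dropped to P, integrity T kept
```

The meet with $(P, U)$ visibly drops only the confidentiality component, which is exactly the downgrading behavior the prose attributes to (T-DECL).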
The rule for declassification is the most interesting of the expression rules. It downgrades the confidentiality level of the expression by returning $\ell \sqcap (P, U)$, and counts all variables in $e$ as declassified.

Typing of commands. The typing judgments for commands have the form $\Gamma, pc \vdash c$. The rules are standard for a security type system; we highlight the typing of assignments, endorsement, and holes.

Assignments have two extra premises for the case in which the assigned expression contains a declassification ($D \ne \emptyset$). The rule (T-ASGMT) requires that all variables that may be declassified have high integrity. The rule also bounds the $pc$-label by $(P, T)$, which enforces that no declassification happens in untrusted or secret contexts. These requirements guarantee that the information released by the declassification does not directly depend on attacker-controlled variables.

[Figure 10. High-level structure of the proof of Proposition 6.1: the Sequential composition lemmas 1 and 2 feed the Advancement lemma, which together with the Control Backbone lemma yields Proposition 6.1.]

The typing rule for endorsement (T-ENDORSE) requires that the $pc$-label is trusted and that the result of the endorsement is stored in a trusted variable: $pc \sqcup \Gamma(x) \sqsubseteq (S, T)$. Note that requiring a trusted $pc$-label is crucial, while the restriction that $x$ is trusted could easily be lifted, since trusted values may flow into untrusted variables. Because endorsed expressions preserve their confidentiality level, we also check that $x$ has the right security level to store the result of the expression. This is done by demanding that $\ell \sqcap (S, T) \sqsubseteq \Gamma(x)$, where taking the meet of $\ell$ and $(S, T)$ boosts integrity but keeps the confidentiality level of $\ell$.

The rule for holes forbids placing attacker-provided code in high-confidentiality contexts. For simplicity, we disallow declassification in the guards of if and while.

6.1.
Soundness. This section shows that the type system of Figures 8 and 9 is sound. We formulate the top-level soundness result as Proposition 6.1; its proof appears at the end of the section.

Proposition 6.1. If $\Gamma, pc \vdash c[\vec{\bullet}]$, then for all attacks $\vec{a}$, memories $m$, and traces $\vec{t} \cdot \vec{r}$ produced by $\langle c[\vec{a}], m \rangle$, where $k_{\to}(c[\vec{a}], m_P, \vec{t}_P) \supset k(c[\vec{a}], m_P, \vec{t}_P \cdot \vec{r}_P)$, we have that

$R^{\rhd}_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t}) \setminus \Phi(c[\vec{\bullet}], m, \vec{t} \cdot \vec{r}) \subseteq R_{\to}(c[\vec{\bullet}], m, \vec{a}, \vec{t} \cdot \vec{r})$

Auxiliary definitions. We introduce an auxiliary definition of progress-insensitive noninterference along a part of a trace, abbreviated PINI, which we use in the proof of Proposition 6.1. Figure 10 shows the high-level structure of the proof. We define declassification events to be low events that involve declassifications. The central property of this proof, the Control Backbone lemma (Lemma 6.8), captures the behavior of similar attacks and traces that are generated by well-typed commands. Together with the Advancement lemma, it shows that declassification events soundly approximate release events. The proof of Proposition 6.1 follows directly from the Control Backbone and Advancement lemmas.

Definition 6.2 (Progress-insensitive noninterference along a part of a trace). Given a program $c$, memory $m$, and two traces $\vec{t}$ and $\vec{t}^{+}$ such that $\vec{t}^{+}$ is an extension of $\vec{t}$, we say that $c$ satisfies progress-insensitive noninterference along the part of the trace from $\vec{t}$ to $\vec{t}^{+}$, denoted by $PINI(c, m, \vec{t}, \vec{t}^{+})$, whenever for the low events of the corresponding traces, $\vec{\ell}_{1..n} \triangleq \vec{t}_P$ and $\vec{\ell}_{1..N} \triangleq \vec{t}^{+}_P$ with $n \le N$, it holds that

$\forall i.\ n < i < N.\ k_{\to}(c, m_P, \vec{\ell}_{1..i}) \subseteq k(c, m_P, \vec{\ell}_{1..i+1})$

Lemma 6.3 (Noninterference for no declassifications).
Given a program $c$ without declassifications such that $\Gamma, pc \vdash c$, for all memories $m$ and possible low events $\vec{\ell} \cdot \ell'$ such that $\langle c, m \rangle \longrightarrow^*_{\vec{\ell}} \langle c', m' \rangle \longrightarrow^*_{\ell'} \langle c'', m'' \rangle$, it holds that $k_{\to}(c, m, \vec{\ell}) \subseteq k(c, m, \vec{\ell} \cdot \ell')$.

Proof. By induction on $c$ (cf. [1]). □

Lemma 6.4 (Noninterference for the tail of sequential composition). Assume a program $c$ such that for all memories $m$ and low events $\ell$ with $\langle c, m \rangle \longrightarrow^*_{\ell} \langle c', m' \rangle$, it holds that $k_{\to}(c, m_P, \epsilon) \subseteq k(c, m_P, \ell)$. Then for all programs $c'$, initial memories $i$, and low events $\vec{\ell}\,' \cdot \ell'$ such that $\langle c'; c, i \rangle \longrightarrow^*_{\vec{\ell}\,'} \langle c, i' \rangle \longrightarrow^*_{\ell'}$, we have $k_{\to}(c'; c, i_P, \vec{\ell}\,') \subseteq k(c'; c, i_P, \vec{\ell}\,' \cdot \ell')$.

Proof. Assume the set inclusion in the lemma's statement does not hold. By Definition 2.2, there must exist an initial memory $m$ such that $m =_P i$ and $\langle c'; c, m \rangle \longrightarrow^*_{\vec{\ell}\,'} \langle c, m' \rangle \longrightarrow^*_{\ell''}$, but $\ell' \ne \ell''$. Because $m =_P i$ and both traces produce $\vec{\ell}\,'$, it must also be that $m' =_P i'$. But this also implies that $m' \notin k(c, i'_P, \ell')$, that is, $k_{\to}(c, i'_P, \epsilon) \not\subseteq k(c, i'_P, \ell')$, which contradicts the main assumption about $c$. □

The following two helper lemmas correspond to the sequential-composition sub-cases of the Advancement lemma. Lemma 6.5 captures the special case in which the first command of the sequential composition $c_1[\vec{\bullet}]; c_2[\vec{\bullet}]$ does not produce a declassification event, while Lemma 6.6 considers the general case in which a declassification event may be produced by either $c_1[\vec{\bullet}]$ or $c_2[\vec{\bullet}]$.

Lemma 6.5 (Sequential composition 1).
Given
• a program $c_0[\vec{\bullet}]$ such that $\Gamma, pc \vdash c_0[\vec{\bullet}]$,
• an initial memory $m_0$,
• two initial attacks $\vec{a}_0$, $\vec{b}_0$,
• two intermediate configurations $\langle c_1[\vec{a}_1]; c_2[\vec{a}_2], m \rangle$ and $\langle c_1[\vec{b}_1]; c_2[\vec{b}_2], s \rangle$

such that
• $\langle c_0[\vec{a}_0], m_0 \rangle \longrightarrow^*_{\vec{t}_0} \langle c_1[\vec{a}_1]; c_2[\vec{a}_2], m \rangle \longrightarrow^*_{\vec{t}_\alpha} \langle c_2[\vec{a}_2], m' \rangle \longrightarrow^*_{\vec{t}_\beta \cdot r}$,
• $\langle c_0[\vec{b}_0], m_0 \rangle \longrightarrow^*_{\vec{q}_0} \langle c_1[\vec{b}_1]; c_2[\vec{b}_2], s \rangle \longrightarrow^*_{\vec{q}\,'' \cdot r'}$,
• $PINI(c_0[\vec{a}_0], m_0, \vec{t}_0, \vec{t}_0 \cdot \vec{t}_\alpha \cdot \vec{t}_\beta)$,
• $PINI(c_0[\vec{b}_0], m_0, \vec{q}_0, \vec{q}_0 \cdot \vec{q}\,'')$,
• $r$ and $r'$ are declassification events,
• $\vec{b}_0 \notin \Phi(c_0[\vec{\bullet}], m_0, \vec{t}_0 \cdot \vec{t}_\alpha \cdot \vec{t}_\beta \cdot r)$,
• $\vec{t}_0$ and $\vec{q}_0$ agree on their low events, and
• $m =_T s$,

then $\vec{q}\,'' = \vec{q}_\alpha \cdot \vec{q}_\beta$ such that
• $\langle c_1[\vec{a}_1]; c_2[\vec{a}_2], s \rangle \longrightarrow^*_{\vec{q}_\alpha} \langle c_2[\vec{a}_2], s' \rangle \longrightarrow^*_{\vec{q}_\beta \cdot r}$,
• $\vec{t}_\alpha$ and $\vec{q}_\alpha$ agree on their low events, and
• $m' =_T s'$.

Proof. By induction on the structure of $c_1[\vec{\bullet}]$. The case skip is immediate. Consider the other cases.

• Case $[\vec{\bullet}]$. In this case $\vec{a}_1 = a_1$ and $\vec{b}_1 = b_1$. By assumption, $a_1$ and $b_1$ are fair attacks, which means that $\vec{t}_\alpha$ has no release events and no assignments to trusted variables. Similarly, because no low assignments can be produced when running $b_1$, by Definition 3.2 there must be $s'$ and $\vec{q}_\alpha$ that satisfy the demands of the lemma.

• Case $x := e$. We consider the confidentiality and integrity properties separately.
Confidentiality: we show that even if a low event is possible, it is not a release event. We have two cases, based on the confidentiality level of $x$.
(a) $\Gamma(x) = (P, \_)$: a low event is generated by the low assignment. By Lemmas 6.3 and 6.4, the assignment must not be a release event.
(b) $\Gamma(x) = (S, \_)$: no low events are generated.
Integrity: next, we show that the resulting memories agree on trusted values.
The two cases are
(a) $\Gamma(x) = (\_, T)$: in this case it must be that $\Gamma(e) = (\_, T)$ and, hence, $m(e) = s(e)$. Therefore $m' =_T s'$.
(b) $\Gamma(x) = (\_, U)$: an assignment to $x$ does not change how the memories agree on trusted values.

• Case $x := endorse_\eta(e)$. We consider the confidentiality and integrity properties of this command separately.
Confidentiality: similar to the case for assignment.
Integrity: we consider two cases.
(a) $\Gamma(x) = (\_, T)$: in this case, the trace produces an event $endorse(\eta, v)$. We note that $\vec{b}_0 \notin \Phi(c_0[\vec{\bullet}], m_0, \vec{t}_0 \cdot \vec{t}_\alpha \cdot \vec{t}_\beta \cdot r)$. In particular, we have that $\vec{q}_0 \cdot \vec{q}\,'' \cdot r' \notin \phi(\vec{t}_0 \cdot \vec{t}_\alpha \cdot \vec{t}_\beta \cdot r)$. If we assume that the current command is the $i$-th endorsement in the trace, we have that $\vec{q}_0 \cdot \vec{q}\,'' \cdot r' \notin \phi_i(\vec{t}_0 \cdot \vec{t}_\alpha \cdot \vec{t}_\beta \cdot r)$. But we also know that $\vec{t}_0$ and $\vec{q}_0$ agree on their trusted events. Because, by rule (T-ENDORSE), the result of an endorsement is assigned to a trusted variable, this implies that $\vec{q}_0$ and $\vec{t}_0$ must agree on the endorsed values. Therefore, the only possibility for $\vec{q}_0 \cdot \vec{q}\,'' \cdot r' \notin \phi_i(\vec{t}_0 \cdot \vec{t}_\alpha \cdot \vec{t}_\beta \cdot r)$ is that $\vec{q}\,''$ generates $endorse(\eta, v)$ as well. This implies that the value $v$ is assigned to $x$ in both cases, which guarantees that $m' =_T s'$.
(b) $\Gamma(x) = (\_, U)$: not applicable, by (T-ENDORSE).

• Case $c_\alpha; c_\beta$. By two applications of the induction hypothesis: one to $c_\alpha; (c_\beta; c_2[\vec{\bullet}])$ and the other to $c_\beta; c_2[\vec{\bullet}]$.

• Case if $e$ then $c_{true}$ else $c_{false}$. We have the following cases, based on the type of the expression $e$.
(a) $\Gamma(e) = (\_, T)$: both runs take the same branch, and we are done by the induction hypothesis.
(b) $\Gamma(e) = (\_, U)$: neither $c_{true}$ nor $c_{false}$ contains declassifications or high-integrity assignments. This guarantees that $m' =_T s'$.

• Case while $e$ do $c_{loop}$: similar to sequential composition and conditionals.
□

Lemma 6.6 (Sequential composition 2). Given
• a program $c_0[\vec{\bullet}]$ such that $\Gamma, pc \vdash c_0[\vec{\bullet}]$,
• an initial memory $m_0$,
• two initial attacks $\vec{a}_0$, $\vec{b}_0$,
• two intermediate configurations $\langle c_1[\vec{a}_1]; c_2[\vec{a}_2], m \rangle$ and $\langle c_1[\vec{b}_1]; c_2[\vec{b}_2], s \rangle$

such that
• $\langle c_0[\vec{a}_0], m_0 \rangle \longrightarrow^*_{\vec{t}_0} \langle c_1[\vec{a}_1]; c_2[\vec{a}_2], m \rangle \longrightarrow^*_{\vec{t}\,''} \langle c'_1[\vec{a}\,'_1]; c_2[\vec{a}_2], m' \rangle \longrightarrow_{(x,v)} \langle c''_1[\vec{a}\,''_1]; c_2[\vec{a}_2], m'' \rangle$,
• $\langle c_0[\vec{b}_0], m_0 \rangle \longrightarrow^*_{\vec{q}_0} \langle c_1[\vec{b}_1]; c_2[\vec{b}_2], s \rangle \longrightarrow^*_{\vec{q}\,''} \langle d', s' \rangle \longrightarrow_{(y,u)} \langle d'', s'' \rangle$,
• $PINI(c_0[\vec{a}_0], m_0, \vec{t}_0, \vec{t}_0 \cdot \vec{t}\,'')$,
• $PINI(c_0[\vec{b}_0], m_0, \vec{q}_0, \vec{q}_0 \cdot \vec{q}\,'')$,
• $(x, v)$ and $(y, u)$ are declassification events,
• $\vec{b}_0 \notin \Phi(c_0[\vec{\bullet}], m_0, \vec{t}_0 \cdot \vec{t}\,'' \cdot (x, v))$,
• $\vec{t}_0$ and $\vec{q}_0$ agree on their low events, and
• $m =_T s$,

then
• $\vec{a}\,'_1 = \vec{a}\,''_1$,
• there is $\vec{b}\,'_1$ such that $d' = c'_1[\vec{b}\,'_1]; c_2[\vec{b}_2]$ and $d'' = c''_1[\vec{b}\,'_1]; c_2[\vec{b}_2]$,
• $\vec{t}\,''$ and $\vec{q}\,''$ agree on their low events,
• $m' =_T s'$ and $m'' =_T s''$, and
• $(x, v) = (y, u)$.

Proof. By induction on the structure of $c_1[\vec{\bullet}]$. In the cases of $[\vec{\bullet}]$, skip, and $x := endorse(e)$, no declassification events may be produced, so these cases are impossible.

• Case $x := e$. When $D = \emptyset$, no declassification events may be produced. When $D \ne \emptyset$, a declassification event is produced by both traces. Also, $\vec{t}\,'' = \vec{q}\,'' = \epsilon$, $m' = m$, and $s' = s$. Because $m =_T s$ and $\Gamma(e) = (\_, T)$, both traces produce the same declassification event $(x, v)$, and therefore $m'' =_T s''$.
• case c_α[~a_α]; c_β[~a_β]. We have two cases, depending on whether c_α[~a_α] generates low events:
(a) ⟨c_α[~a_α]; (c_β[~a_β]; c₂[~a₂]), m⟩ →*_{ℓ₁···ℓ_N} ⟨c_β[~a_β]; c₂[~a₂], m′⟩. In this case, by Lemma 6.5 it must be that ⟨c_α[~b_α]; (c_β[~b_β]; c₂[~b₂]), m⟩ →*_{ℓ₁···ℓ′_N} ⟨c_β[~b_β]; c₂[~b₂], s′⟩ with m′ =_T s′. Then we can apply the induction hypothesis to c_β[~•].
(b) Otherwise, (x, v) is produced by c_α[~a_α] and we are done by applying the induction hypothesis to c_α[~•].
• case if e then c_true else c_false. We have two cases:
(a) Γ(e) = (_, T). In this case both runs take the same branch, and we are done by the induction hypothesis.
(b) Γ(e) = (_, U). Impossible, because declassification events are not allowed in untrusted integrity contexts.
• case while e do c. Similar to sequential composition and conditionals. □

Lemma 6.7 (Advancement). Given
• a program c₀[~•] such that Γ, pc ⊢ c₀[~•]
• an initial memory m₀
• two initial attacks ~a₀, ~b₀
• two intermediate configurations ⟨c[~a], m⟩ and ⟨c[~b], s⟩
such that
• ⟨c₀[~a₀], m₀⟩ →*_{~t′} ⟨c[~a], m⟩ →*_{~t″} ⟨c′[~a′], m′⟩ →_{(x,v)} ⟨c″[~a″], m″⟩
• ⟨c₀[~b₀], m₀⟩ →*_{~q′} ⟨c[~b], s⟩ →*_{~q″} ⟨d′, s′⟩ →_{(y,u)} ⟨d″, s″⟩
• PINI(c₀[~a₀], m₀, ~t′, ~t′ · ~t″)
• PINI(c₀[~b₀], m₀, ~q′, ~q′ · ~q″)
• (x, v) and (y, u) are declassification events
• ~b₀ ∉ Φ(c₀[~•], m₀, ~t′ · ~t″ · (x, v))
• ~t′↾ = ~q′↾
• m =_T s
then
• ~a′ = ~a″
• there is ~b′ such that
  • d′ = c′[~b′]
  • d″ = c″[~b′]
• ~t″↾ = ~q″↾
• m′ =_T s′
• m″ =_T s″
• (x, v) = (y, u).

Proof. By induction on c[~•].
In the cases of [~•], skip, and x := endorse(e), no declassification events may be produced, so these cases are impossible.
• x := e. When D = ∅, no declassification events may be produced. When D ≠ ∅, a declassification event is produced by both traces. Also, ~t″ = ~q″ = ε, and m′ = m and s′ = s. Because m =_T s and Γ(e) = (_, T), both traces produce the same declassification event (x, v), and therefore m″ =_T s″.
• case c_α; c_β. By Lemma 6.6.
• case if e then c_true else c_false. We have two cases:
(a) Γ(e) = (_, T). In this case both runs take the same branch and we are done by the induction hypothesis.
(b) Γ(e) = (_, U). Impossible, because declassification events are not allowed in untrusted integrity contexts.
• case while e do c. Similar to sequential composition and conditionals. □

Lemma 6.8 (Control Backbone). Given Γ, pc ⊢ c[~•], a memory m, an initial attack ~a, and a trace ~t such that

⟨c[~a], m⟩ →*_{~t₁} ⟨c₁[~a₁], m₁⟩ →_{r₁} ⟨c′₁[~a′₁], m′₁⟩ →*_{~t₂} ··· →_{r_{i−1}} ⟨c′_{i−1}[~a′_{i−1}], m′_{i−1}⟩ →*_{~t_i} ⟨c_i[~a_i], m_i⟩ →_{r_i} ⟨c′_i[~a′_i], m′_i⟩ →* ...

where the r_i are declassification events, then for all ~b, ~q such that (~a, ~t) ∼→^{c[~•],m} (~b, ~q) and ~b ∉ Φ(c[~•], m, ~t), the respective configurations match at the declassification events, that is,

⟨c[~b], m⟩ →*_{~q₁} ⟨c₁[~b₁], s₁⟩ →_{r₁} ⟨c′₁[~b′₁], s′₁⟩ →*_{~q₂} ··· →_{r_{i−1}} ⟨c′_{i−1}[~b′_{i−1}], s′_{i−1}⟩ →*_{~q_i} ⟨c_i[~b_i], s_i⟩ →_{r_i} ⟨c′_i[~b′_i], s′_i⟩ →* ...
where i ranges over the number of declassification events in ~t, and moreover
• m_i =_T s_i and m′_i =_T s′_i
• ~q_i↾ = ~t_i↾

Proof. By induction on the number of declassification events. The base case, n = 0, is immediate. For the inductive case, assume the proposition holds for the first n declassification events in ~t, and apply Lemma 6.7. □

We conclude this section with the proof of Proposition 6.1.

Proof of Proposition 6.1. Consider ~b ∈ R↷→(c[~•], m, ~a, ~t) \ Φ(c[~•], m, ~t · ~r). We want to show that ~b ∈ R→(c[~•], m, ~a, ~t · ~r). Because ~b ∈ R↷→(c[~•], m, ~a, ~t), we have that

~b ∈ { ~b | ∃~q. (~a, ~t) ∼→^{c[~•],m} (~b, ~q) ∧ (∃~r′. k→(c[~b], m_P, ~q_P) ⊃ k(c[~b], m_P, ~q_P · ~r′_P) ∨ ⟨c[~b], m⟩ ⇓) }.

We consider the two cases:
(1) ~b ∈ { ~b | ∃~q. (~a, ~t) ∼→^{c[~•],m} (~b, ~q) ∧ ∃~r′. k→(c[~b], m_P, ~q_P) ⊃ k(c[~b], m_P, ~q_P · ~r′_P) }.
By the definition of Φ(c[~•], m, ~t · ~r), we have that ~b ∉ Φ(c[~•], m, ~t · ~r) implies ~b ∉ Φ(c[~•], m, ~t). By the Control Backbone Lemma 6.8, the two traces agree on the declassification points up to the length of ~t; in particular, there are ~t₀, ~t₁, ~q₀, ~q₁ such that ~t = ~t₀ · ~t₁ · ~r and ~q = ~q₀ · ~q₁, there are no release events along ~t₁ and ~q₁, and

⟨c[~a], m⟩ →*_{~t₀} ⟨c′[~a′], m′⟩ →*_{~t₁ · ~r}  and  ⟨c[~b], m⟩ →*_{~q₀} ⟨c′[~b′], s′⟩ →*_{~q₁ · ~r′}

where ~t↾ = ~q↾ and m′ =_T s′. By the Advancement Lemma 6.7, we obtain that both traces must agree on ~r and ~r′.
This is sufficient to extend the original partitioning of (~a, ~t) and (~b, ~q) to (~a, ~t · ~r) and (~b, ~q₀ · ~q₁ · ~r′) such that (~a, ~t · ~r) ∼→^{c[~•],m} (~b, ~q₀ · ~q₁ · ~r′).
(2) ~b ∈ { ~b | ∃~q. (~a, ~t) ∼→^{c[~•],m} (~b, ~q) ∧ ⟨c[~b], m⟩ ⇓ }.
This case is impossible. By the Control Backbone Lemma 6.8 there must be two respective configurations ⟨c′[~a′], m′⟩ and ⟨c′[~b′], s′⟩ with m′ =_T s′, such that ⟨c′[~a′], m′⟩ leads to a release event, but ⟨c′[~b′], s′⟩ terminates without release events. By analysis of c′, similar to the Advancement Lemma, we conclude that none of the cases is possible. □

7. Checked endorsement

Realistic applications endorse attacker-provided data based on certain conditions. For instance, an SQL string that depends on user-provided input is executed if it passes sanitization, a new password is accepted if the user can provide the old one, and a secret key is accepted if nonces match. Because this is a recurring pattern in security-critical applications, we argue for language support in the form of checked endorsements. This section extends the language with checked endorsements and derives both security conditions and a typing rule for them. Moreover, we show that checked endorsements can be decomposed into a sequence of direct endorsements, and prove that for well-typed programs, the semantic conditions for robustness are the same with checked endorsements as with unchecked endorsements.

Syntax and semantics. Within this section, we assume checked endorsements are the only endorsement mechanism in the language. We introduce syntax for checked endorsements:

c[~•] ::= ... | endorse_η(x) if e then c else c

The semantics of this command is that the variable x is endorsed if the expression e evaluates to true.
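To make this behavior concrete, here is a minimal Python sketch in our own toy encoding (the function name, event format, and use of a closure for e are our assumptions, not the paper's notation): a single step of endorse_η(x) if e then c1 else c2 emits a checked(η, m(x), b) event and selects a branch.

```python
# Hypothetical sketch of one evaluation step of a checked endorsement.
# m is the memory (a dict), e is the check (a predicate over m),
# c1/c2 are the two branch commands (opaque here).

def step_checked_endorse(eta, x, e, c1, c2, m):
    """Return (event, next_command) for endorse_eta(x) if e then c1 else c2."""
    b = 1 if e(m) != 0 else 0           # outcome of the check
    event = ("checked", eta, m[x], b)   # records label, value of x, and outcome
    return event, (c1 if b else c2)
```

For instance, with m = {u: 5, u0: 5} and the check u = u0, the step emits ("checked", η, 5, 1) and continues with the then branch.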
If the check succeeds, the then branch is taken, and x is assumed to have high integrity there. If the check fails, the else branch is taken. As with direct endorsements, we assume checked endorsements in the program text have unique labels η. These labels may be omitted from the examples, but they are explicit in the semantics.

Endorsement events. Checked endorsement events checked(η, v, b) record the unique label η of the endorsement command, the value v of the variable that can potentially be endorsed, and the result b of the check, which is either 0 or 1.

  m(e) ↓ v    v ≠ 0
  ───────────────────────────────────────────────
  ⟨endorse_η(x) if e then c₁ else c₂, m⟩ →_{checked(η, m(x), 1)} ⟨c₁, m⟩

  m(e) ↓ v    v = 0
  ───────────────────────────────────────────────
  ⟨endorse_η(x) if e then c₁ else c₂, m⟩ →_{checked(η, m(x), 0)} ⟨c₂, m⟩

Irrelevant attacks. For checked endorsement we define a suitable notion of irrelevant attacks. The reasoning behind it is the following.
(1) Both ~t and ~t′ reach the same endorsement statement: η_i = η′_i.
(2) At least one of them results in a positive endorsement: b_i + b′_i ≥ 1. This ensures that when both traces fail the check, neither attack is ignored.
(3) The endorsed values are different: v_i ≠ v′_i.
Otherwise, there should be no further difference in what the attacker can influence along the trace. The following definitions formalize the above construction.

Definition 7.1 (Irrelevant traces). Given a trace ~t, in which endorsements are labeled as checked(η_j, v_j, b_j), define sets of irrelevant traces ψ_i(~t), indexed by the number i of checked endorsements in ~t.
Then ψ₀(~t) = ∅, and

ψ_i(~t) = { ~t′ | ~t′ = ~q · checked(η_i, v′_i, b′_i) · ~q′ }

such that ~q is a prefix of ~t′ with i − 1 checked events, all of which agree with the checked events in ~t; (b_i + b′_i ≥ 1) ∧ (v_i ≠ v′_i); and ~q′ contains no checked events. Define ψ(~t) ≜ ⋃_i ψ_i(~t) as the set of irrelevant traces w.r.t. ~t.

Definition 7.2 (Irrelevant attacks).

Ψ(c[~•], m, ~t) ≜ { ~a | ⟨c[~a], m⟩ →*_{~t′} ∧ ~t′ ∈ ψ(~t) }

Using this definition, we can define security conditions for checked robustness.

Definition 7.3 (Progress-sensitive robustness with checked endorsement). A program c[~•] satisfies progress-sensitive robustness with checked endorsement if for all memories m and all attacks ~a such that ⟨c[~a], m⟩ →*_{~t} ⟨c′, m′⟩ →*_{~r}, where ~r contains a release event, i.e., k(c[~a], m_P, ~t_P) ⊃ k(c[~a], m_P, ~t_P · ~r_P), we have

R↷(c[~•], m, ~a, ~t) \ Ψ(c[~•], m, ~t · ~r) ⊆ R(c[~•], m, ~a, ~t · ~r)

The progress-insensitive version is defined similarly, using the progress-insensitive definition of release events and the progress-insensitive versions of control and release control.

Example. In the program

[•]; endorse_η₁(u) if u = u′ then low := u < h else skip

the attacker can modify u and u′. This program is insecure because the unendorsed, attacker-controlled variable u′ influences the decision to declassify. To see that Definition 7.3 rejects this program, consider running it in a memory m with m(h) = 7, and two attacks: a₁, where the attacker sets u := 5; u′ := 0, and a₂, where the attacker sets u := 5; u′ := 5. Denote the corresponding traces up to the endorsement by ~t₁ and ~t₂. We have

~t₁ = [(u, 5) · (u′, 0)] · checked(η₁, 5, 0)  and  ~t₂ = [(u, 5) · (u′, 5)] · checked(η₁, 5, 1).
Because the endorsement in the second trace succeeds, that trace also continues with a low event (low, 1). Following Definition 7.1 we have that ~t₁ ∉ ψ(~t₂ · (low, 1)), implying a₁ ∉ Ψ(c[~•], m, ~t₂ · (low, 1)). Therefore, a₁ ∈ R↷(c[~•], m, a₂, ~t₂) \ Ψ(c[~•], m, ~t₂ · (low, 1)). On the other hand, a₁ ∉ R(c[~•], m, a₂, ~t₂ · (low, 1)), because a₁ can produce no low event corresponding to (low, 1).

Endorsing multiple variables. The syntax for checked endorsements can be extended to multiple variables with the following syntactic sugar, where η_i is the endorsement label corresponding to variable x_i:

endorse(x₁, ..., x_n) if e then c₁ else c₂
  ⟹ endorse_η₁(x₁) if e then
       endorse_η₂(x₂) if true then
         ... c₁ ...
       else skip
     else c₂

Note that in this encoding the condition is checked as early as possible; an alternative encoding would check the condition at the end. While such an encoding would have the advantage of type-checking immediately, we believe that checking the condition as early as possible avoids spurious (albeit harmless in this simple context) endorsements of all but the last variable, and is therefore more faithful semantically.

Typing checked endorsements. To enforce programs with checked endorsements, we extend the type system with the following general rule:

(T-CHECKED)
  Γ′ = Γ[x_i ↦ Γ(x_i) ⊓ (S, T)]    Γ′ ⊢ e : ℓ′, D′    pc′ = pc ⊔ ℓ′    pc′ ⊑ (S, T)
  Γ′, pc′ ⊢ c₁    Γ, pc′ ⊢ c₂
  ───────────────────────────────────────────────
  Γ, pc ⊢ endorse(x₁, ..., x_n) if e then c₁ else c₂

The expression e is type-checked in an environment Γ′ in which the endorsed variables x₁, ..., x_n have trusted integrity; its label ℓ′ is joined with pc to form the auxiliary pc-label pc′.
The level of pc′ must be trusted, ensuring that endorsements happen in a trusted context, and that no declassification in e depends on untrusted variables other than the x_i (this effectively subsumes the need to check the individual variables in D′). Each of the branches is type-checked with the program label set to pc′; however, for c₁ we use the auxiliary typing environment Γ′, since the x_i are trusted there.

The program

[•]; endorse(u) if u = u′ then low := declassify(u < h) else skip

is rejected by this type system. Because the variable u′ is not endorsed, the auxiliary pc-label has untrusted integrity.

7.1. Relation to direct endorsements

Finally, for well-typed programs we can safely translate checked endorsements to direct endorsements, using a translation in which a checked endorsement of n variables is translated to n + 1 direct endorsements. First, we unconditionally endorse the result of the check. The remaining endorsements happen in the then branch, before the translation of c₁. We save the results of the endorsements in temporary variables t₁ ... t_n and replace all occurrences of x₁ ... x_n within c₁ with the temporaries (we assume that each t_i has the same confidentiality level as the corresponding original x_i, and that t₀ has the confidentiality level of the expression e). All other commands are translated to themselves.

Definition 7.4 (Labeled translation from checked endorsements to direct endorsements). Given a program c[~•] that only uses checked endorsements, we define its labeled translation to direct endorsements ⟦c[~•]⟧ inductively:

• ⟦endorse_η(x₁, ..., x_n) if e then c₁ else c₂⟧ ⟹
    t₀ := endorse_η₀(e);
    if t₀ then
      t₁ := endorse_η₁(x₁);
      ...
      t_n := endorse_η_n(x_n);
      ⟦c₁[t_i/x_i]⟧
    else ⟦c₂⟧
• ⟦c₁; c₂⟧ ⟹ ⟦c₁⟧; ⟦c₂⟧
• ⟦if e then c₁ else c₂⟧ ⟹ if e then ⟦c₁⟧ else ⟦c₂⟧
• ⟦while e do c⟧ ⟹ while e do ⟦c⟧
• ⟦c⟧ ⟹ c, for all other commands c.

Adequacy of the translation for well-typed programs. Next we show adequacy of the labeled translation of Definition 7.4 for well-typed programs. Note that for ill-typed programs this adequacy does not hold, as shown by an example at the end of the section.

Without loss of generality, we assume checked endorsements of a single variable (n = 1 in the translation of checked endorsement in Definition 7.4). We adopt an indexing convention in which the checked endorsement with label η_i corresponds to two direct endorsements with labels η_{2i−1} and η_{2i}. The following lemma establishes a connection between the irrelevant attacks of the source and translated runs.

Lemma 7.5 (Synchronized endorsements). Given a program c[~•] that only uses checked endorsements, such that Γ, pc ⊢ c[~•], a memory m, and an attack ~a, such that ⟨c[~a], m⟩ →*_{~t} and ⟨⟦c[~a]⟧, m⟩ →*_{~t̂}, where
• ~t = ~t′ · checked(η_i, v_i, b_i), and
• ~t̂ = ~t̂′ · endorse(η_{2i−1}, 0) or ~t̂ = ~t̂′ · endorse(η_{2i−1}, 1) · endorse(η_{2i}, v)
• k is the number of checked endorse events in ~t
then we have
• R(c[~•], m, ~a, ~t) = R(⟦c[~•]⟧, m, ~a, ~t̂)
• R→(c[~•], m, ~a, ~t) = R→(⟦c[~•]⟧, m, ~a, ~t̂)
• Φ(⟦c[~•]⟧, m, ~t̂) = Ψ(c[~•], m, ~t)

Proof. The first two items follow from the definition of the translation, because the translation does not generate new release events.
To prove the third item, we consider partitions of irrelevant traces generated by the k-th checked endorsement and the direct endorsement(s) that correspond to it. We proceed by induction on k. For the base case, k = 0, i.e., neither ~t nor ~t̂ contains endorsements, it holds that Φ(⟦c[~•]⟧, m, ~t̂) = Ψ(c[~•], m, ~t) = ∅. For the inductive case, define a pair of auxiliary sets

F_k ≜ Φ(⟦c[~•]⟧, m, ~t̂) \ Φ(⟦c[~•]⟧, m, ~t̂′)
P_k ≜ Ψ(c[~•], m, ~t) \ Ψ(c[~•], m, ~t′)

By the induction hypothesis, Φ(⟦c[~•]⟧, m, ~t̂′) = Ψ(c[~•], m, ~t′). By Definitions 5.2 and 7.2, we know that Φ(⟦c[~•]⟧, m, ~t̂) ⊇ Φ(⟦c[~•]⟧, m, ~t̂′) and Ψ(c[~•], m, ~t) ⊇ Ψ(c[~•], m, ~t′). Therefore, in order to prove that Φ(⟦c[~•]⟧, m, ~t̂) = Ψ(c[~•], m, ~t), it is sufficient to show that F_k = P_k. We consider each direction of the equivalence separately.
• F_k ⊇ P_k. Take an attack ~b ∈ P_k. That is, ⟨c[~b], m⟩ produces a trace ~q that agrees with ~t on all checked endorsements except the last one. There are three possible ways in which these endorsements may disagree:
(a) Trace ~t contains checked(η_k, v_k, 1) and ~q contains checked(η_k, v′_k, 1) such that v_k ≠ v′_k. By the rules of the translation, the trace ~t̂, produced by the configuration ⟨⟦c[~a]⟧, m⟩, must have two corresponding endorsement events endorse(η_{2k−1}, 1) and endorse(η_{2k}, v_k). Similarly, the trace ~q̂, produced by ⟨⟦c[~b]⟧, m⟩, has two corresponding endorsement events endorse(η_{2k−1}, 1) and endorse(η_{2k}, v′_k). Because v′_k ≠ v_k, we have ~q̂ ∈ φ(~t̂).
(b) Trace ~t contains the checked endorsement event checked(η_k, v_k, 1), while trace ~q contains the event checked(η_k, v′_k, 0). In this case, the trace ~t̂ obtained from running ⟨⟦c[~a]⟧, m⟩ must contain the two endorsement events endorse(η_{2k−1}, 1) and endorse(η_{2k}, v_k), while the trace ~q̂ corresponding to ⟨⟦c[~b]⟧, m⟩ contains the single event endorse(η_{2k−1}, 0). Therefore, ~q̂ ∈ φ(~t̂).
(c) Trace ~t contains the checked endorsement event checked(η_k, v_k, 0), while trace ~q contains the event checked(η_k, v′_k, 1). This is similar to the previous case.
From ~q̂ ∈ φ(~t̂) it follows that ~b ∈ F_k.
• F_k ⊆ P_k. Take an attack ~b ∈ F_k. There must be a trace ~q̂, produced by ⟨⟦c[~b]⟧, m⟩, such that ~q̂ ∈ φ(~t̂). There are two ways this can happen:
(a) ~q̂ and ~t̂ disagree at the translated endorsement event with label η_{2k−1}. More precisely, one must have the form endorse(η_{2k−1}, 1) and the other, endorse(η_{2k−1}, 0). In the original runs, this corresponds to two traces ~t and ~q such that ~t contains the event checked(η_k, v_k, b_k) and ~q contains the event checked(η_k, v′_k, b′_k). We know that b_k = 1 and b′_k = 0, and hence b_k + b′_k ≥ 1. According to Definition 7.1, we need to show that v′_k ≠ v_k. Assume this is not the case, i.e., that v′_k = v_k. Then by rule (T-CHECKED), we would have b_k = b′_k, which contradicts the earlier conclusion. Hence ~q ∈ ψ(~t).
(b) Alternatively, ~q̂ and ~t̂ disagree at the endorsement event with label η_{2k}. This also means that they agree on the earlier endorsement, i.e., for the corresponding traces with checked endorsements we can show that b_k = b′_k = 1 and v_k ≠ v′_k. Therefore, ~q ∈ ψ(~t).
From ~q ∈ ψ(~t) it follows that ~b ∈ P_k. □
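As a concrete sanity check on Definition 7.4, the rewrite can be sketched on a toy tuple-based AST (the representation and names are our own; the substitution of the temporaries t_i for the x_i inside c1 is elided here):

```python
# Sketch of the labeled translation: a checked endorsement of x1..xn
# becomes n+1 direct endorsements, one for the check itself and one
# per variable inside the positive branch.

def translate(c):
    """Rewrite checked endorsements into direct ones (Definition 7.4, sketch)."""
    kind = c[0]
    if kind == "endorse_if":             # ("endorse_if", eta, [x1..xn], e, c1, c2)
        _, eta, xs, e, c1, c2 = c
        body = translate(c1)             # note: substitution [t_i/x_i] elided
        for i, x in reversed(list(enumerate(xs, 1))):
            body = ("seq",
                    ("assign", "t%d" % i,
                     ("endorse", "%s_%d" % (eta, i), ("var", x))),
                    body)
        # one extra direct endorsement for the result of the check
        return ("seq",
                ("assign", "t0", ("endorse", eta + "_0", e)),
                ("if", ("var", "t0"), body, translate(c2)))
    if kind == "seq":
        return ("seq", translate(c[1]), translate(c[2]))
    if kind == "if":
        return ("if", c[1], translate(c[2]), translate(c[3]))
    if kind == "while":
        return ("while", c[1], translate(c[2]))
    return c                             # all other commands translate to themselves
```

For a single-variable endorsement, the output is exactly the two-endorsement shape used in the lemma: t0 := endorse_η0(e); if t0 then t1 := endorse_η1(x); ... else ...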
Using Lemma 7.5 we can show the following proposition, which relates the security of the source and translated programs.

Proposition 7.6 (Relation of checked and direct endorsements). Given a program c[~•] that only uses checked endorsements, such that Γ, pc ⊢ c[~•], then c[~•] satisfies progress-insensitive robustness for checked endorsements if and only if ⟦c[~•]⟧ satisfies progress-insensitive robustness for direct endorsements.

Proof. Note that our translation preserves typing: when Γ, pc ⊢ c[~•], then Γ, pc ⊢ ⟦c[~•]⟧. Therefore, by Proposition 6.1 the translated program satisfies progress-insensitive robustness with endorsements. To show that the source program satisfies progress-insensitive robustness with checked endorsements, we use Lemma 7.5 and note that the corresponding sets of irrelevant attacks and control between any two runs of the programs must be in sync. □

Notes on the adequacy of the translation. We observe two facts about the adequacy of this translation. First, for ill-typed programs the relation does not hold. For instance, a program like

[•]; endorse(u) if u = u′ then low := declassify(u < h) else skip

does not satisfy Definition 7.3. However, the translation of this program satisfies Definition 5.3.

Second, observe that omitting the endorsement of the expression would lead to occlusion. Consider an alternative translation that endorses only the variables x₁, ..., x_n but not the result of the whole expression. Under such a translation, a program like

if u · 0 > 0 then skip else skip; trusted := x

is translated to

temp := x; if u · 0 > 0 then skip else skip; trusted := x

However, while the first program does not satisfy Definition 7.3, the second program is accepted by Definition 5.3.

8.
Attacker impact

In prior work, robustness controls the attacker's ability to cause information release. In the presence of endorsement, the attacker's ability to influence trusted locations also becomes an important security issue. To capture this influence, we introduce an integrity dual to attacker knowledge, called attacker impact. Similarly to low events, we define trusted events as assignments to trusted variables, together with termination.

Definition 8.1 (Attacker impact). Given a program c[~•], a memory m, and trusted events ~t_T, define p(c[~•], m, ~t_T) to be the set of attacks ~a that match the trusted events ~t_T:

p(c[~•], m, ~t_T) ≜ { ~a | ⟨c[~a], m⟩ →*_{~t′} ∧ ~t_T = ~t′_T }

Attacker impact is defined with respect to a given sequence of trusted events ~t_T, starting in memory m, for program c[~•]. The impact is the set of all attacks that agree with ~t_T in their footprint on trusted variables. Intuitively, a smaller attacker-impact set means that the attacker has greater power to influence trusted events.

Similarly to progress knowledge, we define progress impact, characterizing which attacks lead to one more trusted event. This then allows us to define robustness conditions for integrity, which have not previously been identified.

Definition 8.2 (Progress impact). Given a program c[~•], a memory m, and a sequence of trusted events ~t_T, define the progress impact p→(c[~•], m, ~t_T) as

p→(c[~•], m, ~t_T) ≜ { ~a | ⟨c[~a], m⟩ →*_{~t′} ⟨c′, m′⟩ ∧ ~t_T = ~t′_T ∧ ⟨c′, m′⟩ →*_{t″} for some trusted event t″ }

The intuition behind baseline robustness for integrity is that the attacker should not influence trusted data. This is similar to noninterference for integrity (modulo availability attacks, which have not been explored in this context before).
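For a finite attack space, the impact set of Definition 8.1 can be computed by brute force. The following Python sketch uses our own toy encoding (an "attack" is just a value, a run returns its event trace, and trusted events are tagged "T"); none of these names come from the paper.

```python
# Brute-force computation of attacker impact: collect the attacks whose
# runs leave exactly the given footprint on trusted events.

def impact(run, attacks, m, trusted_events):
    """p(c[~.], m, t_T): attacks matching the trusted-event sequence."""
    def trusted(trace):
        return [ev for ev in trace if ev[0] == "T"]
    return {a for a in attacks if trusted(run(a, dict(m))) == trusted_events}

def prog(attack, m):
    """Toy program [.]; t := endorse(u): the attack chooses u."""
    m["u"] = attack                      # untrusted write by the attack
    m["t"] = m["u"]                      # t := endorse(u), a trusted event
    return [("U", "u", m["u"]), ("T", "t", m["t"])]
```

Here impact(prog, {0, 1, 2}, {}, [("T", "t", 1)]) is the singleton {1}: only one attack matches the footprint, reflecting that the endorsement gives the attacker full influence over t (a smaller impact set means greater attacker power).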
However, unlike earlier work, we can easily extend the notion of integrity robustness to endorsements and checked endorsements.

Definition 8.3 (Progress-insensitive integrity robustness with endorsements). A program c[~•] satisfies progress-insensitive robustness for integrity if for all memories m, and for all traces ~t · t where t is a trusted event, we have

p→(c[~•], m, ~t_T) \ Φ(c[~•], m, ~t · t) ⊆ p(c[~•], m, ~t_T · t)

Irrelevant attacks are defined precisely as in Section 5. We omit the corresponding definitions for programs without endorsements and with checked endorsements.

1   [•]
2   endorse(guess, new_password)
3   if (declassify(guess == password))
4   then
5     password = new_password;
6     nfailed = 0;
7     ok = true;
8   else
9     nfailed = nfailed + 1;
10    ok = false;

Figure 11: Password update

1   [•]
2   endorse(req_time)
3   if (req_time <= now)
4   then
5     if (req_time >= embargo_time)
6     then return declassify(new_data)
7     else return old_data
8   else
9     return old_data

Figure 12: Accessing embargoed information

The type system of Section 6 also enforces integrity robustness with endorsements, rejecting insecure programs such as t := u and if (u₁) then t := endorse(u₂), but accepting t := endorse(u). Moreover, a connection between checked and direct endorsements, analogous to Proposition 7.6, holds for integrity robustness too.

9. Examples

Password update. Figure 11 shows code for updating a password. The attacker controls the variables guess, of level (P, U), and new_password, of level (S, U). The variable password has level (S, T), and the variables nfailed and ok have level (P, T). The declassification on line 3 uses the untrusted variable guess. This variable, however, is listed in the endorse clause on line 2; therefore, the declassification is accepted.
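At runtime, declassify and endorse are identities, so the checked update of Figure 11 boils down to the following plain Python rendering (ours, for illustration; the security labels exist only in the type system):

```python
# Figure 11 without labels: update the password on a correct guess,
# count failures otherwise.

def update_password(state, guess, new_password):
    if guess == state["password"]:       # line 3: declassified, endorsed check
        state["password"] = new_password # line 5: endorsed new_password
        state["nfailed"] = 0
        state["ok"] = True
    else:
        state["nfailed"] += 1
        state["ok"] = False
    return state
```

The point of the typed version is that this very control flow is only accepted because guess and new_password appear in the endorse clause; the runtime behavior is unchanged.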
The initially untrusted variable new_password has to be endorsed to update the password on line 5. The example also shows how other trusted variables, nfailed and ok, can be updated in the then and else branches.

Data sanitization. Figure 12 shows an annotated version of the code from the introduction, in which some information (new_data) is not allowed to be released until time embargo_time. The attacker-controlled variable is req_time, of level (P, U), and new_data has level (S, T). The checked endorse ensures that the attacker cannot violate the integrity of the test req_time >= embargo_time. (The variable now is high-integrity and contains the current time.) Without the checked endorse, the release of new_data would not be permitted, either semantically or by the type system.

10. Related work

Prior robustness definitions [16, 10], based on equivalence of low traces, do not differentiate between programs such as

[•]; low := u < h; low′ := h  and  [•]; low′ := h; low := u < h

Per the dimensions of information release [24], the new security conditions cover not only the "who" dimension, but are also sensitive to "where" information release happens. Also, the security condition of robustness with endorsement does not suffer from the occlusion problems of qualified robustness. Balliu and Mastroeni [6] derive sufficient conditions for robustness using weakest-precondition semantics. These conditions are not precise enough to distinguish the examples above and, moreover, do not support endorsement.

Prior work on robustness semantics defines termination-insensitive security conditions [16, 6]. Because the new framework is powerful enough to capture the security of programs with intermediate observable events, it can describe the robustness of nonterminating programs.
Prior work on qualified robustness [16] uses a non-standard scrambling semantics in which qualified robustness unfortunately becomes a possibilistic condition, leading to anomalies such as the reachability of dead code. The new framework avoids such artifacts because it uses a standard, deterministic semantics.

Checked endorsement was introduced informally in the Swift web application framework [9] as a convenient way to implement complex security policies. The current paper is the first to formalize and to study the properties of checked endorsement.

Our semantic framework is based on the definition of attacker knowledge, developed in prior work introducing gradual release [1]. Attacker knowledge is used for expressing confidentiality policies in recent work [7, 3, 2, 8]. However, none of this work considers integrity; applying attacker-centric reasoning to integrity policies is novel.

11. Conclusion

We have introduced a new knowledge-based framework for semantic security conditions for information security with declassification and endorsement. A key technical innovation is characterizing the impact and control of the attacker over information in terms of sets of similar attacks. Using this framework, we can express semantic conditions that more precisely characterize the security offered by a security type system, and derive a satisfactory account of new language features such as checked endorsement.

References

[1] A. Askarov and A. Sabelfeld. Gradual release: Unifying declassification, encryption and key release policies. In Proc. IEEE Symp. on Security and Privacy, pages 207–221, May 2007.
[2] A. Askarov and A. Sabelfeld. Tight enforcement of information-release policies for dynamic languages. In Proc. IEEE Computer Security Foundations Symposium, July 2009.
[3] A. Askarov, S. Hunt, A. Sabelfeld, and D. Sands.
Termination-insensitive noninterference leaks more than just a bit. In ESORICS, pages 333–348, October 2008.
[4] A. Askarov and A. C. Myers. A semantic framework for declassification and endorsement. In Proc. 19th European Symp. on Programming (ESOP'10), March 2010.
[5] A. Askarov and A. Sabelfeld. Security-typed languages for implementation of cryptographic protocols: A case study. In Proc. 10th European Symposium on Research in Computer Security (ESORICS), number 3679 in Lecture Notes in Computer Science. Springer-Verlag, September 2005.
[6] M. Balliu and I. Mastroeni. A weakest precondition approach to active attacks analysis. In PLAS '09: Proc. of the ACM SIGPLAN Fourth Workshop on Programming Languages and Analysis for Security, pages 59–71. ACM, 2009.
[7] A. Banerjee, D. Naumann, and S. Rosenberg. Expressive declassification policies and modular static enforcement. In Proc. IEEE Symp. on Security and Privacy, pages 339–353, May 2008.
[8] N. Broberg and D. Sands. Flow-sensitive semantics for dynamic information flow policies. In S. Chong and D. Naumann, editors, ACM SIGPLAN Fourth Workshop on Programming Languages and Analysis for Security (PLAS 2009), Dublin, June 15, 2009. ACM.
[9] S. Chong, J. Liu, A. C. Myers, X. Qi, K. Vikram, L. Zheng, and X. Zheng. Secure web applications via automatic partitioning. In Proc. SOSP 2007, pages 31–44, October 2007.
[10] S. Chong and A. C. Myers. Decentralized robustness. In CSFW '06: Proc. of the 19th IEEE Workshop on Computer Security Foundations, pages 242–256, Washington, DC, USA, 2006. IEEE Computer Society.
[11] M. R. Clarkson, S. Chong, and A. C. Myers. Civitas: Toward a secure voting system. In Proc. IEEE Symp. on Security and Privacy, pages 354–368, May 2008.
[12] P. Efstathopoulos, M. Krohn, S. VanDeBogart, C. Frey, D. Ziegler, E. Kohler, D. Mazières, F. Kaashoek, and R. Morris.
Lab els and even t pro cesses in the Asb estos op erating system. In Pro c. 20th ACM Symp. on Op er ating System Principles (SOSP) , Octob er 2005. [13] J. A. Goguen and J. Meseguer. Security p olicies and security mo dels. In Pr oc. IEEE Symp. on Se curity and Privacy , pages 11–20, April 1982. [14] M. D. McIlroy and J. A. Reeds. Multilev el securit y in the UNIX tradition. Softwar e—Pr actic e and Exp erienc e , 22(8):673–694, August 1992. [15] A. C. My ers. JFlo w: Practical mostly-static information flo w con trol. In Pr o c. ACM Symp. on Principles of Pr o gr amming L anguages , pages 228–241, January 1999. [16] A. C. Myers, A. Sabelfeld, and S. Zdancewic. Enforcing robust declassification and qualified robustness. J. Computer Se curity , 14(2):157–196, May 2006. [17] A. C. My ers and B. Lisko v. A decen tralized mo del for information flow control. In Pr o c. 17th ACM Symp. on Op er ating System Principles (SOSP) , pages 129–142, Saint-Malo, F rance, 1997. [18] A. C. Myers, L. Zheng, S. Zdancewic, S. Chong, and N. Nystrom. Jif 3.0: Jav a information flow. Soft ware release, http://www.cs.cornell.edu/jif , Jul y 2006. [19] K. O’Neill, M. Clarkson, and S. Chong. Information-flo w security for in teractiv e programs. In Pr o c. IEEE Computer Se curity F oundations W orkshop , pages 190–201, July 2006. [20] P . Ørbæk and J. Palsberg. T rust in the λ -calculus. J. F unctional Pro gr amming , 7(6):557–591, 1997. [21] F. P ottier and S. Conchon. Information flow inference for free. In Pr oc. 5th ACM SIGPLAN Interna- tional Confer enc e on F unctional Pr o gr amming (ICFP) , pages 46–57, 2000. [22] A. W. Roscoe. Csp and determinism in security mo deling. In Pr o c. IEEE Symp osium on Se curity and Privacy , 1995. [23] A. Sabelfeld and A. C. My ers. Language-based information-flow security . IEEE J. Sele cte d Ar e as in Communic ations , 21(1):5–19, January 2003. [24] A. Sab elfeld and D. Sands. Declassification: Dimensions and principles. J. 
Computer Se curity , 2009. [25] G. Smith and D. V olpano. Secure information flow in a multi-threaded imp erative language. In Pr o c. 25th ACM Symp. on Principles of Pr o gr amming L anguages (POPL) , pages 355–364, San Diego, California, Jan uary 1998. [26] D. V olpano and G. Smith. Probabilistic noninterference in a concurrent language. In Pr o c. IEEE Com- puter Se curity F oundations W orkshop , pages 34–43, June 1998. [27] D. V olpano, G. Smith, and C. Irvine. A sound type system for secure flow analysis. J. Computer Se curity , 4(3):167–187, 1996. [28] S. Zdancewic and A. C. Myers. Robust declassification. In Pr o c. 14th IEEE Computer Se curity F oun- dations W orkshop , pages 15–23, June 2001. [29] S. Zdancewic, L. Zheng, N. Nystrom, and A. C. Myers. Secure program partitioning. ACM T r ansactions on Computer Systems , 20(3):283–328, August 2002. [30] N. Zeldo vich, S. Boyd-Wic kizer, and D. Mazières. Securing distributed systems with information flow con trol. In Pr o c. 5th USENIX Symp osium on Networke d Systems Design and Implementation (NSDI) , pages 293–308, 2008. A c knowledgmen ts The authors w ould like to thank the anon ymous review ers for commen ts on a draft of this pap er. W e also thank Owen Arden, Stephen Chong, Mic hael Clarkson, Daniel Hedin, Andrei Sab elfeld, and Danfeng Zhang for useful discussions. This work was supp orted by a grant from the Office of Nav al Researc h (N000140910652) and by tw o NSF grants (the TR UST center, 0424422; and 0964409). The U.S. Gov ernment is authorized to repro duce and distribute reprints for Go vernmen t purp oses, notwithstanding an y copyrigh t annotation thereon. The views and conclusions contained herein are those of the authors and should not b e in terpreted as necessarily representing the official p olicies A TT A CKER CONTROL AND IMP ACT FOR CONFIDENTIALITY AND INTEGRITY 33 or endorsement, either expressed or implied, of any of the funding agencies or of the U.S. Go v ernmen t. 
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or Eisenacher Strasse 2, 10777 Berlin, Germany.
