On Secure Distributed Implementations of Dynamic Access Control


Avik Chaudhuri
University of California at Santa Cruz
avik@cs.ucsc.edu

Abstract

Distributed implementations of access control abound in distributed storage protocols. While such implementations are often accompanied by informal justifications of their correctness, our formal analysis reveals that their correctness can be tricky. In particular, we discover several subtleties in a state-of-the-art implementation based on capabilities that can undermine correctness under a simple specification of access control. We consider both safety and security for correctness; loosely, safety requires that an implementation does not introduce unspecified behaviors, and security requires that an implementation preserves the specified behavioral equivalences. We show that a secure implementation of a static access policy already requires some care in order to prevent unspecified leaks of information about the access policy. A dynamic access policy causes further problems. For instance, if accesses can be dynamically granted then the implementation does not remain secure: it leaks information about the access policy. If accesses can be dynamically revoked then the implementation does not even remain safe. We show that a safe implementation is possible if a clock is introduced in the implementation. A secure implementation is possible if the specification is accordingly generalized.

Our analysis shows how a distributed implementation can be systematically designed from a specification, guided by precise formal goals. While our results are based on formal criteria, we show how violations of each of those criteria can lead to real attacks. We distill the key ideas behind those attacks and propose corrections in terms of useful design principles. We show that other stateful computations can be distributed just as well using those principles.
1 Introduction

In most file systems, protection relies on access control. Usually the access checks are local: the file system maintains an access policy that specifies which principals may access which files, and any access to a file is guarded by a local check that enforces the policy for that file. In recent file systems, however, the access checks are distributed, and access control is implemented via cryptographic techniques. In this paper, we try to understand the extent to which these distributed implementations of access control preserve the simple character of local access checks.

We focus on implementations that appear in file systems based on networked storage [13]. In such systems, access control and storage are parallelized to improve performance. Execution requests are served by storage servers; such requests are guided by access requests that are served elsewhere by access-control servers. When a user requests access to a file, an access-control server certifies the access decision for that file by providing the user with an unforgeable capability. Any subsequent execution request carries that capability as proof of access; a storage server can efficiently verify that the capability is authentic and serve the execution request.

We formally study the correctness of these implementations vis-à-vis a simple specification of local access control. Implementing static access policies already requires some care in this setting; dynamic access policies cause further problems that require considerable analysis. We study these cases separately in Sections 2 and 3. Based on our analysis, we develop formal models and proofs for an implementation of arbitrary access policies in Section 6.
We consider both safety and security for correctness; loosely, safety requires that an implementation does not introduce unspecified behaviors, and security requires that an implementation preserves the specified behavioral equivalences. Our proofs of safety and security are built modularly by showing simulations; we develop the necessary definitions and proof techniques in Section 4.

Our analysis shows how a distributed implementation can be systematically designed from a specification, guided by precise formal goals. We justify those goals by showing how their violations can lead to real attacks (Sections 2 and 3). Further, we distill the key ideas behind those attacks and propose corrections in terms of useful design principles. We show that other stateful computations can be distributed just as well using those principles (Section 7).

Comparison with related work

This paper culminates a line of work that we begin in [10] and continue in [11]. In [10], we show how to securely implement static access policies with capabilities; in [11], we present a safe (but not secure) implementation of dynamic access policies in that setting. In this paper, we carefully review those results, and systematically analyze the difficulties that arise for security in the case of dynamic access policies. Our analysis leads us to develop variants of the implementation in [11] that we can prove secure with appropriate assumptions. The proofs are built by a new, instructive technique, which may be of independent interest.

Further, guided by our analysis of access control, we show how to automatically derive secure distributed implementations of other stateful computations. This approach is reminiscent of secure program partitioning [22].

Access control for networked storage has been studied in lesser detail by Gobioff [13] using belief logics, and by Halevi et al. [15] using universal composability [9].
The techniques used in this paper are similar to those used by Abadi et al. for secure implementation of channel abstractions [2] and authentication primitives [3], and by Maffeis to study the equivalence of communication patterns in distributed query systems [17]. These techniques rely on programming-language concepts, including testing equivalence [21] and full abstraction [19, 1]. A huge body of such techniques has been developed for formal specification and verification of systems.

We do not consider access control for untrusted storage [16] in this paper. In file systems based on untrusted storage, files are cryptographically secured before storage, and their access keys are managed and shared by users. As such, untrusted storage is quite similar to public communication, and standard techniques for secure communication on public networks apply for secure storage in this setting. Related work in that area includes formal analysis of protocols for secure file sharing on untrusted storage [18, 8], as well as correctness proofs for the cryptographic techniques involved in such protocols [7, 12, 6].

2 Review: the case of static access policies

To warm up, let us focus on implementing access policies that are static. In this case, a secure implementation already appears in [10]. Below we systematically reconstruct that implementation, focusing on a detailed analysis of its correctness. This analysis allows us to distill some basic design principles, marked with bold R, in preparation for later sections, where we consider the more difficult problem of implementing dynamic access policies.

Consider the following protocol, NS^s, for networked storage.¹ Principals include users U, V, W, ..., an access-control server A, and a storage server S. We assume that A maintains a (static) access policy F and S maintains a store ρ.
Access decisions under F follow the relation F ⊢ U op over users U and operations op. Execution of an operation op under ρ follows the relation ρ⟦op⟧ ⇓ ρ′⟦r⟧ over next stores ρ′ and results r. Let K_AS be a secret key shared by A and S, and mac be a function over messages and keys that produces unforgeable message authentication codes (MACs) [14]. We assume that MACs can be decoded to retrieve their messages. (Usually MACs are explicitly paired with their messages, so that the decoding is trivial.)

(1) U → A : op
(2) A → U : mac(op, K_AS) if F ⊢ U op
(2′) A → U : error otherwise
(3) V → S : κ
(4) S → V : r if κ = mac(op, K_AS) and ρ⟦op⟧ ⇓ ρ′⟦r⟧
(4′) S → V : error otherwise

Here a user U requests A for access to an operation op, and A returns a capability for op only if F specifies that U may access op. Elsewhere, a user V requests S to execute an operation by sending a capability κ, and S executes the operation only if κ authorizes access to that operation.

What does "safety" or "security" mean in this setting? A reasonable specification of correctness is the following trivial protocol, IS^s, for ideal storage. Here principals include users U, V, W, ... and a server D. The access policy F and the store ρ are both maintained by D; the access and execution relations remain as above. There is no cryptography.

(i) V → D : op
(ii) D → V : r if F ⊢ V op and ρ⟦op⟧ ⇓ ρ′⟦r⟧
(ii′) D → V : error otherwise

Here a user V requests D to execute an operation op, and D executes op only if F specifies that V may access op. This trivial protocol is correct "by definition"; so if NS^s implements this protocol, it is correct as well.

What notions of implementation correctness are appropriate here? A basic criterion is that of safety [4].

¹ By convention, we use superscripts s and d to denote "static" and "dynamic", and superscripts + and − to denote "extension" and "restriction".
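As a concrete aside, the NS^s exchange above can be rendered as a small Python sketch, modeling mac with HMAC-SHA256. The function names, the policy encoding as a set of pairs, and the store encoding as a dict are all illustrative, not part of the protocol.

```python
import hmac, hashlib

K_AS = b"secret shared by A and S"  # shared key (illustrative value)

def mac(message: bytes, key: bytes) -> bytes:
    """Unforgeable MAC; we pair the tag with its message so it can be decoded."""
    return message + b"|" + hmac.new(key, message, hashlib.sha256).digest()

# Access-control server A: policy F as a set of (user, operation) pairs.
F = {("U", "read")}

def access_request(user, op):                       # messages (1), (2), (2')
    if (user, op) in F:
        return mac(op.encode(), K_AS)               # capability
    return "error"

# Storage server S: store rho, execution modeled as a dict lookup.
rho = {"read": "contents"}

def execution_request(kappa):                       # messages (3), (4), (4')
    msg, _, _tag = kappa.partition(b"|")
    if hmac.compare_digest(kappa, mac(msg, K_AS)):  # capability is authentic
        return rho.get(msg.decode(), "error")
    return "error"

cap = access_request("U", "read")
print(execution_request(cap))   # the holder of the capability gets the result
```

Note that S never consults F: the access check happened at A, and the MAC alone convinces S, which is exactly the parallelization the protocol is after.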
Definition 1 (Safety). Under any context (adversary), the behaviors of a safe implementation are included in the behaviors of the specification.

In practice, a suitable notion of inclusion may need to be crafted to accommodate specific implementation behaviors by design (such as those due to messages (1), (2), and (2′) in NS^s). Typically, those behaviors can be eliminated by a specific context (called a "wrapper"), and safety may be defined modulo that context as long as other, interesting behaviors are not eliminated.

Still, safety only implies the preservation of certain trace properties. A more powerful criterion may be derived from the programming-languages concept of semantics preservation, otherwise known as full abstraction [19, 1].

Definition 2 (Security). A secure implementation preserves behavioral equivalences of the specification.

In this paper, we tie security to an appropriate may-testing congruence [21]. We consider a protocol instance to include the file system and some code run by "honest" users, and assume that an arbitrary context colludes with the remaining "dishonest" users. From any NS^s instance, we derive its IS^s instance by an appropriate refinement map [4]. If NS^s securely implements IS^s, then for all NS^s instances Q1 and Q2, Q1 and Q2 are congruent if their IS^s instances are congruent.

Security implies safety for all practical purposes, so a safety counterexample usually suffices to break security. For instance, we are in trouble if operations that cannot be executed in IS^s can somehow be executed in NS^s by manipulating capabilities. Suppose that F ⊬ V op for all dishonest V. Then no such V can execute op in IS^s. Now suppose that some such V requests execution of op in NS^s. We know that op is executed only if V shows a capability κ for op.
Since κ cannot be forged, it must be obtained from A by some honest U that satisfies F ⊢ U op. Therefore:

R1 Capabilities obtained by honest users must not be shared with dishonest users.

(However, U can still share κ with honest users, and any execution request with κ can then be reproduced in the specification as an execution request by U.)

While (R1) prevents explicit leaking of capabilities, we in fact require that capabilities do not leak any information that is not available to IS^s contexts. Information may also be leaked implicitly (by observable effects). Therefore:

R2 Capabilities obtained by honest users must not be examined or compared.

Both (R1) and (R2) may be enforced by typechecking the code run by honest users.

Finally, we require that information is not leaked via capabilities obtained by dishonest users. (Recall that such capabilities are already available to the adversary.) Unfortunately, a capability for an operation op is provided only to those users who have access to op under F; in other words, A leaks information on F whenever it returns a capability! This leak breaks security. Why? Consider implementation instances Q1 and Q2 with op as the only operation, whose execution returns error and may be observed only by honest users; suppose that a dishonest user has access to op in Q1 but not in Q2. Then Q1 and Q2 can be distinguished by a context that requests a capability for op (a capability will be returned in Q1 but not in Q2), but their specification instances cannot be distinguished by any context.

Why does this leak concern us? After all, we expect that executing an operation should eventually leak some information about access to that operation, since otherwise, having access to that operation is useless!
However, the leak here is premature; it allows a dishonest user to obtain information about its access to op in an undetectable way, without having to request execution of op. To prevent this leak, we must modify the protocol:

R3 "Fake" capabilities for op must be returned to users who do not have access to op.

The point is that it should not be possible to distinguish the fake capabilities from the real ones prematurely. Let K′_AS be another secret key shared by A and S. As a preliminary fix, let us modify the following message in NS^s.

(2′) A → U : mac(op, K′_AS) if F ⊬ U op

Unfortunately, this modification is not enough, since the adversary can still compare capabilities that are obtained by different users for a particular operation op, to know if their accesses to op are the same under F. To prevent this leak:

R4 Capabilities for different users must be different.

For instance, a capability can mention the user whose access it authenticates. Making the meaning of a message explicit in its content is a good design principle for security [5], and we use it on several occasions in this paper. Accordingly, we modify the following messages in NS^s.

(2) A → U : mac(⟨U, op⟩, K_AS) if F ⊢ U op
(2′) A → U : mac(⟨U, op⟩, K′_AS) otherwise
(4) S → V : r if κ = mac(⟨·, op⟩, K_AS) and ρ⟦op⟧ ⇓ ρ′⟦r⟧

(On receiving a capability κ from V, S still does not care whether V is the user to which κ is issued, even if that information can now be obtained from κ.)

The following result can then be proved (cf. [10]).

Theorem 1. NS^s securely implements IS^s.

3 The case of dynamic access policies

We now consider the more difficult problem of implementing dynamic access policies. Let F be dynamic; the following protocol, NS^d, is obtained by adding administration messages to NS^s.
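Before moving on, here is a minimal Python sketch of (R3) and (R4), with mac modeled by HMAC-SHA256 and two illustrative keys: real and fake capabilities differ only in the signing key, so holders cannot tell them apart, while mentioning the user keeps capabilities of different users incomparable.

```python
import hmac, hashlib

K_AS  = b"real-capability key"    # used when access is granted
K2_AS = b"fake-capability key"    # used when access is denied (illustrative)

def mac(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

F = {("U", "read")}  # only U may read

def access_request(user: str, op: str) -> bytes:
    """R3: always return a capability; R4: it mentions the user."""
    payload = f"{user},{op}".encode()
    key = K_AS if (user, op) in F else K2_AS
    return mac(payload, key)

def is_real(user: str, op: str, kappa: bytes) -> bool:
    """Only A and S, who hold K_AS, can tell real from fake."""
    return hmac.compare_digest(kappa, mac(f"{user},{op}".encode(), K_AS))

cap_u = access_request("U", "read")
cap_v = access_request("V", "read")
assert len(cap_u) == len(cap_v) and cap_u != cap_v  # R4: per-user, incomparable
assert is_real("U", "read", cap_u) and not is_real("V", "read", cap_v)
```

A context holding cap_u and cap_v sees two equal-length, unrelated byte strings either way, which is the indistinguishability (R3) asks for.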
Execution of an administrative operation θ under F follows the relation F⟦θ⟧ ⇓ F′⟦r⟧ over next policies F′ and results r.

(5) W → A : θ
(6) A → W : r if F ⊢ W θ and F⟦θ⟧ ⇓ F′⟦r⟧
(6′) A → W : error otherwise

Here A executes θ (perhaps modifying F) if F specifies that W controls θ. The following protocol, IS^d, is obtained by adding similar messages to IS^s.

(iii) W → D : θ
(iv) D → W : r if F ⊢ W θ and F⟦θ⟧ ⇓ F′⟦r⟧
(iv′) D → W : error otherwise

Unfortunately, NS^d does not remain secure with respect to IS^d. Consider the NS^d pseudocode below. Informally, acquire κ means "obtain a capability κ" and use κ means "request execution with κ"; chmod θ means "request access modification θ"; and success means "detect successful use of a capability". Here κ is a capability for an operation op and θ modifies access to op.

(t1) acquire κ; chmod θ; use κ; success
(t2) chmod θ; acquire κ; use κ; success

Now (t1) and (t2) map to the same IS^d pseudocode chmod θ; exec op; success (informally, exec op means "request execution of op"). Indeed, requesting execution with κ in NS^d amounts to requesting execution of op in IS^d, so the refinement map must erase instances of acquire and replace instances of use with the appropriate instances of exec. However, suppose that initially no user has access to op, and θ specifies that all users may access op. Then (t1) and (t2) can be distinguished by testing the event success. In (t1) κ does not authorize access to op, so success must be false; but in (t2) κ may authorize access to op, so success may be true.

Moreover, if revocation is possible, NS^d does not even remain safe with respect to IS^d! Why? Let θ specify that access to op is revoked for some user U, and revoked be the event that θ is executed (thus modifying the access policy). In IS^d, U cannot execute op after revoked.
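The (t1)/(t2) counterexample can be replayed in a toy Python model in which a capability freezes the access decision at acquisition time; all names and encodings are illustrative, not the paper's formal model.

```python
# Toy model of NS^d: a capability records the access decision at acquisition
# time, so acquiring before or after `chmod` yields observably different runs.

def make_system(initial_policy):
    state = {"policy": set(initial_policy)}
    def acquire(user, op):                  # capability = frozen decision
        return {"op": op, "ok": (user, op) in state["policy"]}
    def chmod(grant):                       # administrative operation θ
        state["policy"].add(grant)
    def use(kappa):                         # success iff capability is real
        return kappa["ok"]
    return acquire, chmod, use

grant = ("V", "read")   # θ: grant V access to `read`

# (t1): acquire; chmod θ; use
acquire, chmod, use = make_system(set())
kappa = acquire("V", "read"); chmod(grant)
t1_success = use(kappa)      # False: κ was acquired before the grant

# (t2): chmod θ; acquire; use
acquire, chmod, use = make_system(set())
chmod(grant); kappa = acquire("V", "read")
t2_success = use(kappa)      # True: κ was acquired after the grant

print(t1_success, t2_success)   # the two orders are distinguishable
```

In the ideal system both runs collapse to chmod θ; exec op; success, so no context can tell them apart there, which is exactly the mismatch the text describes.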
But in NS^d, U can execute op after revoked by using a capability that it acquires before revoked.

Safety in a special case

One way of eliminating the counterexample above is to make the following assumption:

A1 Accesses cannot be dynamically revoked.

We can then prove the following new result (see Section 6).

Theorem 2. NS^d safely implements IS^d assuming (A1).²

The key observation is that with (A1), a user U cannot access op until it can always access op, so U gains no advantage by acquiring capabilities early.

Safety in the general case

Safety breaks with revocation. However, we can recover safety by introducing time. Let A and S share a logical clock (or counter) that measures time, and let the same clock appear in D. We have that:

R5 Any capability that is produced at time Clk expires at time Clk + 1.

R6 Any administrative operation requested at time Clk is executed at the next clock tick (to time Clk + 1), so that policies in NS^d and IS^d may change only at clock ticks (and not between).

We call this arrangement a "midnight-shift scheme", since the underlying idea is the same as that of periodically shifting guards at a museum or a bank. Implementing this scheme is straightforward. For (R5), capabilities carry timestamps. For (R6), administrative operations are executed on an "accumulator" Ξ instead of F, and at every clock tick, F is updated to Ξ. Accordingly, we modify the following messages in NS^d to obtain the protocol NS^{d+}.

(2) A → U : mac(⟨U, op, Clk⟩, K_AS) if F ⊢ U op
(2′) A → U : mac(⟨U, op, Clk⟩, K′_AS) otherwise
(4) S → V : r if κ = mac(⟨·, op, Clk⟩, K_AS) and ρ⟦op⟧ ⇓ ρ′⟦r⟧
(6) A → W : r if F ⊢ W θ and Ξ⟦θ⟧ ⇓ Ξ′⟦r⟧

Likewise, we modify the following message in IS^d to obtain the protocol IS^{d+}.
(iv) D → W : r if F ⊢ W θ and Ξ⟦θ⟧ ⇓ Ξ′⟦r⟧

Now a capability that carries Clk as its timestamp certifies a particular access decision at the instant Clk: the meaning is made explicit in the content, which is good practice.

However, recall that MACs can be decoded to retrieve their messages. In particular, one can tell the time in NS^{d+} by decoding capabilities. Clearly, we require that:

R7 If it is possible to tell the time in NS^{d+}, it must also be possible to do so in IS^{d+}.

So we must make it possible to tell the time in IS^{d+}. (The alternative is to make it impossible to tell the time in NS^{d+}, by encrypting the timestamps carried by capabilities. Recall that the notion of "time" here is purely logical.) Accordingly, we add the following messages to IS^{d+}.

(v) U → D : ()
(vi) D → U : Clk

The following result can then be proved (cf. [11]).

² Some implementation details, such as (R3), are not required for safety.

Theorem 3. NS^{d+} safely implements IS^{d+}.

This result appears in [11]. Unfortunately, the definition of safety in [11] is rather non-standard. Moreover, beyond this result, security is not considered in [11]. In the rest of this section, we analyze the difficulties that arise for security, and present new results.

It turns out that there are several recipes to break security, and expiry of capabilities is a common ingredient. Clearly, using an expired capability has no counterpart in IS^{d+}. So:

R8 Any use of an expired capability must block (without any observable effect).

Indeed, security breaks without (R8). Consider the NS^{d+} pseudocode below. Informally, stale means "detect any use of an expired capability". Here κ is a capability for operation op.

(t3) acquire κ; use κ; stale

Without (R8), (t3) can be distinguished from a false event by testing the event stale.
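A toy Python model of the midnight-shift scheme may help picture how stale capabilities arise: capabilities carry the current clock value, administrative operations accumulate in Ξ, and each tick expires outstanding capabilities and installs Ξ as the new policy. All names are illustrative, and under (R8) the expired case below would have to block silently rather than return an observable result.

```python
class MidnightShift:
    """Illustrative model of (R5)/(R6): timestamped capabilities + accumulator."""

    def __init__(self, policy):
        self.clk = 0
        self.F = set(policy)       # policy used for access decisions
        self.Xi = set(policy)      # accumulator for pending admin operations

    def acquire(self, user, op):   # message (2): capability with timestamp
        return {"user": user, "op": op, "clk": self.clk,
                "ok": (user, op) in self.F}

    def chmod(self, change):       # message (6): executed on Xi, not on F
        kind, entry = change
        (self.Xi.add if kind == "grant" else self.Xi.discard)(entry)

    def tick(self):                # R6: the policy changes only at clock ticks
        self.clk += 1
        self.F = set(self.Xi)

    def use(self, kappa):          # R5: capabilities expire at the next tick
        if kappa["clk"] != self.clk:
            return "expired"       # the observable `stale` event; (R8) forbids it
        return "r" if kappa["ok"] else "error"

sys = MidnightShift({("U", "read")})
kappa = sys.acquire("U", "read")
sys.chmod(("revoke", ("U", "read")))   # revocation is pending, not yet applied
assert sys.use(kappa) == "r"           # still valid until the next tick
sys.tick()
assert sys.use(kappa) == "expired"     # the old capability cannot outlive the
                                       # revocation that the tick just applied
```

The last two assertions show why this recovers safety: no capability survives a policy change, because both expire and take effect at the same tick.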
But consider implementation instances Q1 and Q2 with op as the only operation, whose execution has no observable effect on the store; let Q1 run (t3) and Q2 run false. Since stale cannot be reproduced in the specification, it must map to false. So the specification instances of Q1 and Q2 run exec op; false and false. These instances cannot be distinguished.

Moreover, expiry of a capability yields the information that time has elapsed between the acquisition and use of that capability. We may expect that leaking this information is harmless; after all, the elapse of time can be trivially detected by inspecting timestamps. Then why should we care about such a leak? If the adversary knows that the clock has ticked at least once, it also knows that any pending administrative operations have been executed, possibly modifying the access policy. If this information is leaked in a way that cannot be reproduced in the specification, we are in trouble. Any such way allows the adversary to implicitly control the expiry of a capability before its use. (Explicit controls, such as comparison of timestamps, are not problematic, since they can be reproduced in the specification.)

For instance, consider the NS^{d+} pseudocode below. Here κ and κ′ are capabilities for operations op and op′, and θ modifies access to op.

(t4) acquire κ′; chmod θ; acquire κ; use κ; success; use κ′; success
(t5) chmod θ; acquire κ; use κ; success; acquire κ′; use κ′; success

Both (t4) and (t5) map to the same IS^{d+} pseudocode

chmod θ; exec op; success; exec op′; success

But suppose that initially no user has access to op and all users have access to op′, and θ specifies that all users may access op. The intermediate success event is true only if θ is executed; therefore it "forces" time to elapse for progress.
Now (t4) and (t5) can be distinguished by testing the final success event. In (t4) κ′ must be stale when used, so the event must be false; but in (t5) κ′ may be fresh when used, so the event may be true. Therefore, security breaks.

Security in a special case

One way of plugging such leaks is to consider that the elapse of time is altogether unobservable. (This prospect is not as shocking as it sounds, since here "time" is simply the value of a privately maintained counter.)

We expect that executing an operation has some observable effect. Now if initially a user does not have access to an operation, but that access can be dynamically granted, then the elapse of time can be detected by observing the effect of executing that operation. So we must assume that:

A2 Accesses cannot be dynamically granted.

On the other hand, we must allow accesses to be dynamically revoked, since otherwise the access policy becomes static. Now if initially a user has access to an operation, but that access can be dynamically revoked, then it is possible to detect the elapse of time if the failure to execute that operation is observable. So we must assume that:

A3 Any unsuccessful use of a capability blocks (without any observable effect).

Let us now try to adapt the counterexample above with (A2) and (A3). Suppose that initially all users have access to op and op′, and θ specifies that no user may access op. Consider the NS^{d+} pseudocode below. Informally, failure means "detect unsuccessful use of a capability".

(t6) acquire κ′; chmod θ; acquire κ; use κ; failure; use κ′; success
(t7) chmod θ; acquire κ; use κ; failure; acquire κ′; use κ′; success

Both (t6) and (t7) map to the same IS^{d+} pseudocode

chmod θ; exec op; failure; exec op′; success

Fortunately, now (t6) and (t7) cannot be distinguished, since the intermediate failure event cannot be observed if true.
(In contrast, recall that the intermediate success event in (t4) and (t5) forces a distinction between them.) Indeed, with (A2) and (A3) there remains no way to detect the elapse of time, except by comparing timestamps. To prevent the latter, we assume that:

A4 Timestamps are encrypted.

Let E_AS be a secret key shared by A and S. The encryption of a term M with E_AS under a random coin m is written as {m, M}_{E_AS}. We remove message (4′) and modify the following messages in NS^{d+} to obtain the protocol NS^{d−}. (Note that randomization takes care of (R4), so capabilities are not required to mention users here.)

(2) A → U : mac(⟨U, op, {m, Clk}_{E_AS}⟩, K_AS) if F ⊢ U op
(2′) A → U : mac(⟨U, op, {m′, Clk}_{E_AS}⟩, K′_AS) otherwise
(4) S → V : r if κ = mac(⟨·, op, T⟩, K_AS), T = {·, Clk}_{E_AS}, and ρ⟦op⟧ ⇓ ρ′⟦r⟧

Accordingly, we remove the messages (iv′), (v), and (vi) from IS^{d+} to obtain the protocol IS^{d−}. We can then prove the following new result (see Section 6):

Theorem 4. NS^{d−} securely implements IS^{d−} assuming (A2), (A3), and (A4).

The key observation is that with (A2), (A3), and (A4), time can stand still (so that capabilities never expire).

Security in the general case

More generally, we may consider plugging problematic leaks by static analysis. (Any such analysis must be incomplete because of the undecidability of the problem.) However, several complications arise in this case.

• The adversary can control the elapse of time by interacting with honest users in subtle ways. Such interactions lead to counterexamples of the same flavor as the one with (t4) and (t5) above, but are difficult to prevent statically without severely restricting the code run by honest users.
For instance, even if the suspicious-looking pseudocode chmod θ; acquire κ; use κ; success in (t4) and (t5) is replaced by an innocuous pair of inputs on a public channel c, the adversary can still run the same code in parallel and serialize it by a pair of outputs on c (which serve as "begin/end" signals).

• Even if we restrict the code run by honest users, such that every use of a capability can be serialized immediately after its acquisition, the adversary can still force time to elapse after a capability is sent to the file system and before it is examined. Unless we have a way to constrain this elapse of time, we are in trouble.

To see how the adversary can break security by interacting with honest users, consider the NS^{d+} pseudocode below. Here κ is a capability for operation op, and θ modifies access to op; further, c() and w⟨⟩ denote input and output on public channels c and w.

(t8) acquire κ; use κ; c(); chmod θ; c(); success; w⟨⟩
(t9) c(); c(); w⟨⟩

Although use κ immediately follows acquire κ in (t8), the delay between use κ and success can be detected by the adversary to force time to elapse between those events. Suppose that initially no user has access to op or op′, θ specifies that a honest user U may access op, and θ′ specifies that all users may access op′. Consider the following context. Here κ′₀ and κ′₁ are capabilities for op′.

c⟨⟩; acquire κ′₀; use κ′₀; failure; chmod θ′; acquire κ′₁; use κ′₁; success; c⟨⟩

This context forces time to elapse between a pair of outputs on c. The context can distinguish (t8) and (t9) by testing output on w: in (t8) κ does not authorize access to op, so success is false and there is no output on w; on the other hand, in (t9) there is. Security breaks as a consequence.
Consider implementation instances Q1 and Q2 with U as the only honest user and op and op′ as the only operations, such that only U can detect execution of op and all users can detect execution of op′; let Q1 run (t8) and Q2 run (t9). The specification instances of Q1 and Q2 run exec op; c(); chmod θ; c(); success; w⟨⟩ and c(); c(); w⟨⟩, which cannot be distinguished: the execution of op can always be delayed until θ is executed, so that success is true and there is an output on w.

Intuitively, an execution request in NS^{d+} commits to a time bound (specified by the timestamp of the capability used for the request) within which that request must be processed for progress; but operation requests in IS^{d+} make no such commitment. To solve this problem, we must assume that:

A5 In IS^{d+} a time bound is specified for every operation request, so that the request is dropped if it is not processed within that time bound.

Usual (unrestricted) requests now carry a time bound ∞. Accordingly, we modify the following messages in IS^{d+}.

(i) V → D : (op, T)
(ii) D → V : r if Clk ≤ T, F ⊢ V op, and ρ⟦op⟧ ⇓ ρ′⟦r⟧

With (A5), using an expired capability now has a counterpart in IS^{d+}. Informally, if a capability for an operation op is produced at time T in NS^{d+}, then any use of that capability in NS^{d+} maps to an execution request for op in IS^{d+} with time bound T. There remains no fundamental difference between NS^{d+} and IS^{d+}. We can then prove our main new result (see Section 6):

Theorem 5 (Main theorem). NS^{d+} securely implements IS^{d+} assuming (A5).³

Fortunately, (A5) seems to be a reasonable requirement, and we impose that requirement implicitly in the sequel.

Discussion

Let us now revisit the principles developed in Sections 2 and 3, and discuss some alternatives.
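Before revisiting those principles, here is a small Python sketch of (A5): the ideal server drops any request whose time bound has passed, which is what gives expired capabilities a counterpart in the specification. The class and method names are illustrative.

```python
import math

class IdealServer:
    """Illustrative model of IS^{d+} under (A5): time-bounded requests."""

    def __init__(self, policy, store):
        self.clk, self.F, self.rho = 0, set(policy), dict(store)

    def tick(self):
        self.clk += 1

    def request(self, user, op, bound=math.inf):   # message (i): (op, T)
        if self.clk > bound:                       # dropped: processed too late
            return None
        if (user, op) in self.F:                   # message (ii)
            return self.rho.get(op, "error")
        return "error"

D = IdealServer({("U", "read")}, {"read": "contents"})
bound = D.clk              # the bound a capability minted now would carry
assert D.request("U", "read", bound) == "contents"
D.tick()                   # the clock ticks before the request is processed
assert D.request("U", "read", bound) is None      # dropped, like a stale κ
assert D.request("U", "read") == "contents"       # unrestricted: bound = ∞
```

The default bound of ∞ corresponds to the "usual (unrestricted) requests" in the text.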
³ This result holds with or without (R8).

First recall (R3), where we introduce fake capabilities to prevent premature leaks of information about the access policy F. It is reasonable to consider that we do not care about such leaks, and wish to keep the original message (2′) in NS^s. But then we must allow those leaks in the specification. For instance, we can make F public. More practically, we can add messages to IS^s that allow a user to know whether it has access to a particular operation.

Next recall (R5) and (R6), where we introduce the midnight-shift scheme. This scheme can be relaxed to allow different capabilities to expire after different intervals, so long as administrative operations that affect their correctness are not executed before those intervals elapse. Let delay be a function over users U, operations op, and clock values Clk that produces time intervals. We may have that:

R5 Any capability for U and op that is produced at time Clk expires at time Clk + delay(U, op, Clk).

R6 If an administrative operation affects the access decision for U and op and is requested in the interval Clk, ..., Clk + delay(U, op, Clk) − 1, it is executed at the clock tick to time Clk + delay(U, op, Clk).

This scheme remains sound, since any capability for U and op that is produced at Clk and expires at Clk + delay(U, op, Clk) certifies a correct access decision for U and op between Clk, ..., Clk + delay(U, op, Clk) − 1.

Finally, the implementation details in Sections 2 and 3 are far from unique. Guided by the same underlying principles, we can design capabilities in various other ways. For instance, we may have an implementation that does not require K′_AS: any capability is of the form mac(⟨⟨U, op, Clk⟩, {m, L}_{E_AS}⟩, K_AS), where m is a fresh nonce and L is the predicate F ⊢ U op.
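The alternative capability format just described can be sketched in Python, modeling the encryption {m, L}_{E_AS} with a PRF-based stream cipher (HMAC as the PRF); the keys, field lengths, and helper names are all illustrative.

```python
import hmac, hashlib, secrets

K_AS = b"mac key shared by A and S"          # illustrative keys
E_AS = b"encryption key shared by A and S"

def prf(key: bytes, m: bytes, n: int) -> bytes:
    return hmac.new(key, m, hashlib.sha256).digest()[:n]

def enc(label: bytes, key: bytes) -> bytes:
    """Randomized encryption {m, L}: fresh 16-byte coin m + PRF stream cipher."""
    m = secrets.token_bytes(16)
    return m + bytes(a ^ b for a, b in zip(label, prf(key, m, len(label))))

def dec(blob: bytes, key: bytes) -> bytes:
    m, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, prf(key, m, len(ct))))

def issue(user: str, op: str, clk: int, allowed: bool) -> bytes:
    """Capability mac(<<U, op, Clk>, {m, L}_E_AS>, K_AS): the access decision L
    is explicit in the content, but readable only by A and S."""
    body = f"{user},{op},{clk}".encode() + b"|" + enc(b"1" if allowed else b"0", E_AS)
    return body + hmac.new(K_AS, body, hashlib.sha256).digest()

def verify(cap: bytes) -> bool:
    body, tag = cap[:-32], cap[-32:]             # tag is a fixed-size suffix
    if not hmac.compare_digest(tag, hmac.new(K_AS, body, hashlib.sha256).digest()):
        return False
    return dec(body[-17:], E_AS) == b"1"         # last 17 bytes: the {m, L} blob

real = issue("U", "read", 0, True)
fake = issue("V", "read", 0, False)
assert verify(real) and not verify(fake)   # only A and S can tell them apart
```

A single signing key suffices here because the decision L, not the choice of key, encodes whether access was granted, which is what makes the design work for non-boolean decisions as discussed next.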
Although this design involves more cryptography than the one in NS_d+, it reflects better practice: the access decision for U and op under F is explicit in the content of any capability that certifies that decision. What does this design buy us? Consider applications where the access decision is not a boolean, but a label, a decision tree, or some arbitrary data structure. The design in NS_d+ requires a different signing key for each value of the access decision. Since the number of such keys may be infinite, verification of capabilities becomes very inefficient. The design above is appropriate for such applications, and we develop it further in Section 7.

4 Definitions and proof techniques

Let us now develop formal definitions and proof techniques for security and safety; these serve as background for Section 6, where we present formal models and proofs for security and safety of NS_d+ with respect to IS_d+.

Let ⊑ be a precongruence on processes and ≃ be the associated congruence. A process P under a context ϕ is written as ϕ[P]. Contexts act as tests for behaviors, and P ⊑ Q means that any test that is passed by P is passed by Q; in other words, "P has no more behaviors than Q".

We describe an implementation as a binary relation R over processes, which relates specification instances to implementation instances. This relation conveniently generalizes a refinement map [4].

Definition 3 (Full abstraction). An implementation R is fully abstract if it satisfies:

(PRESERVATION) ∀(P, Q) ∈ R. ∀(P′, Q′) ∈ R. P ⊑ P′ ⇒ Q ⊑ Q′
(REFLECTION) ∀(P, Q) ∈ R. ∀(P′, Q′) ∈ R. Q ⊑ Q′ ⇒ P ⊑ P′

(PRESERVATION) and (REFLECTION) are respectively soundness and completeness of the implementation under ⊑. Security only requires soundness.

Definition 4 (cf. Definition 2 [Security]).
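The single-key design above, which carries an arbitrary access decision inside the capability itself, can be sketched as follows. This is an illustrative model only: HMAC stands in for mac, and seal stands in for the encrypted component {m, L}_{E_AS} (a real implementation would encrypt; here the payload is merely packaged, since the sketch is stdlib-only).

```python
# Sketch of a self-certifying capability: one key K_AS, with the
# (possibly non-boolean) decision L carried inside the capability.
import hmac, hashlib, json, os

K_AS = os.urandom(32)  # the single signing key

def seal(m, L):
    # stands in for {m, L}_{E_AS}; a real implementation would encrypt
    return json.dumps({"m": m.hex(), "L": L}).encode()

def make_capability(user, op, clk, L):
    m = os.urandom(16)  # fresh nonce
    body = json.dumps([user, op, clk]).encode() + b"|" + seal(m, L)
    tag = hmac.new(K_AS, body, hashlib.sha256).digest()
    return body, tag

def verify(cap):
    body, tag = cap
    expected = hmac.new(K_AS, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or damaged capability
    # the decision L is explicit in the capability's content
    return json.loads(body.split(b"|", 1)[1])["L"]
```

Because L can be any serializable value (a label, a tree, …), one key suffices no matter how many decision values there are, which is precisely what the per-decision-key design of NS_d+ cannot offer.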
An implementation is secure if it satisfies (PRESERVATION).

Intuitively, a secure implementation does not introduce any interesting behaviors: if (P, Q) and (P′, Q′) are in a secure R and P has no more behaviors than P′, then Q has no more behaviors than Q′. A fully abstract implementation moreover does not eliminate any interesting behaviors. Any subset of a secure implementation is secure. Security implies preservation of ≃. Finally, testing itself is trivially secure since ⊑ is closed under any context.

Proposition 6. Let ϕ be any context. Then {(P, ϕ[P]) | P ∈ W} is secure for any set of processes W.

On the other hand, a context may eliminate some interesting behaviors by acting as a test for those behaviors. A fully abstract context does not; it merely translates behaviors.

Definition 5 (Fully abstract context). A context ϕ is fully abstract for a set of processes W if {(P, ϕ[P]) | P ∈ W} is fully abstract.

A fully abstract context can be used as a wrapper to account for any benign differences between the implementation and the specification. An implementation is safe if it does not introduce any behaviors modulo such a wrapper.

Definition 6 (cf. Definition 1 [Safety]). An implementation R is safe if there exists a fully abstract context φ for the set of specification instances such that R satisfies:

(INCLUSION) ∀(P, Q) ∈ R. Q ⊑ φ[P]

Let us see why φ must be fully abstract in the definition. Suppose that it is not. Then for some P and P′ we have φ[P] ⊑ φ[P′] and P ⋢ P′. Intuitively, φ "covers up" the behaviors of P that are not included in the behaviors of P′. Unfortunately, those behaviors may be unsafe. For instance, let P′ be a pi calculus process [20] that does not contain public channels, and {P′} be the set of specification instances; we consider any output on a public channel to be unsafe.
Let c be a public channel; let P = c⟨⟩; P′ and φ = • | !c⟨⟩. Then P ⋢ P′ and φ[P] ⊑ φ[P′], as required. But clearly P is unsafe by our assumptions; yet P ⊑ φ[P′], so that by definition {(P′, P)} is safe! The definition therefore becomes meaningless.

We now present some proof techniques. A direct proof of security requires mappings between subsets of ⊑. Those mappings may be difficult to define and manipulate. Instead a security proof may be built modularly by showing simulations, as in a safety proof. Such a proof requires simpler mappings between processes.

Proposition 7 (Proof of security). Let φ and ψ be contexts such that for all (P, Q) ∈ R, Q ⊑ φ[P], P ⊑ ψ[Q], and φ[ψ[Q]] ⊑ Q. Then R is secure.

Proof. Suppose that (P, Q) ∈ R, P ⊑ P′, and (P′, Q′) ∈ R. Then Q ⊑ φ[P] ⊑ φ[P′] ⊑ φ[ψ[Q′]] ⊑ Q′.

Intuitively, R is secure if R and R⁻¹ both satisfy (INCLUSION), and the witnessing contexts "cancel" each other. A simple technique for proving full abstraction for contexts follows as a corollary.

Corollary 8 (Proof of full abstraction for contexts). Let there be a context ϕ⁻¹ such that for all P ∈ W, ϕ⁻¹[ϕ[P]] ≃ P. Then ϕ is a fully abstract context for W.

Proof. Take φ = ϕ⁻¹ and ψ = ϕ in the proposition above to show that {(ϕ[P], P) | P ∈ W} is secure. The converse follows by Proposition 6.

Theory for the applied pi calculus. Let a, b, … range over names, u, v, … over names and variables, M, N, … over terms, and A, B, … over extended processes. Semantic relations include the binary relations ≡, →, and −ℓ→ over extended processes (structural equivalence, reduction, and labeled transition); here labels ℓ are of the form a(M̃) or (ν ũ) a⟨ṽ⟩ (where a ∉ ũ and ũ ⊆ ṽ). Both → and −ℓ→ are closed under ≡, and → is closed under arbitrary evaluation contexts.
We recall some theory on may testing for applied pi calculus programs.

Definition 7 (Barb). A barb ↓a is a predicate that tests possible output on a; we write A ↓a if A −(ν ũ) a⟨ṽ⟩→ B for some B, ṽ, and ũ. A weak barb ⇓a tests possible eventual output on a, i.e., ⇓a ≜ →⋆ ↓a.

Definition 8 (Frame). Let A be closed. Then we have A ≡ (ν ã)(σ | P) for some ã, σ, and P such that fv(rng(σ)) ∪ fv(P) = ∅; define frame(A) ≡ (ν ã) σ.

Definition 9 (Static equivalence). Let A and B be closed. Then A is statically equivalent to B, written A ≈s B, if there exist ã, σ, and σ′ such that frame(A) ≡ (ν ã) σ, frame(B) ≡ (ν ã) σ′, dom(σ) = dom(σ′), and for all M and N,

{ã} ∩ (fn(M) ∪ fn(N)) = ∅ ⇒ (Mσ = Nσ ⇔ Mσ′ = Nσ′)

Proposition 9. A ≈s B if and only if frame(A) ≃ frame(B).

Proof. By induction on the structure of closing evaluation contexts.

We can prove ⊑ by showing a simulation relation that approximates ⊑.

Definition 10 (Simulation preorder). Let ≼ be the largest relation S such that for all A and B, (A, B) ∈ S implies
• A ≈s B
• ∀A′. A → A′ ⇒ ∃B′. B →⋆ B′ ∧ (A′, B′) ∈ S
• ∀A′, ℓ. A −ℓ→ A′ ⇒ ∃B′. B →⋆ −ℓ→ →⋆ B′ ∧ (A′, B′) ∈ S

Proposition 10 (Proof of testing precongruence). ≼ ⊆ ⊑.

5 Models and proofs for static access policies

We now present implementation and specification models and security proofs for static access policies. Models and proofs for dynamic access policies follow essentially the same routine, and are presented in the next section.

5.1 Preliminaries

We fix an equational theory Σ with the following properties.
• Σ includes a theory of natural numbers with symbols 0 (zero), +1 (successor), and ≤ (less than or equal to).
• Σ includes a theory of finite tuples with symbols ⟨ , ⟩ (indexed concatenate) and . (indexed project).
• Σ contains exactly one equation that involves the symbol mac, which is msg(mac(x, y)) = x.

Clients are identified by natural numbers; we fix a finite subset I of ℕ and consider any user not identified in I to be dishonest. File-system code and other processes are conveniently modeled by parameterized process expressions, whose semantics are defined (recursively) by extending the usual semantic relations ≡, →, and −ℓ→.

5.2 Models

Figures 1 and 2 show applied pi calculus models for the file systems under study. We ignore the rules in the inner boxes in these figures (labeled (DUMMY …)) in a first reading.

Figure 1 models a traditional file system (with local access control). The file system is parameterized by an access policy F, a store ρ, and a renaming η of its default interface. That interface includes a channel β◦_k for every k ∈ ℕ; intuitively, a user identified by k may send operation requests on this channel. Processes Req_k(F, op, n) and EOk(M, op, n) denote internal states. In the equational theory, auth(F, k, op) = ok means that user k may access op under F, and exec(L, op, ρ) = ⟨N, ρ′⟩ means that the execution of op on store ρ under decision L returns N and store ρ′. Decisions are derived by perm(·, ·, ·) as follows:

perm(F, k, op) ≜ L, where L = true if auth(F, k, op) = ok, and L = false otherwise.

[Figure 1: A traditional file system with local access control]

A traditional storage system may be described as

(ν_{i∈I} β◦_i)(C | TFS(F, ρ))

Here C is code run by honest users; the file system exports the default interface (implicitly renamed by "identity"), and channels associated with honest users are hidden from the context. The context may be arbitrary and is left implicit; in particular, channels associated with dishonest users are available to the context.

Figure 2 models a network-attached file system (with distributed access control). As above, the file system is parameterized by an access policy F, a store ρ, and a renaming η of its default interface.

[Figure 2: A network-attached file system with distributed access control]
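The capability scheme that Figure 2's cert and verif derivations realize, a MAC under a real key when access is granted and under a fake key otherwise, can be sketched as follows. This is an illustrative model only: HMAC stands in for mac, and the policy is a simple dictionary; the key names echo the model's K_MD and K′_M.

```python
# Sketch of cert/verif: a capability is a MAC over (k, op) under K_MD
# when access is granted and under the fake key K'_M otherwise, so a
# dishonest user cannot tell a denial from a grant by inspection.
import hmac, hashlib, os

K_MD, K_M_fake = os.urandom(32), os.urandom(32)

def auth(F, k, op):
    # ok iff the policy grants user k access to op
    return F.get((k, op), False)

def cert(F, k, op):
    key = K_MD if auth(F, k, op) else K_M_fake
    msg = f"{k}:{op}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def verif(cap):
    # only the verifier, holding both keys, learns the decision
    msg, tag = cap
    for key, L in ((K_MD, True), (K_M_fake, False)):
        if hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()):
            return L
    return None  # not a capability issued by this system
```

The point of the fake key is exactly the premature-leak concern of (R3): a capability is issued for every authorization request, so its mere existence reveals nothing about F.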
That interface includes channels α_k and β_k for every k ∈ ℕ; intuitively, a user identified by k may send authorization requests on α_k and execution requests on β_k. Processes CReq_k(F, op, c), Req(κ, n), and EOk(M, op, n) denote internal states. In the equational theory auth(F, k, op) = ok and exec(L, op, ρ) = ⟨N, ρ′⟩ have the same meanings as above. Capabilities and decisions are derived by cert(·, ·, ·) and verif(·) as follows.

cert(F, k, op) ≜ mac(⟨k, op⟩, a), where a = K_MD if auth(F, k, op) = ok, and a = K′_M otherwise.
verif(κ) ≜ L, where L = true if κ = mac(msg(κ), K_MD), and L = false if κ = mac(msg(κ), K′_M).

A network-attached storage system may be described as

(ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))

As above, C is code run by honest users; the file system exports the default interface and hides the keys that authenticate capabilities. Channels associated with honest users are hidden from the context. The context may be arbitrary and is left implicit; in particular, channels associated with dishonest users are available to the context.

5.3 Proofs of security

We prove that the implementation is secure, safe, and fully abstract with respect to the specification. We begin by outlining the proofs, and then present details.

5.3.1 Outline

Let F, ρ, and C range over access policies, stores, and code for honest users that are "well-formed" in the implementation. Let ⌈·⌉ abstract such F, ρ, and C in the specification.

[Figure 3: Abstraction function]

We define

R = ⋃_{F,ρ,C} { ( (ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉)), (ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ)) ) }

We prove that R is secure by showing contexts φ and ψ such that:

Lemma 11. For any F, ρ, and C,
1. (ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ)) ⊑ φ[(ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉))]
2. (ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉)) ⊑ ψ[(ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))]
3. φ[ψ[(ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))]] ⊑ (ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))

Proposition 7 then applies. Moreover we show:

Lemma 12. For any F, ρ, and C,

ψ[φ[(ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉))]] ⊑ (ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉))

Now R⁻¹ is secure by Proposition 7. Thus R is proved fully abstract. Moreover Lemmas 11.1–2 already imply the converse of Lemma 12; so φ is a fully abstract context by Corollary 8 (taking φ⁻¹ = ψ). Thus R is proved safe.

We now revisit Figures 1 and 2 and focus on the rules in the inner boxes. Those rules define processes D(TS←NAS) and D(NAS←TS). Intuitively, these processes translate public requests from NAS_s to TS_s and from TS_s to NAS_s. Let ã_TS and ã_NAS include the public interfaces of TS_s and NAS_s.
We define

φ = (ν ã_TS)(• | D(TS←NAS))
ψ = (ν ã_NAS)(• | D(NAS←TS))

[Figure 4: Simulation relation for Lemma 11.1 (≼ φ[·])]

The abstraction function ⌈·⌉ is shown in Figure 3. Here A contains special names whose uses in well-formed code are either disciplined or forbidden.

A ≜ {α_i, β_i | i ∈ I} ∪ {α_j?, β_j?, β◦_j? | j ∈ ℕ\I} ∪ {K_MD, K′_M, K?}

The names in {α_j?, β_j?, β◦_j? | j ∈ ℕ\I} ∪ {K?} are invented to simplify proofs below.

5.3.2 Simulation relations

Figures 4, 5, and 6 show simulation relations for Lemmas 11.1–3. All these relations are closed under ≡. Here η1 and η2 rename the public interfaces of NAS_s and TS_s, and η3 renames the private authentication keys K_MD and K′_M.

η1 ≜ [α_j ↦ α_j?, β_j ↦ β_j? | j ∈ ℕ\I]
η2 ≜ [β◦_j ↦ β◦_j? | j ∈ ℕ\I]
η3 ≜ [a ↦ K? | a ∈ {K_MD, K′_M}]

These renamings map to names in A that do not occur in well-formed code (see Figure 3). In particular, the purpose of η1 and η2 is to rename some public channels to fresh ones that can be hidden by restriction in ψ and φ. (A similar purpose is served by quantification in logic.) Hiding those names strengthens Lemmas 11.1–2 while not affecting their proofs; but more importantly, the restrictions are required to prove Lemma 11.3. Further, the purpose of η3 is to abstract terms that may be available to contexts. Such terms must be of type Export; intuitively, K_MD and K′_M may appear only as authentication keys in capabilities issued to dishonest users.

N : F-Export if N = N′σ, {K_MD, K′_M, K?} ∩ fn(N′) = ∅, and every L ∈ rng(σ) is of the form cert(F, j, op) for some j ∈ ℕ\I and op with op : F-Export.

[Figure 5: Simulation relation for Lemma 11.2 (≼ ψ[·])]

We show that term abstraction preserves equivalence in the equational theory.

Lemma 13. Suppose that M : F-Export and N : F-Export. Then M = N iff η3(M) = η3(N).

This lemma is required to show static equivalence in proofs of soundness for the relations S, T, and U in Figures 4, 5, and 6, which in turn lead to Lemma 11.
We prove that those relations are included in the simulation preorder.

Lemma 14. S ⊆ ≼, T ⊆ ≼, and U ⊆ ≼.

Intuitively, by S a network-attached storage system may be simulated by a traditional storage system by forwarding public requests directed at NAFS to a hidden TFS interface (via φ). Symmetrically, by T a traditional storage system may be simulated by a network-attached storage system by forwarding public requests directed at TFS to a hidden NAFS interface (via ψ). Finally, by U a network-attached storage system may simulate another network-attached storage system by filtering requests directed at NAFS through a hidden TFS interface before forwarding them to a hidden NAFS interface (via φ[ψ]). This rather mysterious detour forces a fresh capability to be acquired for every execution request.

[Figure 6: Simulation relation for Lemma 11.3 (φ[ψ[·]] ≼)]

[Figure 7: Simulation relation for Lemma 12 (ψ[φ[·]] ≼)]

By definition of ⌈·⌉ and alpha-conversion to default public interfaces, we have for any F, ρ, and C:

1. (ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ)) ≼ φ[(ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉))]
2. (ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉)) ≼ ψ[(ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))]
3. φ[ψ[(ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))]] ≼ (ν_{i∈I} α_i β_i)(C | (ν K_MD K′_M) NAFS(F, ρ))

Lemma 11 follows by Proposition 10. Thus R is secure.

Further, Figure 7 shows a simulation relation for Lemma 12. We prove that the relation V is included in the simulation preorder.

Lemma 15. V ⊆ ≼.

By definition of ⌈·⌉ and alpha-conversion to default public interfaces, we have for any F, ρ, and C:

ψ[φ[(ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉))]] ≼ (ν_{i∈I} β◦_i)(⌈C⌉ | TFS(⌈F⌉, ⌈ρ⌉))

Lemma 12 follows by Proposition 10. Thus R is safe and fully abstract.

6 Models and proofs for dynamic access policies

Next we present models and proofs for dynamic access policies, following the routine of Section 5.

6.1 Models

The models extend those in Section 5, and are shown in Figures 8 and 9. (As usual, we ignore the rules in the inner boxes in a first reading.) Interfaces are extended with channels δ_k and δ◦_k for every k, on which users identified by k send administration requests in the implementation and the specification. In the equational theory auth(F, k, op) = ok and exec(L, op, ρ) = ⟨N, ρ′⟩ have the same meanings as in Section 5. Capabilities are derived by cert(·, ·, ·, ·) as follows.

cert(F, k, op, Clk) ≜ mac(⟨k, op, Clk⟩, a), where a = K_MD if auth(F, k, op) = ok, and a = K′_M otherwise.

Recall that administrative operations scheduled at time Clk are executed at the next clock tick (to Clk + 1).
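The dynamic model's two key mechanisms, timestamped capabilities and deferred administrative updates, can be sketched executably. This is illustrative Python, not the paper's calculus: HMAC stands in for mac, and push/sync are modeled as a simple deferred-update schedule Ξ applied at each clock tick.

```python
# Sketch of the dynamic-policy file system: capabilities MAC in the
# issuing time Clk, and administrative operations are scheduled (push)
# and take effect only at the next clock tick (sync).
import hmac, hashlib, os

K_MD, K_M_fake = os.urandom(32), os.urandom(32)

class NAFS:
    def __init__(self, F):
        self.F, self.schedule, self.clk = dict(F), [], 0

    def cert(self, k, op):
        # cert(F, k, op, Clk): the timestamp is part of the MAC'd message
        key = K_MD if self.F.get((k, op), False) else K_M_fake
        msg = f"{k}:{op}:{self.clk}".encode()
        return msg, hmac.new(key, msg, hashlib.sha256).digest()

    def push(self, update):
        # administrative operations are scheduled, not applied immediately
        self.schedule.append(update)

    def tick(self):
        # sync(F, Xi, Clk): scheduled updates take effect at the clock tick
        for k_op, allowed in self.schedule:
            self.F[k_op] = allowed
        self.schedule, self.clk = [], self.clk + 1
```

A capability issued before a tick is thus signed under the pre-tick policy, which is what lets an expired-capability use in NS_d+ map onto a time-bounded request in IS_d+.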
In the equational theory push(L, adm, Ξ, Clk) = ⟨N, Ξ′⟩ means that an administrative operation adm pushed on schedule Ξ under decision L at Clk returns N and the schedule Ξ′; and sync(F, Ξ, Clk) = F′ means that an access policy F synchronized under schedule Ξ at Clk returns the access policy F′.

A traditional storage system may be described as

(ν_{i∈I} α◦_i β◦_i δ◦_i)(C | TFS(F, ∅, 0, ρ))

where C is code run by honest users, F is an access policy, and ρ is a store; initially the schedule is empty and the time is 0. Similarly a network-attached storage system may be described as

(ν_{i∈I} α_i β_i δ_i)(C | (ν K_MD K′_M) NAFS(F, ∅, 0, ρ))

As usual, let F, ρ, and C range over access policies, stores, and code for honest users that are "well-formed" in the implementation, and let ⌈·⌉ abstract such F, ρ, and C in the specification. We define

R = ⋃_{F,ρ,C} { ( (ν_{i∈I} α◦_i β◦_i δ◦_i)(⌈C⌉ | TFS(⌈F⌉, ∅, 0, ⌈ρ⌉)), (ν_{i∈I} α_i β_i δ_i)(C | (ν K_MD K′_M) NAFS(F, ∅, 0, ρ)) ) }

Figure 10 shows the abstraction function ⌈·⌉. Here

A = {α_j?, β_j?, δ_j?, α◦_j?, β◦_j?, δ◦_j? | j ∈ ℕ\I} ∪ {K_MD, K′_M, K?} ∪ {α_i, β_i, δ_i | i ∈ I}

6.2 Examples of security

At this point we revisit the "counterexamples" in Section 3. By modeling them formally in this setting, we show that those counterexamples are eliminated. Recall (t1) and (t2).
[Figure 8: A traditional file system with local access control]

[Figure 9: A network-attached file system with distributed access control]

[Figure 10: Abstraction function]

t1  acquire κ; chmod ζ; use κ; success κ
t2  chmod ζ; acquire κ; use κ; success κ

The following fragments of NAS_d code formalize these traces.

I1  (ν c) α_i⟨op, c⟩; c(κ); (ν m) δ_i⟨ζ, m⟩; m(z); (ν n) β_i⟨κ, n⟩; n(x); [success(x)] w⟨⟩
I2  (ν m) δ_i⟨ζ, m⟩; m(z); (ν c) α_i⟨op, c⟩; c(κ); (ν n) β_i⟨κ, n⟩; n(x); [success(x)] w⟨⟩

This code is abstracted to the following fragments of TS_d code.

S1  (ν c) α◦_i⟨c⟩; c(τ); (ν m) δ◦_i⟨ζ, m⟩; m(z); (ν n) β◦_i⟨op, τ, n⟩; n(x); [success(x)] w⟨⟩
S2  (ν m) δ◦_i⟨ζ, m⟩; m(z); (ν c) α◦_i⟨c⟩; c(τ); (ν n) β◦_i⟨op, τ, n⟩; n(x); [success(x)] w⟨⟩

Now whenever (I1) and (I2) can be distinguished, so can (S1) and (S2). Indeed the time bound τ is the same as the timestamp in κ; so (in particular) the operation request in (S1) is dropped whenever the execution request in (I1) is dropped. A similar argument counters the "dangerous" example with (t4) and (t5):

t4  acquire κ; chmod ζ′; acquire κ′; use κ′; success κ′; use κ; success κ
t5  chmod ζ′; acquire κ′; use κ′; success κ′; acquire κ; use κ; success κ

Finally, recall (t8) and (t9).

t8  acquire κ; use κ; c(); chmod ζ; c(); success κ; w⟨⟩
t9  c(); c(); w⟨⟩

The following fragment of NAS_d code formalizes (t8).
I3   (νm) α_i⟨op, m⟩; m(κ); (νn) β_i⟨κ, n⟩; c(); (νm) δ_i⟨ζ, m⟩; m(z); c(); n(x); [success(x)] w⟨⟩

This code is abstracted to the following fragment of TS_d code.

S3   (νm) α°_i⟨m⟩; m(τ); (νn) β°_i⟨op, τ, n⟩; c(); (νm) δ°_i⟨ζ, m⟩; m(z); c(); n(x); [success(x)] w⟨⟩

A NAS_d context distinguishes (I3) and (t9):

  c⟨⟩; α_j⟨op′, m0⟩; m0(κ′0); β_j⟨κ′0, n0⟩; n0(x); [failure(x)] δ_j⟨ζ, p⟩; α_j⟨op′, m1⟩; m1(κ′1); β_j⟨κ′1, n1⟩; n1(x); [success(x)] c⟨⟩

But likewise a TS_d context distinguishes (S3) and (t9):

  c⟨⟩; α°_j⟨m0⟩; m0(τ′0); β°_j⟨op′, τ′0, n0⟩; n0(x); [failure(x)] δ°_j⟨ζ, p⟩; α°_j⟨m1⟩; m1(τ′1); β°_j⟨op′, τ′1, n1⟩; n1(x); [success(x)] c⟨⟩

6.3 Proofs of security

We show that R is secure, safe, and fully abstract. Recall the contexts φ and ψ defined in Section 5. The processes ⟦NAS⟧TS and ⟦TS⟧NAS are redefined in the inner boxes in Figures 8 and 9. In particular, the rule (DUMMY OP REQ) in Figure 9 translates time-bounded operation requests by TS_d contexts.

Simulation relations for security are shown in Figures 11, 12, and 13, and a simulation relation for safety and full abstraction is shown in Figure 14. Here

  η1 ≜ [α_j ↦ α_j?, β_j ↦ β_j?, δ_j ↦ δ_j? | j ∈ N\I]
  η2 ≜ [α°_j ↦ α°_j?, β°_j ↦ β°_j?, δ°_j ↦ δ°_j? | j ∈ N\I]

A binary relation ⇝ ("leads-to") is defined over the product of access policies and clocks. Access policies may change at clock ticks (but not between).

  (F′, Clk′) ⇝ (F, Clk)  ≜  (Clk′ < Clk) ∨ (Clk′ = Clk ∧ F′ = F)

As usual, any term that may be available to contexts must be of type Export.

  N = N′σ        {K_MD, K′_M, K?} ∩ fn(N′) = ∅
  ∀L ∈ rng(σ). ∃j ∈ N\I, op, Clk′. op :_{F,Clk} Export ∧ ((F(Clk′), Clk′) ⇝ (F, Clk)) ∧ L = cert(F(Clk′), j, op, Clk′)
  ─────────────────────────────────────────────────
  N :_{F,Clk} Export

We prove that the relations S, T, and U in Figures 11, 12, and 13 are included in the simulation preorder. Some interesting points in those proofs are listed below.

• In Section 5, when an operation request is sent in TS_s we send an appropriate authorization request in NAS_s, obtain a capability, and send an execution request with that capability (see T in Figure 5). In contrast, here when an operation request is sent in TS_d we wait after sending an appropriate authorization request in NAS_d (see T in Figure 12); we continue only when that operation request in TS_d is processed, at which point we obtain a capability in NAS_d, send an execution request with that capability, and process the execution request. But why wait? Suppose that the operation request in TS_d carries a time bound ∞; now if we obtain a capability in NAS_d before the operation request in TS_d is processed, we commit to a finite time bound, which breaks the simulation.

• As before, φ[ψ] forces a fresh capability to be acquired for every execution request by filtering execution requests in NAS_d through TS_d and back. When an execution request is sent in NAS_d under φ[ψ] we send an execution request with the same capability in NAS_d (see U in Figure 13). But under φ[ψ] a fresh capability is obtained and the execution request is sent again with the fresh capability. If the capability in the original request expires before the fresh capability, the simulation breaks. Fortunately operation requests in TS_d carry time bounds, so we can communicate this expiry bound through TS_d. In fact there seems to be no way around this problem unless time bounds can be specified in operation requests in TS_d!

Figure 11: Simulation relation for Lemma 16.1 (≼ φ[·]), with rules grouped under (FILE SYSTEMS), (HONEST USERS), (TRUSTED CODE), and (SYSTEM CODE)

Figure 12: Simulation relation for Lemma 16.2 (≼ ψ[·]), with rules grouped under (FILE SYSTEMS), (HONEST USERS), (TRUSTED CODE), and (SYSTEM CODE)

Figure 13: Simulation relation for Lemma 16.3 (φ[ψ[·]] ≼), with rules grouped under (FILE SYSTEMS), (HONEST USERS), (TRUSTED CODE), and (SYSTEM CODE)

Figure 14: Simulation relation for Lemma 17 (ψ[φ[·]] ≼), with rules grouped under (FILE SYSTEMS), (HONEST USERS), and (SYSTEM CODE)

By Proposition 10 we have:

Lemma 16. For any F, ρ, and C,

1. (ν_{i∈I} α_i β_i δ_i)(C | (ν K_MD K′_M) NAFS(F, ∅, 0, ρ)) ≼ φ[(ν_{i∈I} α°_i β°_i δ°_i)(⌈C⌉ | TFS(⌈F⌉, ∅, 0, ⌈ρ⌉))]

2. (ν_{i∈I} α°_i β°_i δ°_i)(⌈C⌉ | TFS(⌈F⌉, ∅, 0, ⌈ρ⌉)) ≼ ψ[(ν_{i∈I} α_i β_i δ_i)(C | (ν K_MD K′_M) NAFS(F, ∅, 0, ρ))]

3. φ[ψ[(ν_{i∈I} α_i β_i δ_i)(C | (ν K_MD K′_M) NAFS(F, ∅, 0, ρ))]] ≼ (ν_{i∈I} α_i β_i δ_i)(C | (ν K_MD K′_M) NAFS(F, ∅, 0, ρ))

So by Proposition 7, R is secure.
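The leads-to relation over policy/clock pairs used throughout these simulation relations has a direct computational reading. A minimal Python sketch (representing policies as plain comparable values is our assumption; the paper leaves F abstract):

```python
def leads_to(f_old, clk_old, f_new, clk_new):
    """(F', Clk') leads to (F, Clk): either the clock has strictly
    advanced (and the policy may have changed at a tick), or the clock
    is unchanged and then the policy must be unchanged too."""
    return clk_old < clk_new or (clk_old == clk_new and f_old == f_new)
```

This captures the invariant stated above: access policies may change only at clock ticks, so between ticks a capability checked against (F, Clk) can rely on F not having drifted.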
Further we prove that the relation V in Figure 14 is also included in the simulation preorder. By Proposition 10 we have:

Lemma 17. For any F, ρ, and C,

  ψ[φ[(ν_{i∈I} α°_i β°_i δ°_i)(⌈C⌉ | TFS(⌈F⌉, ∅, 0, ⌈ρ⌉))]] ≼ (ν_{i∈I} α°_i β°_i δ°_i)(⌈C⌉ | TFS(⌈F⌉, ∅, 0, ⌈ρ⌉))

So by Lemmas 16.1–2 and Corollary 8, R is safe and fully abstract.

7 Designing secure distributed protocols

In the preceding sections, we present a thorough analysis of the problem of distributing access control. Let us now apply that analysis to a more general problem.

Suppose that we are required to design a distributed protocol that securely implements a specification. (The specification may be an arbitrary computation.) We can solve this problem by partitioning the specification into smaller computations, running those computations in parallel, and securing the intermediate outputs of those computations so that they may be released and absorbed in any order. In particular, we can design NS_d+ by partitioning IS_d+ into access control and storage, running them in parallel, and securing the intermediate outputs of access control as capabilities. The same principles should guide any such design. For instance, by (R3) and (R4) intermediate outputs should not leak information prematurely; by (R5) and (R6) such outputs must be timestamped and the states on which they depend must not change between clock ticks; and by (A5) the specification must be generalized with time bounds.

Computation as a graph  We describe a computation as a directed graph G(V, E). The input nodes, collected by V_i ⊆ V, are the nodes of indegree 0. The output nodes, collected by V_o ⊆ V, are the nodes of outdegree 0. Further, we consider a set of state nodes V_s ⊆ V such that V_i ∩ V_s = ∅. As a technicality, any node that is in a cycle or has outdegree > 1 must be in V_s.
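The graph model just described can also be read operationally. The following Python sketch (all names are ours, for illustration only) represents such a computation graph and performs one reduction step at a chosen node, consuming non-state inputs and, at a state node, folding in the node's current value and ticking its clock:

```python
# A computation graph: edges, input/output/state node sets, and per-node
# label functions. Names here are illustrative, not the paper's.
class Graph:
    def __init__(self, edges, inputs, outputs, states, labels):
        self.edges = edges      # list of (u, v) pairs
        self.inputs = inputs    # V_i: indegree-0 nodes
        self.outputs = outputs  # V_o: outdegree-0 nodes
        self.states = states    # V_s: state nodes (each carries a clock)
        self.labels = labels    # v -> lambda_v

    def preds(self, v):
        # incoming nodes of v; list order stands in for the total order
        return [u for (u, w) in self.edges if w == v]

def step(g, sigma, tau, v):
    """One reduction of configuration (sigma, tau) at node v.
    sigma: partial map node -> value (defined at least on V_s);
    tau: total map state node -> clock."""
    us = g.preds(v)
    ts = [sigma[u] for u in us]  # all incoming values must be present
    # non-state inputs are "consumed" on input
    sigma2 = {u: t for u, t in sigma.items() if u in g.states or u not in us}
    if v in g.states:
        # state nodes also read their own value, and tick their clock
        sigma2[v] = g.labels[v](*ts, sigma[v])
        tau = {**tau, v: tau[v] + 1}
    else:
        sigma2[v] = g.labels[v](*ts)
    return sigma2, tau
```

For instance, a one-input, one-state, one-output chain with an accumulating state node can be stepped twice to push an input value through the state into the output.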
Nodes other than the input nodes run some code. Let M contain all terms and ⊏ be a strict total order on V. We label each v ∈ V \ (V_i ∪ V_s) with a function λv : M^in(v) → M, and each v ∈ V_s with a function λv : M^in(v) × M → M. Further, each state node carries a shared clock, following the midnight-shift scheme. A configuration (σ, τ) consists of a partial function σ : V → M such that dom(σ) ⊇ V_s, and a total function τ : V_s → ℕ. Intuitively, σ assigns values at the state nodes and some other nodes, and τ assigns times at the state nodes. For any v ∈ V \ V_i, the function λv outputs the value at v, taking as inputs the values at each incoming u, and the value at v if v is a state node; further, if such u ∉ V_s, the value at u is "consumed" on input. Formally, the operational semantics is given by a binary relation → over configurations.

  v ∈ V \ (V_i ∪ V_s)        ∀k ∈ 1..in(v). (u_k, v) ∈ E ∧ σ(u_k) = t_k
  u1 ⊏ … ⊏ u_in(v)        σ⁻ = σ|_{V_s ∪ (V \ {u1, …, u_in(v)})}
  ─────────────────────────────────────────────────
  (σ, τ) → (σ⁻[v ↦ λv(t1, …, t_in(v))], τ)

  v ∈ V_s        τ(v) = Clk        σ(v) = t        ∀k ∈ 1..in(v). (u_k, v) ∈ E ∧ σ(u_k) = t_k
  u1 ⊏ … ⊏ u_in(v)        σ⁻ = σ|_{V_s ∪ (V \ {u1, …, u_in(v)})}
  ─────────────────────────────────────────────────
  (σ, τ) → (σ⁻[v ↦ λv(t1, …, t_in(v), t)], τ[v ↦ Clk + 1])

As usual, we leave the context implicit; the adversary is an arbitrary context that can write values at V_i, read values at V_o, and read times at V_s. For example, a graph that describes IS_d+ is:

  •1 → ⋆2 ⇄ ⋆4 → •6 → ⋆7 → •8
        ↓         ↑
        •3        •5

Here V_i = {•1, •5}, V_o = {•3, •8}, V_s = {⋆2, ⋆4, ⋆7}, and V = V_i ∪ V_o ∪ V_s ∪ {•6}.
Intuitively, ⋆2 carries accumulators, and •1 and •3 carry inputs and outputs for access modifications; ⋆4 carries access policies, and •6 carries access decisions; ⋆7 carries stores, and •5 and •8 carry inputs and outputs for store operations. We define:

  λ⋆2(⟨k, θ⟩, F, ⟨_, Ξ⟩) = exec(perm(F, k, θ), θ, Ξ)
  λ•3(⟨N, Ξ⟩) = N
  λ⋆4(⟨_, Ξ⟩, _) = Ξ
  λ•6(F, ⟨k, op⟩) = ⟨op, perm(F, k, op)⟩
  λ⋆7(⟨op, L⟩, ⟨_, ρ⟩) = exec(L, op, ρ)
  λ•8(⟨N, ρ⟩) = N

Distribution as a graph cut  Once described as a graph, a computation can be distributed along any cut of that graph. For instance, IS_d+ can be distributed along the cut {(•6, ⋆7)} to obtain NS_d+. We present this derivation formally in several steps.

Step 1  For each v ∈ V, let S(v) ⊆ V_s be the set of state nodes that have paths to v, and I(v) ⊆ V_i be the set of input nodes that have paths to v without passing through nodes in V_s. Then G(V, E) can be written in a form where, loosely, the values at I(v) and the times at S(v) are explicit in σ(v) for each node v. Formally, the explication of G is the graph Ĝ(V̂, Ê) where V̂ = V ∪ {v̂ | v ∈ V_i} ∪ {û | u ∈ V_o} and Ê = E ∪ {(v̂, v) | v ∈ V_i} ∪ {(u, û) | u ∈ V_o}. We define:

  v ∈ V_i  ⟹  λ̂v(t) = ⟨t, t⟩
  v ∈ V_o  ⟹  λ̂v̂(⟨_, t⟩) = t
  v ∈ V \ (V_i ∪ V_s), λv(t1, …, t_in(v)) = t  ⟹  λ̂v(⟨I1, t1⟩, …, ⟨I_in(v), t_in(v)⟩) = ⟨⟨I1 … I_in(v)⟩, t⟩
  v ∈ V_s, σ(v) = ⟨Clk, t⟩, λv(t1, …, t_in(v), t) = t′  ⟹  λ̂v(⟨_, t1⟩, …, ⟨_, t_in(v)⟩, ⟨Clk, t⟩) = ⟨Clk + 1, t′⟩

This translation is sound and complete.

Theorem 18. Ĝ is fully abstract with respect to G.
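Step 1's translation can also be sketched computationally. The following Python fragment (names ours; values are arbitrary Python objects, an assumption of this sketch) wraps a node's label function so that non-state nodes carry ⟨input record, value⟩ pairs and state nodes carry ⟨clock, value⟩ pairs, as in the rules above:

```python
def explicate_plain(lam):
    """hat-lambda for v in V \\ (V_i + V_s): inputs arrive as (I_k, t_k)
    pairs; the output pairs the concatenated input records with the
    plain result lambda_v(t_1, ..., t_in(v))."""
    def hat(*pairs):
        records = tuple(I for I, _ in pairs)
        ts = [t for _, t in pairs]
        return (records, lam(*ts))
    return hat

def explicate_state(lam):
    """hat-lambda for v in V_s: the node's own value is a (Clk, t) pair;
    the result carries the ticked clock alongside the new value."""
    def hat(*args):
        *pairs, (clk, t) = args
        ts = [t_k for _, t_k in pairs]
        return (clk + 1, lam(*ts, t))
    return hat

def explicate_input(t):
    """hat-lambda for v in V_i: pair the input value with itself, making
    it explicit downstream."""
    return (t, t)
```

In the IS_d+ example, this is exactly why σ(•6) becomes ⟨⟨k, op, Clk⟩, ⟨op, perm(F, k, op)⟩⟩: the explicated access check carries its input record and time alongside its decision.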
For example, the explication of the graph for IS_d+ is:

  •1 → ⋆2 ⇄ ⋆4 → •6 → ⋆7 → •8
  ↑     ↓         ↑          ↓
  •̂1    •3        •5         •̂8
        ↓         ↑
        •̂3        •̂5

Here σ(•6) is of the form ⟨⟨k, op, Clk⟩, ⟨op, perm(F, k, op)⟩⟩ rather than ⟨op, perm(F, k, op)⟩; the "input" σ(•̂5) = ⟨k, op⟩, the "time" τ(⋆4) = Clk, and the "output" ⟨op, perm(F, k, op)⟩ of an access check are all explicit in σ(•6). A capability can be conveniently constructed from this form (see below).

Step 2  Next, let E0 be any cut. As a technicality, we assume that E0 ∩ ((V_i ∪ V_s) × V) = ∅. The distribution of G along E0 is the graph G$(V$, E$), where V$ = V̂ ∪ {v̄ | (v, _) ∈ E0} ∪ {v$ | (v, _) ∈ E0} and E$ = (Ê \ E0) ∪ {(v̄, v$) | (v, _) ∈ E0} ∪ {(v$, u) | (v, u) ∈ E0}. Let K_v and E_v be secret keys shared by v and v$ for every (v, _) ∈ E0. We define:

  (v, _) ∈ E0, λ̂v(t1, …, t_in(v)) = ⟨t, t′⟩, m is fresh  ⟹  λ$v(t1, …, t_in(v)) = mac(⟨t, {m, t′}_{E_v}⟩, K_v)
  (v, _) ∈ E0, τ(S(v)) is included in t  ⟹  λ$v$(⟨t, mac(⟨t, {_, t′}_{E_v}⟩, K_v)⟩) = ⟨t, t′⟩
  v ∈ V \ V_i, (v, _) ∉ E0  ⟹  λ$v = λ̂v

Intuitively, for every (v, _) ∈ E0, v$ carries the same values in G$ as v does in G; those values are encoded and released at v, absorbed at v̄, and decoded back at v$. For example, the distribution of the graph for IS_d+ along the cut {(•6, ⋆7)} is:

  •1 → ⋆2 ⇄ ⋆4 → •6    ⋆7 → •8
  ↑     ↓         ↑     ↑     ↓
  •̂1    •3        •5    •$6    •̂8
        ↓         ↑     ↑
        •̂3        •̂5    •̄6

This graph describes a variant of NS_d+. In particular, the node •6 now carries a capability of the form mac(⟨⟨k, op, Clk⟩, {m, ⟨op, perm(F, k, op)⟩}_{E_•6}⟩, K_•6), that secures the input, time, and output of an access check.

Step 3  Finally, G is revised following (A5).
The revision of G along E0 is the graph G#(V#, E#), where V# = V ∪ {v# | (v, _) ∈ E0} and E# = E ∪ {(v#, v) | (v, _) ∈ E0}. We define:

  (v, u) ∈ E0, τ(S(v)) ≤ T  ⟹  λ#v(t1, …, t_in(v), T) = λv(t1, …, t_in(v))
  v ∈ V \ V_i, (v, _) ∉ E0  ⟹  λ#v = λv

Intuitively, for every (v, _) ∈ E0, progress at v requires that the times at S(v) do not exceed the time bounds at v#. For example, the revised form of the graph for IS_d+ is:

  •1 → ⋆2 ⇄ ⋆4 → •6 → ⋆7 → •8
        ↓      ↗  ↑
        •3   •#6  •5

Here •#6 carries a time bound T, and λ#•6(F, ⟨k, op⟩, T) = λ•6(F, ⟨k, op⟩) if τ(⋆4) ≤ T. We prove the following correctness result.

Theorem 19. G$ is fully abstract with respect to G#.

By Theorem 19, the graph for NS_d+ is fully abstract with respect to the revised graph for IS_d+. Similarly, we can design NS_s from IS_s. The induced subgraph of IS_d+ without {•1, ⋆2, •3, ⋆4} describes IS_s. We define λ•6(⟨k, op⟩) = ⟨op, perm(F, k, op)⟩ for some static F. Distributing along the cut {(•6, ⋆7)}, we obtain the induced subgraph of NS_d+ without {•̂1, •1, ⋆2, •3, •̂3, ⋆4}. This graph describes a variant of NS_s, with σ(•6) of the form mac(⟨⟨k, op⟩, {m, ⟨op, perm(F, k, op)⟩}_{E_•6}⟩, K_•6). (Here capabilities do not carry timestamps.) By Theorem 19, the graph for NS_s is fully abstract with respect to a trivially revised graph for IS_s, where λ#•6(⟨k, op⟩, ⟨⟩) = λ•6(⟨k, op⟩).

8 Conclusion

We present a comprehensive analysis of the problem of implementing distributed access control with capabilities. In previous work, we show how to implement static access policies securely [10] and dynamic access policies safely [11].
In this paper, we explain those results in a new light, revealing the several pitfalls that any such design must care about for correctness, while discovering interesting special cases that allow simpler implementations. Further, we present new insights on the difficulty of implementing dynamic access policies securely (a problem that has hitherto remained unsolved). We show that such an implementation is in fact possible if the specification is slightly generalized.

Moreover, our analysis turns out to be surprisingly general. Guided by the same basic principles, we show how to automatically derive secure distributed implementations of other stateful computations. This approach is reminiscent of secure program partitioning [22], and investigating its scope should be interesting future work.

Acknowledgments  This work owes much to Martín Abadi, who formulated the original problem and co-authored our previous work in this area. Many thanks to him and Sergio Maffeis for helpful discussions on this work, and detailed comments on an earlier draft of this paper. It was Martín who suggested the name "midnight-shift". Thanks also to him and Cédric Fournet for clarifying an issue about the applied pi calculus, which led to simpler proofs.

References

[1] M. Abadi. Protection in programming-language translations. In ICALP'98: International Colloquium on Automata, Languages and Programming, pages 868–883. Springer, 1998.

[2] M. Abadi, C. Fournet, and G. Gonthier. Secure implementation of channel abstractions. In Thirteenth Annual IEEE Symposium on Logic in Computer Science, pages 105–116, 1998.

[3] M. Abadi, C. Fournet, and G. Gonthier. Authentication primitives and their compilation. In POPL'00: Principles of Programming Languages, pages 302–315. ACM, 2000.

[4] M. Abadi and L. Lamport. The existence of refinement mappings. Theoretical Computer Science, 82(2):253–284, 1991.

[5] M. Abadi and R. Needham. Prudent engineering practice for cryptographic protocols. IEEE Transactions on Software Engineering, 22(1):6–15, Jan. 1996.

[6] M. Backes, C. Cachin, and A. Oprea. Secure key-updating for lazy revocation. In ESORICS'06: European Symposium on Research in Computer Security, pages 327–346. Springer, 2006.

[7] M. Backes and A. Oprea. Lazy revocation in cryptographic file systems. In SISW'05: Security in Storage Workshop, pages 1–11. IEEE, 2005.

[8] B. Blanchet and A. Chaudhuri. Automated formal analysis of a protocol for secure file sharing on untrusted storage. In S&P'08: Proceedings of the 29th IEEE Symposium on Security and Privacy, pages 417–431. IEEE, 2008.

[9] R. Canetti. Universally composable security: a new paradigm for cryptographic protocols. In FOCS'01: Foundations of Computer Science, pages 136–145, 2001.

[10] A. Chaudhuri and M. Abadi. Formal security analysis of basic network-attached storage. In FMSE'05: Formal Methods in Security Engineering, pages 43–52. ACM, 2005.

[11] A. Chaudhuri and M. Abadi. Formal analysis of dynamic, distributed file-system access controls. In FORTE'06: Formal Techniques for Networked and Distributed Systems, pages 99–114. Springer, 2006.

[12] K. Fu, S. Kamara, and Y. Kohno. Key regression: enabling efficient key distribution for secure distributed storage. In NDSS'06: Network and Distributed System Security, 2006.

[13] H. Gobioff, G. Gibson, and J. Tygar. Security for network attached storage devices. Technical Report CMU-CS-97-185, Carnegie Mellon University, 1997.

[14] S. Goldwasser and M. Bellare. Lecture notes in cryptography, 2001. See http://www.cs.ucsd.edu/users/mihir/papers/gb.html.

[15] S. Halevi, P. A. Karger, and D. Naor. Enforcing confinement in distributed storage and a cryptographic model for access control. Cryptology ePrint Archive, Report 2005/169, 2005. See http://eprint.iacr.org/2005/169.

[16] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu. Plutus: scalable secure file sharing on untrusted storage. In FAST'03: File and Storage Technologies, pages 29–42. USENIX Association, 2003.

[17] S. Maffeis. Dynamic Web Data: A Process Algebraic Approach. PhD thesis, Imperial College London, August 2006.

[18] D. Mazières and D. Shasha. Building secure file systems out of byzantine storage. In PODC'02: Principles of Distributed Computing, pages 108–117. ACM, 2002.

[19] R. Milner. Fully abstract models of typed lambda-calculi. Theoretical Computer Science, 4(1):1–22, 1977.

[20] R. Milner. The polyadic pi-calculus: a tutorial. In F. L. Bauer, W. Brauer, and H. Schwichtenberg, editors, Logic and Algebra of Specification, pages 203–246. Springer-Verlag, 1993.

[21] R. D. Nicola and M. C. B. Hennessy. Testing equivalences for processes. Theoretical Computer Science, 34(1–2):83–133, 1984.

[22] S. Zdancewic, L. Zheng, N. Nystrom, and A. C. Myers. Secure program partitioning. ACM Trans. Comput. Syst., 20(3):283–328, 2002.
