Side Effects in Steering Fragments
Author: Lars Wortel
Side Effects in Steering Fragments

MSc Thesis (Afstudeerscriptie)

written by

L.L. Wortel
(born 12 October 1984 in Heemskerk)

under the supervision of Alban Ponse and Paul Dekker, and submitted to the Board of Examiners in partial fulfillment of the requirements for the degree of MSc in Logic at the Universiteit van Amsterdam.

Date of the public defense: September 5th, 2011

Members of the Thesis Committee:
Dr Alban Ponse
Dr Paul Dekker
Prof Dr Jan van Eijck
Dr Sara Uckelman
Prof Dr Benedikt Löwe

I'm dedicating this thesis to my parents, without whom I would never have gotten to this point. I have tested their patience by taking my time to graduate, but they kept supporting every silly little thing I have ever done. Thanks guys, you're the best.

Abstract

In this thesis I will give a formal definition of side effects. I will do so by modifying a system for modelling program instructions and program states, Quantified Dynamic Logic, to a system called DLAf (for Dynamic Logic with Assignments as Formulas), which in contrast to QDL allows assignments in formulas and makes use of short-circuit evaluation. I will show the underlying logic in those formulas to be a variant of short-circuit logic called repetition-proof short-circuit logic. Using DLAf I will define the actual and the expected evaluation of a single instruction. The side effects are then defined to be the difference between the two. I will give rules for composing those side effects in single instructions, thus scaling up our definition of side effects to a definition of side effects in deterministic DLAf-programs. Using this definition I will give a classification of side effects, introducing as most important class that of marginal side effects. Finally, I will show how to use our system for calculating the side effects in a real system such as Program Algebra (PGA).
Acknowledgements

I would first and foremost like to thank my supervisor Alban Ponse for the big amounts of time and energy he put into guiding me through this project. His advice has been invaluable to me and his enthusiasm has been a huge motivation for me throughout. A thank you also goes out to Jan van Eijck for pointing me in the right direction halfway through the project. Finally I would like to thank my entire thesis committee, consisting of Alban Ponse, Paul Dekker, Jan van Eijck, Sara Uckelman and Benedikt Löwe, for taking the time to read and grade my thesis.

— Lars Wortel, August 2011

Contents

1 Introduction
  1.1 What are side effects?
  1.2 What are steering fragments?
  1.3 Related work
  1.4 Overview of this thesis
2 Preliminaries
  2.1 Introduction
  2.2 Toy language
  2.3 Propositional Dynamic Logic
  2.4 Quantified Dynamic Logic
3 Modifying QDL to DLAf
  3.1 Introducing DLAf
  3.2 A working example
  3.3 Re-introducing WHILE
    3.3.1 The WHILE command
    3.3.2 WHILE in DLAf
    3.3.3 Looping behavior and abnormal termination
4 Terminology
  4.1 Formulas, instructions and programs
  4.2 Normal forms of formulas
  4.3 Deterministic programs and canonical forms
5 The logic of formulas in DLAf
  5.1 Proposition algebra
  5.2 Short-Circuit Logics
  5.3 Repetition-Proof Short-Circuit Logic
6 A treatment of side effects
  6.1 Introduction
  6.2 Side effects in single instructions
  6.3 Side effects in basic instructions
  6.4 Side effects in programs
  6.5 Side effects outside steering fragments
7 A classification of side effects
  7.1 Introduction
  7.2 Marginal side effects
    7.2.1 Introduction
    7.2.2 Marginal side effects in single instructions
    7.2.3 Marginal side effects caused by primitive formulas
  7.3 Other classes of side effects
8 A case study: Program Algebra
  8.1 Program Algebra
    8.1.1 Basics of PGA
    8.1.2 Behavior extraction
    8.1.3 Extensions of PGA
  8.2 Logical connectives in PGA
    8.2.1 Introduction
    8.2.2 Implementation of SCLAnd and SCLOr
    8.2.3 Complex Steering Fragments
    8.2.4 Negation
    8.2.5 Other instructions
  8.3 Detecting side effects in PGA
  8.4 A working example
9 Conclusions and future work

1 Introduction

1.1 What are side effects?

In programming practice, side effects are a well-known phenomenon, even though nobody seems to have an exact definition of what they are. To get a basic idea, here are some examples from natural language and programming that should explain the intuition behind side effects.

Suppose you and your wife have come to an agreement regarding grocery shopping. Upon leaving for work, she told you that "if I don't call, you do not have to do the shopping". Later that day, she calls you to tell you something completely different, for instance that she is pregnant. This call now has as side effect that you no longer know whether you have to do grocery shopping or not, even though the meaning of the call itself was something completely different.

Another example is taken from [9]. Suppose someone tells you that "Phoebe is waiting in front of your door, and you don't know it!" This is a perfectly fine thing to say, but you cannot say it twice because then it will no longer be true that you don't know that Phoebe is waiting (after all, you were just told). Here, the side effect is that your knowledge gets updated by the sentence, which makes the latter part of that sentence, which is a statement about your knowledge, false.

As said, in programming practice, side effects are a well-known phenomenon. Logically, they are interesting because the possible presence of side effects in a program instruction sequence invalidates principles of propositional logic such as commutativity (φ ∧ ψ ↔ ψ ∧ φ) and idempotency (φ ∧ φ ↔ φ).
The textbook example is the following program:

  x:=1
  if (x:=x+1 and x=2) then y

Here the operator := stands for assignment and = for an equality test. Assuming an assignment instruction always succeeds (that is, yields the reply true), in the above example the test φ ∧ ψ, where φ is the instruction x:=x+1 and ψ the instruction x=2, will succeed and therefore, y will be executed. However, should the order of those instructions be reversed (ψ ∧ φ), this no longer will be the case. The reason is that the instruction φ has a side effect: apart from returning true, it also increments the variable x with 1, thus making it 2. If φ is executed before ψ, the test in ψ (x=2) will yield true. Otherwise, it will yield false. It is easy to see that should φ ∧ ψ be executed twice, the end result will also be false. Therefore, for χ = φ ∧ ψ, we have that χ ∧ χ ↮ χ.

1.2 What are steering fragments?

Now that I have given a rough idea of what side effects are, the reader is probably wondering about the second part of my thesis title: that of steering fragments. A steering fragment or test is a program fragment which is concerned with the control flow of the execution of that program. To be exact, a steering fragment will use the evaluation result of a formula (which is a Boolean) and depending on the outcome, will steer further execution of the program. Thus, a steering fragment consists of two parts: a formula and a control part which decides what to do with the evaluation result of that formula. Throughout this thesis, I will be using the terms steering fragment and test interchangeably.

The formula in a steering fragment can either be a primitive or a compound formula. The components of a compound formula are usually connected via logical connectives such as ∧ and ∨, or involve negation.
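The failure of commutativity and idempotency in the textbook example can be checked directly in any language that allows assignments inside tests. Below is a minimal sketch in Python, where the functions phi and psi (illustrative names, not part of the thesis) stand in for the instructions x:=x+1 and x=2, and the built-in short-circuit `and` plays the role of the connective:

```python
x = 1

def phi():
    """The instruction x := x+1: always 'succeeds' (returns True),
    but also changes the state -- its side effect."""
    global x
    x = x + 1
    return True

def psi():
    """The instruction x = 2: a pure equality test, no state change."""
    return x == 2

chi1 = phi() and psi()   # phi runs first: x becomes 2, so psi holds
x = 1
chi2 = psi() and phi()   # psi fails on x = 1; phi is never evaluated
print(chi1, chi2)        # commutativity fails: the order matters

x = 1
twice = (phi() and psi()) and (phi() and psi())
print(twice)             # idempotency fails: x ends up at 3, not 2
```

The point of the sketch is exactly the thesis' observation: because phi both returns a Boolean and updates the state, χ ∧ χ and χ come apart once χ contains an assignment.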
If the formula of a steering fragment is compound, we say that the steering fragment is a complex steering fragment. We have already seen a classical example of a (complex) steering fragment in the previous section: the if ... then instruction. In the example above, the formula is a compound formula with x:=x+1 and x=2 as its components, connected via the logical connective ∧. The control part of this steering fragment consists of if and then and the prescription to execute y if evaluation of x:=x+1 and x=2 yields true.

1.3 Related work

The main contribution of this thesis is to construct a formal model of side effects in dynamic logic. Because of that, I only had limited time and space to properly research related work done in this area. Despite that, I will briefly describe some references I have come across throughout this project.

Currently, a formal definition of side effects appears to be missing in literature. That is not to say that side effects have been completely ignored. Attempts have been made to create a logic which admits the possibility of side effects by Bergstra and Ponse [5]. Furthermore, an initial, informal classification of side effects has been presented by Bergstra in [1]. I will return to those references later in this thesis.

Black and Windley have made an attempt to reason in a setting with side effects in [7, 8]. In their goal to verify a secure application written in C using Hoare axiomatic semantics to express the correctness of program statements, they encountered the problem of side effects occurring in the evaluation of some C-expressions. They solved the problem by creating extra inference rules which essentially separate the evaluation of the side effect from the evaluation of the main expression.

Also working with C is Norrish in [17].
He presents a formal semantics for C and he, too, runs into side effects in the process. Norrish claims that a semantics gives a program meaning by describing the way in which it changes a program state. Such a program state would both include the computer's memory as well as what is commonly known as the environment (types of variables, mapping of variable names to addresses in memory etc.). Norrish claims that in C, changes to the former come about through the actions of side effects, which are created by evaluating certain expression forms such as assignments. Norrish's formal semantics for C is able to handle these side effects.

Böhm presents a different style of axiomatic definitions for programming languages [6]. Whereas other authors such as Black and Windley above use Hoare axiomatic semantics which bases the logic on the notion of pre- or postcondition, Böhm uses the value of a programming language expression as the underlying primitive. He relies on the fact that the underlying programming language is an expression language such as Algol 68 [21]. Expressions are allowed to have arbitrary side effects and the notions of statement and expression coincide. Böhm claims that his formalism is just as intuitive as Hoare-style logic and that the notion of 'easy axiomatizability' — which is a major measurement of the quality of a programming language — is a matter of a choice of formalism, which in turn is arbitrary.

In this thesis I will develop a variant of Dynamic Logic to model side effects. Dynamic Logic is used for a wide range of applications, ranging from modelling key constructs of imperative programming to developing dynamic semantic theories for natural language. An early overview of dynamic logic is given by Harel in [15]. More recently, Van Eijck and Stokhof have given an extensive overview of various systems of dynamic logic in [11].
1.4 Overview of this thesis

Intuitively, a side effect of a propositional statement is a change in state of a program or model other than the effect (or change in state) it was initially executed for. In this thesis I will present a system that makes this intuition explicit.

First, in Chapter 2 I will present the preliminaries on which my system, that can model program instructions and their effect on program states, is based. This system, which I present in Chapter 3, will be a modified version of Quantified Dynamic Logic, overviews of which can be found in [15, 11]. After introducing some terminology and exploring the logic behind this system in Chapters 4 and 5, I can formally define side effects, which I will do in Chapter 6. In Chapter 7 I will proceed to giving a classification of side effects, introducing marginal side effects as the most important class. In Chapter 8 I will present a case study to see this definition of side effects in action. For this I will use an — again slightly modified — version of Program Algebra [3]. I will end this thesis with some conclusions and some pointers for future work.

2 Preliminaries

2.1 Introduction

In order to say something useful about side effects, we need a formal definition. Such a definition can be found using dynamic logics. The basic idea here is that the update of a program instruction is the change in program state it causes. This allows us to introduce an expected and an actual evaluation of a program instruction. The expected evaluation of a program instruction is the change you would expect a program instruction to make to the program state upon evaluation. This may differ, however, from the actual evaluation, namely when a side effect occurs when actually evaluating the program instruction.
The side effect of a program instruction then is defined as the difference in expected and actual evaluation of a program instruction. To flesh this out in a formal definition, we first need a system that is able to model program states and program instructions. Quantified Dynamic Logic (QDL) is such a system. QDL was developed by Harel [14] and Goldblatt [13]. It can be seen as a first order version of Propositional Dynamic Logic (PDL), which was developed by Pratt in [19, 20]. Much of the overview of both PDL and QDL I will give below is taken from the overview of dynamic logic by Van Eijck and Stokhof [11].

Dynamic logic can be viewed as dealing with the logic of action and the result of action [11]. Although various kinds of actions can be modelled with it, one is of particular interest for us: the actions performed on computers, i.e. computations. In essence, these are actions that change the memory state of a machine, or on a somewhat higher level the program state of a computer program. Regardless of what kinds of actions are modelled, the core of dynamic logic can in many cases be characterized in a similar way via the logic of 'labelled transition systems'. A labelled transition system or LTS over a signature ⟨P, A⟩, with P a set of propositions and A a set of actions, is a triple ⟨S, V, R⟩ where S is a set of states, V : S → P(P) is a valuation function and R = { →a ⊆ S × S | a ∈ A } is a set of labelled transitions (one binary relation →a on S for each label a).

There are various versions of dynamic logic. Before I introduce two of these, I will first describe the setting I will be using in my examples. This setting consists of a toy programming language that is expressive enough to model the working examples I need to discuss side effects.
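The LTS definition above translates directly into a small data structure. The sketch below is a hypothetical instance in Python — the states, propositions and action labels are made up for illustration only:

```python
# An LTS <S, V, R> over a signature <P, A>:
#   S: a set of states;
#   V: the valuation, mapping each state to the set of propositions
#      (from P) that hold in it;
#   R: one binary relation on S per action label a in A,
#      stored as a set of (source, target) pairs.
S = {"s0", "s1"}
V = {"s0": {"p"}, "s1": {"p", "q"}}
R = {"a": {("s0", "s1")}, "b": {("s1", "s1")}}

def successors(state, action):
    """All states reachable from `state` by one `action`-transition."""
    return {t for (s, t) in R[action] if s == state}

print(successors("s0", "a"))  # the 'a'-successors of s0
```

Each label a thus induces exactly one binary relation on S, which is all the relational structure the dynamic logics below interpret their programs over.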
2.2 Toy language

My toy language should be able to handle assignments and steering fragments. The steering fragment can possibly be complex, so our toy language should be able to handle compound formulas: multiple formulas (such as equality tests) connected via logical connectives. In particular, I will be using short-circuit left and (∧❜) and short-circuit left or (∨❜) as connectives. Finally, assignments should be allowed in tests as well: they are, in line with what one would expect, defined to always return true.

As toy language I will first present the WHILE language defined by Van Eijck in [11]. We will see soon enough that we will actually need more functionality than it offers, but it will serve us well in the introduction of PDL, QDL and the illustration of the problems we will run into.

The WHILE language works on natural numbers and defines arithmetic expressions, Boolean expressions and programming commands. Arithmetic expressions a with n ranging over numerals and v over variables from a set V are defined as follows:

  a ::= n | v | a₁ + a₂ | a₁ ∗ a₂ | a₁ −· a₂

Boolean expressions are defined as:

  B ::= ⊤ | a₁ = a₂ | a₁ ≤ a₂ | ¬B | B₁ ∨ B₂

Finally, we define the following programming commands:

  C ::= SKIP | ABORT | v := a | C₁ ; C₂ | IF B THEN C₁ ELSE C₂

For the sake of simplicity, we will postpone the introduction of the WHILE command until after we have presented our modified system in Chapter 3.

The semantics of the arithmetic expressions are fairly self-explanatory. We assume that every numeral n in N has an interpretation I(n) ∈ N and let g be a mapping from V to N.
We then have the following interpretations of the arithmetic expressions, relative to initial valuation or initial program state g:

  ⟦n⟧g := I(n)
  ⟦v⟧g := g(v)
  ⟦a₁ + a₂⟧g := ⟦a₁⟧g + ⟦a₂⟧g
  ⟦a₁ ∗ a₂⟧g := ⟦a₁⟧g ∗ ⟦a₂⟧g
  ⟦a₁ −· a₂⟧g := ⟦a₁⟧g −· ⟦a₂⟧g

The semantics of the Boolean expressions are standard as well, writing T for true and F for false:

  ⟦⊤⟧g := T
  ⟦a₁ = a₂⟧g := T if ⟦a₁⟧g = ⟦a₂⟧g, F otherwise
  ⟦a₁ ≤ a₂⟧g := T if ⟦a₁⟧g ≤ ⟦a₂⟧g, F otherwise
  ⟦¬B⟧g := T if ⟦B⟧g = F, F otherwise
  ⟦B₁ ∨ B₂⟧g := T if ⟦B₁⟧g = T or ⟦B₂⟧g = T, F otherwise

The semantics of the commands of the toy language can be given in various styles. Here I take a look at a variant called structural operational semantics [11]. It is specified using a transition system from pairs of a state and a command, to either a state or again a state and a (new) command.

First I will give the transition for the assignment command. It looks like this, where we write g[v ↦ t] for the valuation which is like valuation g except for the variable v, which has been mapped to t:

  (g, v := t) ⟹ g[v ↦ ⟦t⟧g]

Here we have the pair of state g and the assignment command v := a at the start of the transition. After the transition, we only have a new state left, since the execution of this command has finished in a single step.

The SKIP command does nothing: it does not change the state and it finishes in a single step:

  (g, SKIP) ⟹ g

In structural operational semantics, there are two rules for sequential composition, one for when program C₁ finishes in a single step and one for when it does not:

  if (g, C₁) ⟹ g′, then (g, C₁ ; C₂) ⟹ (g′, C₂)
  if (g, C₁) ⟹ (g′, C₁′), then (g, C₁ ; C₂) ⟹ (g′, C₁′ ; C₂)

Finally, we have the rules for conditional action.
There are two (similar) rules, depending on the outcome of the test:

  (g, IF B THEN C₁ ELSE C₂) ⟹ (g, C₁)   if ⟦B⟧g = T
  (g, IF B THEN C₁ ELSE C₂) ⟹ (g, C₂)   if ⟦B⟧g = F

2.3 Propositional Dynamic Logic

Now that I have introduced the toy language, it is time to take a look at the first version of dynamic logic we are interested in: Propositional Dynamic Logic (PDL in short). The language of PDL consists of formulas φ (based on basic propositions p ∈ P) and programs α (based on basic actions a ∈ A):

  φ ::= ⊤ | p | ¬φ | φ₁ ∨ φ₂ | ⟨α⟩φ
  α ::= a | ?φ | α₁ ; α₂ | α₁ ∪ α₂ | α*

As the name suggests, PDL is based on propositional logic. This means that the usual properties such as associativity and duality are valid and will be used throughout. Furthermore, we can use the following abbreviations:

  ⊥ = ¬⊤
  φ₁ ∧ φ₂ = ¬(¬φ₁ ∨ ¬φ₂)
  φ₁ → φ₂ = ¬φ₁ ∨ φ₂
  φ₁ ↔ φ₂ = (φ₁ → φ₂) ∧ (φ₂ → φ₁)
  [α]φ = ¬⟨α⟩¬φ

The relational composition R₁ ∘ R₂ of binary relations R₁, R₂ on state set S is given by:

  R₁ ∘ R₂ = {(t₁, t₂) ∈ S × S | ∃t₃ ((t₁, t₃) ∈ R₁ ∧ (t₃, t₂) ∈ R₂)}

The n-fold composition Rⁿ of a binary relation R on S with itself is recursively defined as follows, with I the identity relation on S:

  R⁰ = I
  Rⁿ = R ∘ Rⁿ⁻¹

Finally, the reflexive transitive closure of R is given by:

  R* = ⋃_{n ∈ N} Rⁿ

To define the semantics of PDL over basic propositions P and basic actions A, we need the labelled transition system T = ⟨S_T, V_T, R_T⟩ for signature ⟨P, A⟩. The formulas of PDL are interpreted as subsets of S_T, the actions as binary
relations on S_T. This leads to the following interpretations:

  ⟦⊤⟧T = S_T
  ⟦p⟧T = {s ∈ S_T | p ∈ V_T(s)}
  ⟦¬φ⟧T = S_T − ⟦φ⟧T
  ⟦φ₁ ∨ φ₂⟧T = ⟦φ₁⟧T ∪ ⟦φ₂⟧T
  ⟦⟨α⟩φ⟧T = {s ∈ S_T | ∃t ((s, t) ∈ ⟦α⟧T and t ∈ ⟦φ⟧T)}
  ⟦a⟧T = →a (the transition relation for label a in T)
  ⟦?φ⟧T = {(s, s) ∈ S_T × S_T | s ∈ ⟦φ⟧T}
  ⟦α₁ ; α₂⟧T = ⟦α₁⟧T ∘ ⟦α₂⟧T
  ⟦α₁ ∪ α₂⟧T = ⟦α₁⟧T ∪ ⟦α₂⟧T
  ⟦α*⟧T = (⟦α⟧T)*

The programming constructs in our toy language are expressed in PDL as follows:

  SKIP := ?⊤
  ABORT := ?⊥
  IF φ THEN α₁ ELSE α₂ := (?φ ; α₁) ∪ (?¬φ ; α₂)

Although PDL is a powerful logic, it is not enough yet to properly model the toy language we need. The reason for that is the need for assignments. Since assignments change relational structures, the appropriate assertion language is first order predicate logic, and not propositional logic [11]. So instead of PDL, which as the name suggests uses propositional logic, we need a version of dynamic logic that uses first order predicate logic. This is where Quantified Dynamic Logic (QDL in short) comes in.

2.4 Quantified Dynamic Logic

The language of QDL consists of terms t, formulas φ and programs π. For functions f and relational symbols R we have:

  t ::= v | f t₁ ... tₙ
  φ ::= ⊤ | Rt₁ ... tₙ | t₁ = t₂ | ¬φ | φ₁ ∨ φ₂ | ∃v φ | ⟨π⟩φ
  π ::= v := ? | v := t | ?φ | π₁ ; π₂ | π₁ ∪ π₂ | π*

In the case of natural numbers, examples of f are +, ∗ etc. and examples of R are ≤ and ≥. The same abbreviations as in PDL are used, most notably ⊥ = ¬⊤ and [π]φ = ¬⟨π⟩¬φ. The random assignment (v := ?) does not increase the expressive power of QDL [11]. It can, however, be nicely used to express the universal and existential quantifier:

  ∃v φ ↔ ⟨v := ?⟩φ
  ∀v φ ↔ [v := ?]φ

The pair (f, R) is called a first order signature.
A model for such a signature is a structure of the form M = (E_M, f_M, R_M) where E_M is a non-empty set, the f_M are interpretations in E_M for the members of f and the R_M similarly are the interpretations in E_M for the members of R. Now let V be the set of variables of the language. Interpretation of terms in M is defined relative to an initial valuation g : V → E_M:

  ⟦v⟧M,g = g(v)  (QDL1)
  ⟦f t₁ ... tₙ⟧M,g = f_M(⟦t₁⟧M,g, ..., ⟦tₙ⟧M,g)  (QDL2)

Truth in M for formulas is defined by simultaneous recursion, where g ∼v h means that h differs at most from g on the assignment it gives to variable v:

  M ⊨g ⊤ always  (QDL3)
  M ⊨g Rt₁ ... tₙ iff (⟦t₁⟧M,g, ..., ⟦tₙ⟧M,g) ∈ R_M  (QDL4)
  M ⊨g t₁ = t₂ iff ⟦t₁⟧M,g = ⟦t₂⟧M,g  (QDL5)
  M ⊨g ¬φ iff M ⊭g φ  (QDL6)
  M ⊨g φ₁ ∨ φ₂ iff M ⊨g φ₁ or M ⊨g φ₂  (QDL7)
  M ⊨g ∃v φ iff for some h with g ∼v h, M ⊨h φ  (QDL8)
  M ⊨g ⟨π⟩φ iff for some h with g ⟦π⟧M h, M ⊨h φ  (QDL9)

The same goes for the relational meaning in M for programs:

  g ⟦v := t⟧M h iff h = g[v ↦ ⟦t⟧M,g]  (QDL10)
  g ⟦?φ⟧M h iff g = h and M ⊨g φ  (QDL11)
  g ⟦π₁ ; π₂⟧M h iff there is an f with g ⟦π₁⟧M f and f ⟦π₂⟧M h  (QDL12)
  g ⟦π₁ ∪ π₂⟧M h iff g ⟦π₁⟧M h or g ⟦π₂⟧M h  (QDL13)
  g ⟦π*⟧M h iff g = h or g ⟦π ; π*⟧M h  (QDL14)

The above definition makes concatenation (;) an associative operator:

  (π₁ ; π₂) ; π₃ = π₁ ; (π₂ ; π₃)

As a convention, we omit the brackets wherever possible.

Although QDL goes a long way to modelling our toy language and program states, we are not quite there yet. The modifications we have to make come to light when we examine the expressive power of QDL. QDL currently has more expressive power than it has semantics defined for. This problem surfaces when the modality operator is nested within a test, like this:

  ?(⟨v := t⟩⊤)

This is the program ?
φ, with φ = ⟨π⟩ψ, π = (v := t) and ψ = ⊤. As the semantics of QDL are currently defined, the program π will make a change to an initial valuation g if it is interpreted in it, returning valuation h where the assignment g had for variable v will be expressed by t. This is expressed by QDL10. However, the current semantics only assign relational meaning to a test instruction ?φ as long as g = h, as expressed by QDL11.

Another similar example is the following:

  ?(⟨v := v + 1 ; v := v −· 1⟩⊤)

Although this situation should be similar as above, it is not: because the program state gets changed twice, QDL now is able to assign semantics to this program since the program state gets returned to the original state by the second program instruction (and we therefore have g = h).

So, not only can we devise even a very simple correct QDL-program for which there are no semantics defined, we can also give a very similar example for which QDL does define semantics. Not only does that somewhat erratic behavior seem undesirable, but the nature of the examples here presents us with a problem when we are considering side effects. Exactly for the situations in which side effects occur, namely when an instruction in a test causes a change in the program state, there are no semantics defined in QDL. Therefore, I am going to have to modify QDL so that it does define semantics in those situations.

3 Modifying QDL to DLAf

3.1 Introducing DLAf

In this chapter I will present Dynamic Logic with Assignments as Formulas, or DLAf in short, the resulting dynamic logic after making two major modifications to QDL. The modifications I will make are such that DLAf can model the specific kinds of constructions that we are interested in.
This means that, like the name suggests, we have to introduce semantics for assignments in formulas. Furthermore, we will drop or modify some other QDL-instructions that we do not need. Because of that, DLAf evades the problem of QDL mentioned in Section 2.4 of the previous chapter and one other problem I will get back to in Section 3.3.

Before I introduce DLAf, however, I will show the modifications that need to be made to Van Eijck's WHILE language so that it can model the instructions we need. In the WHILE language, Boolean expressions are assumed to cause no state change upon evaluation. However, for our purpose this is inadequate. We want to allow assignments in tests as well and they cause a state change. This warrants the first modification to the WHILE language and its semantics: assignments are allowed in Boolean expressions. The second modification is that the Boolean OR function will be replaced by a short-circuit version:

  B ::= ⊤ | a₁ = a₂ | a₁ ≤ a₂ | ¬B | B₁ ∨❜ B₂ | v := a

The new semantics for Boolean expressions are like the semantics defined by Van Eijck, with as major difference that there are now semantics defined for assignments:

  ⟦v := a⟧g := T

Furthermore, Boolean expressions now might introduce a state change, so every command containing a Boolean expression (which for now only is the IF THEN ELSE command) should account for that. In structural operational semantics, we take a look at how the Boolean expression changes the state and perform the remaining actions in that new state:

  if (g, B) ⟹ g′ and ⟦B⟧g = T, then (g, IF B THEN C₁ ELSE C₂) ⟹ (g′, C₁)

And similar for the case that ⟦B⟧g = F.

As said, there is one more thing that needs to be modified in the language above.
In order to be properly able to reason about side effects, the order in which the tests get executed is important. Because of that, the OR construct in Boolean expressions needs to be replaced by a short-circuit directed version:

  ⟦B₁ ∨❜ B₂⟧g := T  if ⟦B₁⟧g = T
  ⟦B₁ ∨❜ B₂⟧g := T  if ⟦B₁⟧g = F and, for (g, B₁) ⟹ g′, ⟦B₂⟧g′ = T
  ⟦B₁ ∨❜ B₂⟧g := F  otherwise

We will make use of its dual, the short-circuit left and (∧❜), too. It is defined similarly as above. As a convention, from here on ∨❜ and ∧❜ can be used interchangeably in definitions, unless explicitly stated otherwise. Both ∨❜ as well as ∧❜ are associative. We again omit brackets wherever possible.

All we have left to define now is the state change a Boolean can cause. This is defined as follows:

  (g, B) ⟹ g[v ↦ ⟦t⟧g]  if B = (v := t)
  (g, B) ⟹ g            otherwise

Missing in the above WHILE language are the random assignment and the existential quantifier. This is because I have decided to drop them. The reason for that is that they can cause non-deterministic behavior and in this thesis, we are not interested in the (side effects of) non-deterministic programs. In fact it is questionable whether we can say anything about side effects in non-deterministic programs, but I will return to that in my possibilities for future work in Chapter 9. Aside from that, in our context of (imperative) programs, the random assignment is an unusual concept at best. The same goes for the formula ∃v φ.

With those modifications to the toy language in mind, we can take a look at the similar modifications that need to be made to QDL. In the resulting dynamic logic DLAf, we keep the same terms:

  t ::= v | f t₁ ... tₙ

In DLAf we of course drop the random assignment and existential quantifier, too. By dropping them, we lose the quantified character of QDL.
Because of that, the resulting logic is no longer called a quantified dynamic logic. The first major change to QDL, besides the absence of the random assignment and the existential quantifier, is that I replace the ⟨π⟩φ construct with the weaker [v := t]⊤:

φ ::= ⊤ | Rt1 . . . tn | t1 = t2 | ¬φ | φ1 ∨◦ φ2 | φ1 ∧◦ φ2 | [v := t]⊤

This modification explicitly expresses the possibility of assignments in formulas. All other programs, however, are no longer allowed in formulas. Because of this modification we avoid a number of problems that QDL has, while keeping the desired functionality that there should be room for assignments in formulas. I will address these problems in detail in Section 3.3. We have also replaced the ∨ connective with its short-circuit variant (∨◦) and, for convenience, have explicitly introduced its dual (∧◦). We will return to the motivation for this change at the end of this chapter. We also need to replace the QDL-formula associated with this construct (QDL9). The truth in M of the new construct is defined as follows:

M ⊨_g [v := t]⊤   always   (DLA9)

It should come as no surprise that this always succeeds, since assignments always succeed and yield true. Since this formula always succeeds, we have replaced the possibility modality (⟨v := t⟩⊤) with the necessity modality ([v := t]⊤). The reason we keep this formula in the form of a modality at all (and not just v := t) is that formulas of this form can change the initial valuation. This is in sharp contrast to the basic formulas t1 = t2 and Rt1 . . . tn, which do not change the initial valuation and are typically not modalities. Because of that, it would be unintuitive to write the assignment formula as v := t. On a side note: in our toy language we do simply write v := t for the assignment, regardless of where it occurs.
This is because in the world of (imperative) programming, assignments are allowed in steering fragments. We will see below that we are going to accept possible state changes in formulas, in contrast to the original QDL versions. For this we will use a mechanism to determine when a state change happens, that is, a function that returns the program(s) that are encountered when evaluating a formula φ. This function is defined as follows:

Definition 1. The program extraction function Π^M_g : φ → π returns for formula φ the program(s) that are encountered when evaluating the formula, given model M and initial valuation g. It is defined recursively as follows:

Π^M_g(⊤) = ?⊤
Π^M_g(Rt1 . . . tn) = ?⊤
Π^M_g(t1 = t2) = ?⊤
Π^M_g(¬φ) = Π^M_g(φ)
Π^M_g(φ1 ∨◦ φ2) =  Π^M_g(φ1)                   if M ⊨_g φ1
                   Π^M_g(φ1); Π^M_h(φ2)        if M ⊭_g φ1 and g⟦Π^M_g(φ1)⟧^M h
Π^M_g(φ1 ∧◦ φ2) =  Π^M_g(φ1)                   if M ⊭_g φ1
                   Π^M_g(φ1); Π^M_h(φ2)        if M ⊨_g φ1 and g⟦Π^M_g(φ1)⟧^M h
Π^M_g([v := t]⊤) = (v := t)

In the first three cases, no programs are encountered; therefore, the program extraction function returns the empty program (?⊤). The formula ¬φ is transparent, that is, it returns any program encountered in its subformula φ. Because of the short-circuit character of ∨◦ and ∧◦, a case distinction is made here: in the case of ∨◦, φ2 will not be evaluated if φ1 yields true, so only the program(s) encountered in φ1 will be returned. Otherwise, the result is the concatenation of the program(s) encountered in φ1 and φ2. Obviously, for ∧◦ the opposite is the case, and this clause is derivable from the previous one using duality. Finally, if the formula is an assignment, the program equivalent of that assignment is returned.
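Definition 1 can be mirrored by a recursive function. The sketch below is my own encoding (not the thesis'), restricted for simplicity to increment assignments [v := v + n]⊤ and tests of the form variable = numeral; extracted programs are returned as strings. Since Π needs the truth of subformulas to decide the short-circuit cases, the function also threads the truth value and the valuation reached after running the extracted program.

```python
# Sketch of the program extraction function Pi: for a formula phi and
# valuation g, return (program text, truth value of phi, valuation after
# running the extracted program).

def extract(phi, g):
    op = phi[0]
    if op == "eq":                       # t1 = t2: extracts the empty program
        return "?T", g[phi[1]] == phi[2], g
    if op == "not":                      # negation is transparent
        p, v, h = extract(phi[1], g)
        return p, (not v), h
    if op == "and":                      # short-circuit: phi2 only if phi1
        p1, v1, h = extract(phi[1], g)   # holds, and then in valuation h
        if not v1:
            return p1, False, h
        p2, v2, h2 = extract(phi[2], h)
        return p1 + "; " + p2, v2, h2
    if op == "or":                       # dually for the disjunction
        p1, v1, h = extract(phi[1], g)
        if v1:
            return p1, True, h
        p2, v2, h2 = extract(phi[2], h)
        return p1 + "; " + p2, v2, h2
    if op == "assign":                   # [v := v + n]T: always true,
        v, n = phi[1], phi[2]            # extracts the assignment itself
        h = dict(g)
        h[v] = h[v] + n
        return f"{v} := {v} + {n}", True, h
    raise ValueError(op)

# For [x := x + 1]T ∧◦ (x = 2) in g(x) = 1 this yields the program
# "x := x + 1; ?T", mirroring Pi^M_g = (x := x + 1); ?T in the text.
```

This directly reproduces the extraction computed in the worked example of Section 3.2.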
Because the evaluation of a formula can now cause a state change, the original definition of the truth in M of ∨◦ (QDL7) is no longer valid. In case φ1 contains an assignment, φ2 must be evaluated in a different valuation, namely the one resulting after evaluating φ1 in the initial valuation:

M ⊨_g φ1 ∨◦ φ2   iff, for g⟦Π^M_g(φ1)⟧^M h:   M ⊨_g φ1 or M ⊨_h φ2   (DLA7a)

Since we have added ∧◦ to formulas as well, we also have to explicitly define the truth in M of ∧◦, which is similar to the updated definition of ∨◦:

M ⊨_g φ1 ∧◦ φ2   iff, for g⟦Π^M_g(φ1)⟧^M h:   M ⊨_g φ1 and M ⊨_h φ2   (DLA7b)

Although ∨◦ and ∧◦ use short-circuit evaluation, we do not have to define them explicitly as such above, because we will make sure, via the program extraction function and an updated version of QDL11 (see below), that the valuation does not change as a result of φ2 when M ⊨_g φ1 is true (in the case of ∨◦) or false (in the case of ∧◦). We can now turn our attention to programs in DLAf. Besides the absence of the random assignment, what a program π can be does not change:

π ::= v := t | ?φ | π1; π2 | π1 ∪ π2 | π*

To remedy the problem that more things can be expressed in QDL than there are semantics for, we need, as mentioned earlier, to accept that a state change can occur when evaluating a program containing formulas. In the case of QDL, that is only the test instruction, given semantics earlier in QDL11. So, as second major change, we need to replace QDL11 by:

g⟦?φ⟧^M h   iff   M ⊨_g φ and g = h                   if Π^M_g(φ) = ?⊤
                  M ⊨_g φ and g⟦Π^M_g(φ)⟧^M h         otherwise      (DLA11)

The case distinction here is in place to avoid looping behavior when evaluating g⟦?⊤⟧h. The definitions above make extensive use of the empty program (?⊤).
In what follows, it will be handy to know that the empty program is truly empty. In particular, we would like to have π; ?⊤ = π and ?⊤; π = π. I will prove that below.

Lemma 3.1.1. For any program π, initial valuation g, output valuation h and model M:

g⟦π; ?⊤⟧^M h   iff   g⟦π⟧^M h

Proof. The proof follows from the above defined QDL-axioms:

g⟦π; ?⊤⟧^M h   iff   ∃f: g⟦π⟧^M f and f⟦?⊤⟧h

Since we have f⟦?⊤⟧h iff f = h and M ⊨_f ⊤, and since the latter is always true, we have g⟦π; ?⊤⟧^M h iff g⟦π⟧^M h.

Lemma 3.1.2. For any program π, initial valuation g, output valuation h and model M:

g⟦?⊤; π⟧^M h   iff   g⟦π⟧^M h

Proof. Similar to the proof of Lemma 3.1.1.

The change to QDL11 has remedied the problem that there are expressions in QDL for which no semantics are defined. Of course I made a second major change as well, namely replacing ⟨π⟩φ by [v := t]⊤. The reason for that will come to light as soon as I reintroduce the WHILE command in Section 3.3. Before I do that, however, I will first discuss a working example to provide some more insight into the inner workings of DLAf.

3.2 A working example

In this section I will present a working example to illustrate how DLAf works. I will use the following program, presented here in our toy language:

x := 1; IF (x := x + 1 ∧◦ x = 2) THEN y := 1 ELSE y := 2

In DLAf, this translates to:

x := 1; (?([x := x + 1]⊤ ∧◦ x = 2); y := 1) ∪ (?¬([x := x + 1]⊤ ∧◦ x = 2); y := 2)

The valuations g, h, . . . are defined for all variables v ∈ V, i.e. they are total functions. Usually we are only interested in a small number of variables, e.g.
x and y, in which case we talk about a valuation g such that g(x) = ⟦t⟧^M_g, g(y) = ⟦t′⟧^M_g, or, if valuation h is an update of valuation g, h = g[x ↦ ⟦t⟧^M_g, y ↦ ⟦t′⟧^M_g] (which is a shorthand for g[x ↦ ⟦t⟧^M_g][y ↦ ⟦t′⟧^M_g]). In all examples we discuss, we take for M the model of the natural numbers and use numerals to denote its elements. Since we are working on natural numbers, as constants we have n ranging over numerals, as functions we have +, ∗ and −̇, and as extra relation we have ≤. Our model M contains those constants, functions and relations. Assume we have an initial valuation g that sets x and y to 0: g(x) = g(y) = 0. We will now first show how the program in our toy language gets evaluated using the structural operational semantics we provided in Chapter 2:

(g, x := 1; IF (x := x + 1 ∧◦ x = 2) THEN y := 1 ELSE y := 2)
   ⟹ (g[x ↦ 1], IF (x := x + 1 ∧◦ x = 2) THEN y := 1 ELSE y := 2)

We now need to know whether ⟦(x := x + 1 ∧◦ x = 2)⟧_{g[x ↦ 1]} = T. We can easily see that it is, and that it furthermore updates the valuation again by incrementing x by 1. Thus we get as valuation g[x ↦ 2], and we can finish our evaluation as follows:

(g[x ↦ 2], y := 1) ⟹ g[x ↦ 2, y ↦ 1]

Having seen how our example program evaluates using the semantics for our toy language, we can turn our attention to the evaluation using DLAf. We need to ask ourselves whether g⟦π⟧^M h exists (with π the program above), that is, whether there is a valuation h that models the state of the program after being executed on initial valuation g. Schematically, π can be broken down as follows:

π ::= π0; π1
π0 ::= x := 1
π1 ::= (?φ0; π2) ∪ (?
¬φ0; π3)
π2 ::= y := 1
π3 ::= y := 2
φ0 ::= φ1 ∧◦ φ2
φ1 ::= [x := x + 1]⊤
φ2 ::= x = 2

The breakdown above paves the way to evaluate g⟦π⟧^M h using the DLAf-axioms given in the previous sections. We start by applying QDL12:

g⟦π⟧^M h = g⟦π0; π1⟧^M h   iff   ∃f such that g⟦π0⟧^M f and f⟦π1⟧^M h

We find f by evaluating g⟦x := 1⟧^M f using QDL10 and QDL1:

g⟦x := 1⟧^M f   iff   f = g[x ↦ ⟦1⟧^M_g] = g[x ↦ 1]

Now we need to evaluate f⟦(?φ0; π2) ∪ (?¬φ0; π3)⟧^M h. Using QDL13, we get:

f⟦(?φ0; π2) ∪ (?¬φ0; π3)⟧^M h   iff   f⟦?φ0; π2⟧^M h or f⟦?¬φ0; π3⟧^M h

First we turn our attention to f⟦?φ0; π2⟧^M h. Using QDL12 again, we get: ∃d such that f⟦?φ0⟧^M d and d⟦π2⟧^M h. To evaluate the former, we need to use our own rule DLA11. Here we need the program extraction function Π for the first time:

f⟦?φ0⟧^M d = f⟦?([x := x + 1]⊤ ∧◦ (x = 2))⟧^M d
   iff   M ⊨_f [x := x + 1]⊤ ∧◦ (x = 2)   and   f⟦Π^M_f([x := x + 1]⊤ ∧◦ (x = 2))⟧^M d

We will first have a look at the program extraction function Π. Below we see how it calculates the programs that are encountered while evaluating the formula [x := x + 1]⊤ ∧◦ (x = 2); note that, per Definition 1, the second subformula is extracted in the valuation f′ = f[x ↦ 2] that results from running the program extracted from the first:

Π^M_f([x := x + 1]⊤ ∧◦ (x = 2)) = Π^M_f([x := x + 1]⊤); Π^M_{f′}(x = 2) = (x := x + 1); ?⊤

Therefore, we have:

f⟦?φ0⟧^M d = f⟦?([x := x + 1]⊤ ∧◦ (x = 2))⟧^M d
   iff   M ⊨_f [x := x + 1]⊤ ∧◦ (x = 2)   and   f⟦x := x + 1; ?⊤⟧^M d
   iff   M ⊨_f [x := x + 1]⊤ ∧◦ (x = 2)   and   f⟦x := x + 1⟧^M d

where the last step uses Lemma 3.1.1. The first of these two conditions, M ⊨_f [x := x + 1]⊤ ∧◦ (x = 2), nicely shows why we need updated versions of ∧◦ and ∨◦. As we already noticed, the test φ0 contains a program (the assignment x := x + 1) and therefore the state (valuation) changes. As we will see, this changes the outcome of the second part of the test.
We need DLA7b and our program extraction function Π here:

M ⊨_f [x := x + 1]⊤ ∧◦ (x = 2)   iff, for f⟦x := x + 1⟧^M c:   M ⊨_f [x := x + 1]⊤ and M ⊨_c (x = 2)

M ⊨_f [x := x + 1]⊤ is defined by DLA9 to be always true. Applying QDL10 to f⟦x := x + 1⟧^M c gives us c = f[x ↦ 2]. We can then apply QDL5 to M ⊨_c (x = 2):

M ⊨_c (x = 2)   iff   ⟦x⟧^M_c = ⟦2⟧^M_c

We can easily see (using QDL1) that ⟦x⟧^M_c = c(x) = 2 = ⟦2⟧^M_c. Therefore, we have M ⊨_c (x = 2) and thus M ⊨_f [x := x + 1]⊤ ∧◦ (x = 2). We now need to finish the evaluation of DLA11 by evaluating f⟦x := x + 1⟧^M d. This can again be done using QDL10 and gives us d = f[x ↦ 2]. Because the test φ0 has now succeeded, we can continue to the evaluation of d⟦π2⟧^M h = d⟦y := 1⟧^M h. This gives us h = d[y ↦ 1]. Having already established that ?φ0 succeeds, we also know that ?¬φ0 will not succeed. Therefore, we are done with the evaluation of this program π, getting that g⟦π⟧^M h with g(x) = g(y) = 0 is indeed possible, with h = g[x ↦ 2, y ↦ 1].

3.3 Re-introducing WHILE

In Section 2.2 I introduced our toy language, which was like Van Eijck's WHILE language, but without a WHILE (or: guarded iteration) programming command. Now that we have seen DLAf in action in our simplified toy language, it is time to re-introduce the WHILE command. After doing that, we will see that the re-introduction of WHILE raises some more issues that warrant the second modification I made to QDL, namely replacing the formula ⟨π⟩φ with [v := t]⊤.

3.3.1 The WHILE command

The WHILE command takes the form WHILE B DO C. The complete list of programming commands in our toy language then is:

C ::= SKIP | ABORT | v := a | C1; C2 | IF B THEN C1 ELSE C2 | WHILE B DO C

In structural operational semantics, the semantics for the guarded iteration are as follows.
There are two options. If the guard (B) is not satisfied, command C is not executed; instead, the command finishes, with as only (possible) change the change that the evaluation of guard B has made to the state:

(g, B) ⟹ g′
──────────────────────────   if ⟦B⟧_g = F
(g, WHILE B DO C) ⟹ g′

If the guard is satisfied, the rule becomes a little more complicated, because command C gets executed in a state which has possibly been changed by guard B. Like before, we have two cases: one in which C finishes in a single step and one in which it does not:

(g, B) ⟹ g′    (g′, C) ⟹ g′′
──────────────────────────────────────────   if ⟦B⟧_g = T
(g, WHILE B DO C) ⟹ (g′′, WHILE B DO C)

(g, B) ⟹ g′    (g′, C) ⟹ (g′′, C′)
──────────────────────────────────────────────   if ⟦B⟧_g = T
(g, WHILE B DO C) ⟹ (g′′, C′; WHILE B DO C)

3.3.2 WHILE in DLAf

In PDL, and therefore in QDL and DLAf, WHILE is expressed as follows:

WHILE φ DO α := (?φ; α)*; ?¬φ

Thanks to the updated rule for ?φ (DLA11), DLAf is able to handle programs with WHILE perfectly. To see how this works, consider the following example:

x := 0; y := 0; WHILE (x := x + 1 ∧◦ x ≤ 2) DO y := y + 1

In DLAf, this translates to:

x := 0; y := 0; (?([x := x + 1]⊤ ∧◦ x ≤ 2); y := y + 1)*; ?¬([x := x + 1]⊤ ∧◦ x ≤ 2)

After the first two commands, we have g(x) = g(y) = 0. We now need to look at how the ∗ operator is evaluated. QDL14 states that g⟦π*⟧^M h iff g = h or g⟦π; π*⟧^M h. This means that π is either not executed at all (in which case g = h) or executed at least once. In our case, π = ?([x := x + 1]⊤ ∧◦ x ≤ 2); y := y + 1. The first option is that π is not executed at all, in which case g = h. However, under this valuation h there is no possible valuation h′ after evaluation of the next program command (?¬([x := x + 1]⊤ ∧◦ x ≤ 2)). In other words, h⟦?¬([x := x + 1]⊤ ∧◦ x ≤ 2)⟧^M h′ does not hold for any h′.
Therefore, we have to turn our attention to the other option given by the ∗ command, which is g⟦π; π*⟧^M h. For the evaluation of this we first need QDL12, which tells us that there has to be an f such that g⟦π⟧^M f and f⟦π*⟧^M h. In Section 3.2 we have already seen how g⟦π⟧^M f evaluates: it will succeed and result in a new valuation f = g[x ↦ 1, y ↦ 1]. Now we need to evaluate π* again, but this time with a different initial valuation (namely f). This loop continues until we arrive at a valuation f′ for which the final program command (the test ?¬([x := x + 1]⊤ ∧◦ x ≤ 2)) will succeed. In our example, this happens in the second iteration, when we have f′ = g[x ↦ 2, y ↦ 2], giving us a resulting valuation h = g[x ↦ 3, y ↦ 2], which is exactly what we would expect given this WHILE loop.

3.3.3 Looping behavior and abnormal termination

An interesting problem regarding the WHILE language and QDL is that WHILE ⊤ DO SKIP (looping behavior) and ABORT (abnormal termination) are indistinguishable. In some semantics, such as natural semantics, this is also the case [11]. In structural operational semantics, however, there is an (infinite) derivation sequence for WHILE ⊤ DO SKIP, whereas there is no derivation sequence for ABORT. Using the standard lemma that ⟨π1; π2⟩φ ↔ ⟨π1⟩⟨π2⟩φ (cf. [15, 11]) we can prove the equivalence of WHILE ⊤ DO SKIP and ABORT in QDL. To do so, we need to ask whether ⟨(?⊤; ?⊤)*; ?⊥⟩φ ↔ ⟨?⊥⟩φ.

Theorem 3.3.1. In QDL, looping behavior and abnormal termination are equivalent: for any φ,

⟨(?⊤; ?⊤)*; ?⊥⟩φ ↔ ⟨?⊥⟩φ

Proof. We will work out the left part first:

⟨(?⊤; ?⊤)*; ?⊥⟩φ ↔ ⟨(?⊤; ?⊤)*⟩⟨?⊥⟩φ

So we have ⟨(?⊤; ?⊤)*⟩ψ with ψ = ⟨?⊥⟩φ.
The truth of the former in an arbitrary model M and for an initial valuation g is defined as follows:

M ⊨_g ⟨(?⊤; ?⊤)*⟩ψ   iff   for some h with g⟦(?⊤; ?⊤)*⟧^M h, M ⊨_h ψ

Furthermore, we have:

g⟦(?⊤; ?⊤)*⟧^M h   iff   g = h or g⟦(?⊤; ?⊤); (?⊤; ?⊤)*⟧^M h

We have seen in the previous section how such a formula evaluates; after one iteration we will have g⟦?⊤; ?⊤⟧^M f, with f = h, as one of the options the ∗ command gives us. Finally, we have:

g⟦?⊤; ?⊤⟧^M h = g⟦?⊤⟧^M h   iff   g = h and M ⊨_g ⊤

This is always the case, so indeed there is an h such that g⟦(?⊤; ?⊤)*⟧^M h (namely h = g). Therefore, determining the truth of M ⊨_g ⟨(?⊤; ?⊤)*⟩ψ comes down to determining the truth of M ⊨_g ψ, which is M ⊨_g ⟨?⊥⟩φ. Since that is exactly the right-hand side of the equation we started out with, we indeed have that

⟨(?⊤; ?⊤)*; ?⊥⟩φ ↔ ⟨?⊥⟩φ

Not being able to distinguish between looping behavior and abnormal termination seems undesirable. It is because of this that I have decided to drop the ⟨π⟩φ formulas and replace them by the weaker, but less problematic, formulas [v := t]⊤. Looping behavior can now no longer be proven to be equivalent to abnormal termination. Furthermore, we avoid problems with formulas that require infinite evaluations, such as ⟨(?⊤)*; ?⊥⟩φ. Because looping behavior and abnormal termination can no longer be proven equal in DLAf, the relational meaning of DLAf-instructions now is an instance of the structural operational semantics we defined for our toy language, with the valuations as 'states'. Naturally, this is what we want, since it expresses that DLAf is a fully defined system that has the behavior we would expect given our toy language.
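The WHILE example from Section 3.3.2 can be replayed with a small interpreter sketch (my own illustration, not part of the thesis). The point it makes concrete is that the side-effecting guard is also evaluated on the final, failing test, which is why x overshoots the loop bound.

```python
# Sketch: running WHILE (x := x + 1 ∧◦ x <= 2) DO y := y + 1, where the
# guard changes the state on every evaluation.

def eval_guard(g):
    """The guard [x := x + 1]T ∧◦ x <= 2: the assignment always succeeds
    and updates x; the comparison is then checked in the new state."""
    h = dict(g)
    h["x"] = h["x"] + 1
    return h["x"] <= 2, h

def run_while(g):
    """(?phi; y := y + 1)*; ?¬phi — loop while the guard holds; the guard's
    side effect also happens on the closing, failing test ?¬phi."""
    value, g = eval_guard(g)
    while value:
        g = dict(g, y=g["y"] + 1)      # loop body: y := y + 1
        value, g = eval_guard(g)
    return g

# From g(x) = g(y) = 0 this ends with x = 3 and y = 2: the guard increments
# x three times, but the body runs only twice — matching h = g[x ↦ 3, y ↦ 2].
```

The final valuation agrees with the DLAf evaluation carried out above.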
This modification also underlines the usefulness of the switch to short-circuit versions of the logical connectives (∨◦ and its dual ∧◦). In QDL, the steering fragment of the program IF x := x + 1 AND x == 2 THEN a ELSE b can be expressed using ?(⟨x := x + 1⟩(x = 2)). In DLAf, such an expression is no longer allowed. However, having ∧◦ and ∨◦ in DLAf allows us to provide a perhaps even more natural translation of this program, namely ?([x := x + 1]⊤ ∧◦ x = 2). The full-evaluation versions of these logical connectives (∧ and ∨) would not do, because the order of the program instructions is important here. As we will see in Chapter 4, we do not strictly need ∧◦ and ∨◦ in DLAf, but the fact that they provide natural translations of this kind, together with the fact that having logical connectives defined is standard in dynamic logic, is reason enough to keep them.

4 Terminology

In this chapter I will present the terminology I will be using in the remainder of this thesis. In particular, I will present a more fine-grained breakdown of the definitions for formulas, instructions and programs. Furthermore, I will introduce a property of formulas called normal form and use that to prove yet another property of DLAf regarding complex steering fragments. Next, I will introduce a subclass of programs called deterministic programs. Finally, I will introduce a property of deterministic programs called canonical form.

4.1 Formulas, instructions and programs

In this section I will present the more fine-grained breakdown of the definitions for formulas, instructions and programs.

Definition 2. Formulas can either be primitive or compound formulas. Primitive formulas are written as ϕ and defined as follows:

ϕ ::= ⊤ | Rt1 . . .
tn | t1 = t2 | [v := t]⊤

Compound formulas are written as φ and defined similarly, but with negation and short-circuit disjunction and conjunction as additions:

φ ::= ⊤ | Rt1 . . . tn | t1 = t2 | ¬φ | φ1 ∨◦ φ2 | φ1 ∧◦ φ2 | [v := t]⊤

Definition 3. Instructions can either be single instructions or basic instructions. Single instructions are written as ρ and defined as follows:

ρ ::= (v := t) | ?ϕ

Basic instructions are written as β and have a slightly less restrictive definition regarding tests:

β ::= (v := t) | ?φ

This means that the single instructions form a subset of the basic instructions: ρ ⊆ β.

Definition 4. Programs are written as π and consist of one or more basic instructions joined by either concatenation (;), union (∪) or repetition (∗):

π ::= β | π1; π2 | π1 ∪ π2 | π*

4.2 Normal forms of formulas

In this section I will introduce a property of formulas called normal form and use that to prove a property of DLAf regarding complex steering fragments. I will start with the former.

Definition 5. A formula is said to be in its normal form iff all negations (if any) that occur in the formula are on the atomic level, that is, if the negations only have primitive formulas as their argument (i.e. are of the form ¬ϕ).

Proposition 1. Any formula can be rewritten into its normal form such that its relational meaning is preserved.

Proof. Left-sequential versions of De Morgan's laws are valid for formulas (we come back to this point in Chapter 5): given model M and initial valuation g, we prove that

M ⊨_g ¬(φ1 ∧◦ φ2)   ⟺   M ⊨_g ¬φ1 ∨◦ ¬φ2

For ⟹, first assume that M ⊨_g φ1; then M ⊭_h φ2 for g⟦Π^M_g(φ1)⟧^M h, thus M ⊨_h ¬φ2, and thus M ⊨_g ¬φ1 ∨◦ ¬φ2. If M ⊭_g φ1, then M ⊨_g ¬φ1, and thus also M ⊨_g ¬φ1 ∨◦ ¬φ2.
In order to show ⇐, first assume that M ⊨_g ¬φ1; then M ⊭_g φ1 ∧◦ φ2, thus M ⊨_g ¬(φ1 ∧◦ φ2). If M ⊨_g φ1, then M ⊨_h ¬φ2 for g⟦Π^M_g(¬φ1)⟧^M h, so M ⊭_g φ1 ∧◦ φ2, and thus M ⊨_g ¬(φ1 ∧◦ φ2). The dual statement can be proved just as easily. The set of side effects caused by the evaluation of a formula does not change under rewritings of this kind.

Using normal forms, we can derive an interesting property of DLAf:

Proposition 2. Let φ be a formula. The program ?φ can be rewritten to a form in which only primitive formulas or negations thereof occur in tests, such that its relational meaning is preserved.

Proof. Let φn be a normal form of φ and assume φn is not a primitive formula or the negation thereof. Then φn is either of the form φ1 ∧◦ φ2 or φ1 ∨◦ φ2. For conjunctions, it is easy to see that the program ?φ can be rewritten as meant in the proposition:

?(φ1 ∧◦ φ2) = ?φ1; ?φ2

We can assume by induction that φ1 and φ2 have been rewritten into a form in which only primitive formulas and negations thereof occur, too. We now need to prove that these programs have the same relational meaning, that is, given model M and initial valuation g:

g⟦?(φ1 ∧◦ φ2)⟧^M h   iff   g⟦?φ1; ?φ2⟧^M h

If M ⊭_g φ1, then h does not exist in either case. If, for g⟦Π^M_g(φ1)⟧^M f, M ⊭_f φ2, then h does not exist in either case. Otherwise, on the left-hand side we get h by applying DLA11:

g⟦Π^M_g(φ1 ∧◦ φ2)⟧^M h

which, by the definition of the program extraction function, since M ⊨_g φ1, equals

g⟦Π^M_g(φ1); Π^M_f(φ2)⟧^M h

On the right-hand side, we get h by first applying QDL12, then applying DLA11 twice and finally applying QDL12 again:

g⟦?φ1; ?φ2⟧^M h   iff   ∃f s.th. g⟦?φ1⟧^M f and f⟦?φ2⟧^M h   iff   ∃f s.th.
g⟦Π^M_g(φ1)⟧^M f and f⟦Π^M_f(φ2)⟧^M h   iff   g⟦Π^M_g(φ1); Π^M_f(φ2)⟧^M h

For disjunctions, the rewritten version is slightly more complex:

?(φ1 ∨◦ φ2) = ?φ1 ∪ (?¬φ1; ?φ2)

We can prove that, given model M and initial valuation g,

g⟦?(φ1 ∨◦ φ2)⟧^M h   iff   g⟦?φ1 ∪ (?¬φ1; ?φ2)⟧^M h

in a similar fashion as above. If M ⊨_g φ1, then in both cases h is obtained by

g⟦Π^M_g(φ1)⟧^M h

If M ⊭_g φ1, then, if for g⟦Π^M_g(φ1)⟧^M f we have M ⊭_f φ2, h does not exist in either case. If M ⊨_f φ2, then on the left-hand side h is obtained via

g⟦Π^M_g(φ1 ∨◦ φ2)⟧^M h = g⟦Π^M_g(φ1); Π^M_f(φ2)⟧^M h

and on the right-hand side h is obtained by

g⟦?¬φ1; ?φ2⟧^M h   iff   ∃f s.th. g⟦?¬φ1⟧^M f and f⟦?φ2⟧^M h
                    iff   ∃f s.th. g⟦Π^M_g(φ1)⟧^M f and f⟦Π^M_f(φ2)⟧^M h
                    iff   g⟦Π^M_g(φ1); Π^M_f(φ2)⟧^M h

On a side note, a similar result can be obtained for QDL. There, the program ?(φ1 ∨ φ2) can be rewritten to

(?φ1; ?φ2) ∪ (?φ1; ?¬φ2) ∪ (?¬φ1; ?φ2)

The differences with the DLAf version of the same rule are there because QDL uses full evaluation: φ2 has to be evaluated even when φ1 is true, although φ2 then no longer has to be true.

4.3 Deterministic programs and canonical forms

Defining side effects for entire programs can be complicated. This is because two composition operators, namely union and repetition, can be non-deterministic. We are, however, not interested in (the side effects of) non-deterministic programs, even though they can be expressed in DLAf.¹ To be exact, we are only interested in if . . . then . . . else constructions and while constructions, which in DLAf are expressed as follows:

IF φ THEN π1 ELSE π2 := (?φ; π1) ∪ (?¬φ; π2)
WHILE φ DO π := (?φ; π)*; ?
¬φ

To formally specify this, we introduce deterministic programs, which, cf. [14, 11], are defined as follows:

Definition 6. A deterministic program dπ is a DLAf-program in one of the following forms:

dπ ::= β | dπ1; dπ2 | (?φ; dπ1) ∪ (?¬φ; dπ2) | ((?φ; dπ)*; ?¬φ)

There are two interesting properties of deterministic programs. The first regards programs of the form (?φ; π)*; ?¬φ. In this case there will only ever be exactly one situation in which the program gets evaluated.² After all, there is exactly one number of repetitions after which the test ?φ, having succeeded every time before, will fail the next time it is evaluated. We can formalize this intuition in the following proposition:

Proposition 3. Let dπ = (?φ; dπ0)*; ?¬φ be a deterministic program. Let model M and initial valuation g be given, and let h be the valuation such that g⟦dπ⟧^M h. There is a unique n ∈ N0 such that

g⟦dπ⟧^M h   iff   g⟦(?φ; dπ0)^n; ?¬φ⟧^M h

where (dπ1)^0; dπ2 = dπ2 and (dπ1)^{n+1}; dπ2 = dπ1; (dπ1)^n; dπ2.

Proof. We first prove that there is at least one n ∈ N0 for which the above equation holds. Assume such an n does not exist. This means that ?¬φ can never be evaluated, which contradicts our requirement that there is a valuation h such that g⟦dπ⟧^M h. Next, we have to prove that there is at most one such n. Let gi be the valuation such that g⟦(?φ; dπ0)^i⟧^M gi. By writing this out and then applying DLA11, we know that for i < n the test ?φ succeeds in gi, i.e. M ⊨_{gi} φ. Therefore, for a valuation gi with i < n we cannot evaluate ?¬φ, and thus there is no i < n for which the above equivalence holds. We know that for i = n we have M ⊨_{gn} ¬φ. This automatically means that for i > n the above equivalence will not hold either, since we cannot satisfy ?φ. Thus, we have exactly one n.
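Proposition 3 can be put in executable form with a sketch of my own (names and encoding are not the thesis'): guards are functions from valuations to (truth value, new valuation), bodies are functions from valuations to valuations, and we count how often the loop of a deterministic program (?φ; dπ0)*; ?¬φ runs before the closing test succeeds.

```python
# Sketch: find the unique n of Proposition 3 for a deterministic loop
# (?phi; dpi0)*; ?¬phi, by repeatedly evaluating the (side-effecting) guard.

def unroll_count(guard, body, g, limit=1000):
    """Return (n, h): n is the unique iteration count, h the valuation after
    the closing test ?¬phi (whose side effects also count)."""
    n = 0
    value, g = guard(g)                  # test ?phi (may change the state)
    while value:
        if n >= limit:                   # infinite loops have no evaluation
            raise RuntimeError("no terminating evaluation")
        g = body(g)
        value, g = guard(g)              # the failing test is the closing ?¬phi
        n += 1
    return n, g

# The loop of Section 3.3.2: guard [x := x + 1]T ∧◦ x <= 2, body y := y + 1.
def guard(g):
    h = dict(g, x=g["x"] + 1)
    return h["x"] <= 2, h

def body(g):
    return dict(g, y=g["y"] + 1)
```

For that loop, starting from g(x) = g(y) = 0, the unique n is 2, and the final valuation matches the h = g[x ↦ 3, y ↦ 2] computed in Section 3.3.2; this n is exactly the exponent used to build the canonical form of Proposition 4.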
¹ In fact, as we already mentioned in Chapter 2, we can ask ourselves whether it is reasonable to talk about side effects in non-deterministic programs. We have left this question for future work.
² That is, unless we are dealing with an infinite loop; but in that case the program has no evaluation, and we are not interested in those.

The second interesting property of a deterministic program is the following:

Definition 7. A deterministic program dπ is said to be in canonical form if only concatenations occur as composition operators.

This property is going to be very useful, because we can prove that, given an initial valuation g, any program has a unique canonical form that has the same behavior:

Proposition 4. Let dπ be a deterministic program. Let model M and initial valuation g be given, and let h be the valuation such that g⟦dπ⟧^M h. There is a unique deterministic program dπ′ in canonical form such that

g⟦dπ⟧^M h   iff   g⟦dπ′⟧^M h

and dπ′ executes the same basic instructions and the same number of basic instructions as dπ.

Proof. If dπ = (?φ; dπ1) ∪ (?¬φ; dπ2), then dπ′ depends on the truth of φ:

dπ′ =  ?φ; dπ′1    if M ⊨_g φ
       ?¬φ; dπ′2   otherwise

By induction we can assume that dπ′1 and dπ′2 are the canonical forms of dπ1 and dπ2 (if these are not empty), respectively. The truth of g⟦dπ⟧^M h iff g⟦dπ′⟧^M h follows directly from QDL13 in this case. If dπ = (?φ; dπ1)*; ?¬φ, we need to use n as meant in Proposition 3:

dπ′ = (?φ; dπ′1)^n; ?¬φ

Once again we can assume by induction that dπ′1 is the canonical form of dπ1 (once again, if dπ1 is not empty). The truth of g⟦dπ⟧^M h iff g⟦dπ′⟧^M h now follows directly from Proposition 3. It is easy to see that in both these cases dπ′ executes the same basic instructions as dπ.
It is also easy to see that dπ′ is unique: we cannot add instructions using union or repetition, because then dπ′ would no longer be in canonical form, and we cannot add instructions using concatenation, because those instructions would be executed, which violates the requirement that dπ′ only executes the same basic instructions as dπ. We cannot alter or remove instructions in dπ′ either, because all instructions in dπ′ get executed, so altering or removing one would also violate said requirement.

5 The logic of formulas in DLAf

Now that we have defined DLAf and shown how it works, it is time to examine the logic of formulas a little closer. As we have mentioned before, we are making use of short-circuit versions of the ∧ and ∨ connectives, i.e. connectives that prescribe short-circuit evaluation. In [5], different flavours of short-circuit logics (logics that can be defined by short-circuit evaluation) are identified. In this chapter we will give a short overview of these and present the short-circuit logic that underlies the formulas in DLAf, which turns out to be repetition-proof short-circuit logic (RPSCL).

5.1 Proposition algebra

Short-circuit logic can be defined using proposition algebra, an algebra that has short-circuit evaluation as its natural semantics. Proposition algebra was introduced by Bergstra and Ponse in [4] and makes use of Hoare's ternary connective x ◁ y ▷ z, which is called the conditional [16]. A more common expression for this conditional is if y then x else z, with x, y and z ranging over propositional statements (including propositional variables). Throughout this thesis, we will use atom as a shorthand for propositional variable.
Using a signature which includes this conditional, ΣCP = {⊤, ⊥, _ ⊳ _ ⊲ _}, the following set CP of axioms for proposition algebra can be defined:

x ⊳ ⊤ ⊲ y = x                                          (CP1)
x ⊳ ⊥ ⊲ y = y                                          (CP2)
⊤ ⊳ x ⊲ ⊥ = x                                          (CP3)
x ⊳ (y ⊳ z ⊲ u) ⊲ v = (x ⊳ y ⊲ v) ⊳ z ⊲ (x ⊳ u ⊲ v)    (CP4)

In the earlier mentioned paper [4], varieties of so-called valuation algebras are defined that serve the interpretation of a logic over ΣCP by means of short-circuit evaluation. The evaluation of the conditional t1 ⊳ t2 ⊲ t3 is then as follows: first t2 gets evaluated. That yields either T, in which case the final evaluation result is determined by the evaluation of t1, or F, in which case the same goes for t3.

All varieties mentioned in [4] satisfy the above four axioms. The most distinguishing variety is called the variety of free reactive valuations and is axiomatized by exactly the four axioms above (further referred to as conditional propositions (CP)) and nothing more. The associated valuation congruence is called free valuation congruence and written as =fr. Thus, for each pair of closed terms¹ t, t′ over ΣCP, we have

CP ⊢ t = t′  ⟺  t =fr t′

Using the conditional, we can define negation (¬), left-sequential conjunction (∧❜) and left-sequential disjunction (∨❜) as follows:

¬x = ⊥ ⊳ x ⊲ ⊤
x ∧❜ y = y ⊳ x ⊲ ⊥
x ∨❜ y = ⊤ ⊳ x ⊲ y

The connectives defined above are associative and each other's dual. In CP, it is not possible to express the conditional x ⊳ y ⊲ z using any set of Boolean connectives, such as ∧❜ and ∨❜ [4].

By adding axioms to CP, it can be strengthened. The signature and axioms of one such extension are called memorizing CP. We write CPmem for this extension, which is obtained by adding the axiom CPmem to CP.
This axiom expresses that the first evaluation value of y is memorized:

x ⊳ y ⊲ (z ⊳ u ⊲ (v ⊳ y ⊲ w)) = x ⊳ y ⊲ (z ⊳ u ⊲ w)    (CPmem)

With u = ⊥ and by replacing y by ¬y we get the contraction law:

(w ⊳ y ⊲ v) ⊳ y ⊲ x = w ⊳ y ⊲ x

A consequence of contraction is the idempotence of ∧❜. Furthermore, CPmem is the least identifying extension of CP for which the conditional can be expressed using negation, conjunction and disjunction. To be exact, the following holds in CPmem:

x ⊳ y ⊲ z = (y ∧❜ x) ∨❜ (¬y ∧❜ z)

We write =mem (memorizing valuation congruence) for the valuation congruence axiomatized by CPmem.

Another extension of CP, the most identifying one distinguished in [4], is defined by adding to CP both the contraction law and the axiom below, which expresses how the order of u and y can be swapped:

(x ⊳ y ⊲ z) ⊳ u ⊲ v = (x ⊳ u ⊲ v) ⊳ y ⊲ (z ⊳ u ⊲ v)    (CPstat)

The signature and axioms of this extension, for which we write CPstat, are called static CP. We write =stat (static valuation congruence) for the valuation congruence axiomatized by CPstat. A consequence in CPstat is v = v ⊳ y ⊲ v, which can be used to derive the commutativity of ∧❜: x ∧❜ y = y ∧❜ x. CPstat is the most identifying extension of CP because it is 'equivalent with' propositional logic, that is, all tautologies in propositional logic can be proved in CPstat using the above translations of its common connectives [5].

¹ Terms that may contain atoms, but not variables.

5.2 Short-Circuit Logics

In this section we will present the definition of short-circuit logic and its most basic form, free short-circuit logic (FSCL). The definitions are given using module algebra [2]. In module algebra, S □ X is the operation that exports the signature S from module X while declaring all other signature elements hidden.
Using this operation, short-circuit logics are defined as follows:

Definition 8. A short-circuit logic is a logic that implies the consequences of the module expression

SCL = {⊤, ¬, ∧❜} □ (CP + (¬x = ⊥ ⊳ x ⊲ ⊤) + (x ∧❜ y = y ⊳ x ⊲ ⊥))

Thus, the conditional composition is declared to be an auxiliary operator. In SCL, ⊥ can be used as a shorthand for ¬⊤. After all, we have that

CP + (¬x = ⊥ ⊳ x ⊲ ⊤) ⊢ ⊥ = ¬⊤

With this definition, we can immediately define the most basic short-circuit logic we distinguish:

Definition 9. FSCL (free short-circuit logic) is the short-circuit logic that implies no other consequences than those of the module expression SCL.

Using these definitions we can provide equations that are derivable from FSCL. The question whether a finite axiomatization of FSCL with only sequential conjunction, negation and ⊤ exists is open, but the following set EqFSCL of equations for FSCL is proposed in [5]:²

⊥ = ¬⊤                                                 (SCL1)
x ∨❜ y = ¬(¬x ∧❜ ¬y)                                   (SCL2)
¬¬x = x                                                (SCL3)
⊤ ∧❜ x = x                                             (SCL4)
x ∧❜ ⊤ = x                                             (SCL5)
⊥ ∧❜ x = ⊥                                             (SCL6)
(x ∧❜ y) ∧❜ z = x ∧❜ (y ∧❜ z)                          (SCL7)
(x ∨❜ y) ∧❜ (z ∧❜ ⊥) = (¬x ∨❜ (z ∧❜ ⊥)) ∧❜ (y ∧❜ (z ∧❜ ⊥))    (SCL8)
(x ∨❜ y) ∧❜ (z ∨❜ ⊤) = (x ∧❜ (z ∨❜ ⊤)) ∨❜ (y ∧❜ (z ∨❜ ⊤))     (SCL9)
((x ∧❜ ⊥) ∨❜ y) ∧❜ z = (x ∧❜ ⊥) ∨❜ (y ∧❜ z)            (SCL10)

Note that equations SCL2 and SCL3 imply a left-sequential version of De Morgan's laws. An important equation that is absent is the following:

x ∧❜ ⊥ = ⊥

² In [5] it is stated that the authors did not find any equations derivable from FSCL but not from EqFSCL.
This is what we would expect, since evaluation of t ∧❜ ⊥ (with t a closed term) can generate a side effect that is absent in the evaluation of ⊥, although we know that evaluation of t ∧❜ ⊥ always yields F.

We now have the most basic short-circuit logic and some of its equations defined, but of course there also is a "most liberal" short-circuit logic below propositional logic. This logic is based on memorizing CP and satisfies idempotence of ∧❜ (and ∨❜), but not its commutativity. It is defined as follows:

Definition 10. MSCL (memorizing short-circuit logic) is the short-circuit logic that implies no other consequences than those of the module expression

{⊤, ¬, ∧❜} □ (CPmem + (¬x = ⊥ ⊳ x ⊲ ⊤) + (x ∧❜ y = y ⊳ x ⊲ ⊥))

For the set of axioms EqMSCL, intuitions and an example, and a completeness proof of MSCL, we refer the reader to [5]. Adding the axiom x ∧❜ ⊥ = ⊥ to MSCL, or equivalently, the axiom ⊥ ⊳ x ⊲ ⊥ = ⊥ to CPmem, yields so-called static short-circuit logic (SSCL), which is equivalent with propositional logic (be it in sequential notation and defined by short-circuit evaluation).

Definition 11. SSCL (static short-circuit logic) is the short-circuit logic that implies no other consequences than those of the module expression

{⊤, ¬, ∧❜} □ (CPmem + (⊥ ⊳ x ⊲ ⊥ = ⊥) + (¬x = ⊥ ⊳ x ⊲ ⊤) + (x ∧❜ y = y ⊳ x ⊲ ⊥))

5.3 Repetition-Proof Short-Circuit Logic

With both the most basic and the most liberal short-circuit logic we distinguish defined, we can present the variant of short-circuit logic that we are interested in because it underlies the logic of formulas in DLAf: repetition-proof short-circuit logic (RPSCL).
This SCL-variant stems from an axiomatization of proposition algebra called repetition-proof CP (CPrp) that is in between CP and CPmem and involves explicit reference to a set A of atoms (propositional variables). The axiom system CPrp is defined as the extension of CP with the following two axiom schemes (for a ∈ A), which imply that any subsequent evaluation result of an atom a equals the current one:

(x ⊳ a ⊲ y) ⊳ a ⊲ z = (x ⊳ a ⊲ x) ⊳ a ⊲ z              (CPrp1)
x ⊳ a ⊲ (y ⊳ a ⊲ z) = x ⊳ a ⊲ (z ⊳ a ⊲ z)              (CPrp2)

We write Eqrp(A) to denote the set of these axiom schemes in the format of module algebra. In CPrp the conditional cannot be expressed in terms of ∧❜, ¬ and ⊤: in [4] it is shown that the propositional statement a ⊳ b ⊲ c (for atoms a, b, c ∈ A) cannot be expressed modulo repetition-proof valuation congruence, that is, the valuation congruence axiomatized by CPrp. The definition of RPSCL then becomes:

Definition 12.
RPSCL (repetition-proof short-circuit logic) is the short-circuit logic that implies no other consequences than those of the module expression

{⊤, ¬, ∧❜, a | a ∈ A} □ (CP + Eqrp(A) + (¬x = ⊥ ⊳ x ⊲ ⊤) + (x ∧❜ y = y ⊳ x ⊲ ⊥))

The equations defined by RPSCL include those that are defined by EqFSCL, as well as, for a ∈ A:

a ∧❜ (a ∨❜ x) = a ∧❜ (a ∨❜ y)                          (RP1)
a ∨❜ (a ∧❜ x) = a ∨❜ (a ∧❜ y)                          (RP2)
(a ∨❜ ¬a) ∧❜ x = (¬a ∧❜ a) ∨❜ x                        (RP3)
(¬a ∨❜ a) ∧❜ x = (a ∧❜ ¬a) ∨❜ x                        (RP4)
(a ∧❜ ¬a) ∧❜ x = a ∧❜ ¬a                               (RP5)
(¬a ∧❜ a) ∧❜ x = ¬a ∧❜ a                               (RP6)
(x ∨❜ y) ∧❜ (a ∧❜ ¬a) = (¬x ∨❜ (a ∧❜ ¬a)) ∧❜ (y ∧❜ (a ∧❜ ¬a))    (RP7)
(x ∨❜ y) ∧❜ (¬a ∧❜ a) = (¬x ∨❜ (¬a ∧❜ a)) ∧❜ (y ∧❜ (¬a ∧❜ a))    (RP8)
(x ∨❜ y) ∧❜ (a ∨❜ ¬a) = (x ∧❜ (a ∨❜ ¬a)) ∨❜ (y ∧❜ (a ∨❜ ¬a))     (RP9)
(x ∨❜ y) ∧❜ (¬a ∨❜ a) = (x ∧❜ (¬a ∨❜ a)) ∨❜ (y ∧❜ (¬a ∨❜ a))     (RP10)
((a ∧❜ ¬a) ∨❜ y) ∧❜ z = (a ∧❜ ¬a) ∨❜ (y ∧❜ z)          (RP11)
((¬a ∧❜ a) ∨❜ y) ∧❜ z = (¬a ∧❜ a) ∨❜ (y ∧❜ z)          (RP12)

It is an open question whether the equations SCL1–SCL10 and the equation schemes RP1–RP12 axiomatize RPSCL, but it will be shown below that RPSCL is the logic that models equivalence of formulas in DLAf, where

A = {Rt1 . . . tn, t1 = t2, [v := t]⊤}

For this reason, we add the conditional φ1 ⊳ φ2 ⊲ φ3 and the constant ⊥ to DLAf (thus making ∨❜ and ¬ definable). In order to decide whether different DLAf formulas are equivalent, just translate these to CPrp and decide their equivalence (either by axiomatic reasoning or by checking their repetition-proof valuation congruence). So, we extend the formulas in DLAf in order to characterize the logic that models their equivalence.
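The intuition behind the repetition-proof schemes can be made concrete with Python's native short-circuit `and`/`or`. The sketch below is not from the thesis: the function `a` mimics a DLAf atom such as [x := x + 1]⊤, which has a side effect but yields the same truth value on an immediately repeated evaluation, so in a ∧❜ (a ∨❜ x) the operand x is never reached (the content of RP1).

```python
# Hypothetical illustration of RP1: a ∧❜ (a ∨❜ x) = a ∧❜ (a ∨❜ y).
state = {"x": 0}

def a():
    """Mimics the DLAf atom [x := x + 1]⊤: side effect, always true."""
    state["x"] += 1
    return True

def never():
    # Stands in for an arbitrary operand x (or y): repetition-proofness
    # guarantees it is never evaluated, so its choice cannot matter.
    raise AssertionError("unreachable under short-circuit evaluation")

# a yields True; the repeated a yields True again, so ∨❜ short-circuits.
result = a() and (a() or never())
```

Note that both occurrences of `a` are evaluated (the state ends at x = 2): RP1 equates the two sides only up to the irrelevant operand; it does not collapse the repeated atom itself.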
In this extension of DLAf, which we baptize DLCAf (for Dynamic Logic with the Conditional and Assignments as Formulas), truth in M relative to an initial valuation g for the conditional is defined as follows:

M ⊨g (φ2 ⊳ φ1 ⊲ φ3) iff for g ⟦ΠM,g(φ1)⟧M h: M ⊨h φ2 if M ⊨g φ1, and M ⊨h φ3 otherwise.    (DLCA)

This means that we need an extra equation for the program extraction function Π too, one which handles the conditional. For model M, initial valuation g and g ⟦φ1⟧M h:

ΠM,g(φ2 ⊳ φ1 ⊲ φ3) = ΠM,g(φ1); ΠM,h(φ2) if M ⊨g φ1, and ΠM,g(φ1); ΠM,h(φ3) otherwise.

In the remainder of this section we consider formulas over this signature, thus formulas over A composed with ⊳ ⊲. Below we will prove for all mentioned axioms that they are valid in DLCAf.

Proposition 5. Let M be a model for DLCAf. The axiom CP1, that is

x ⊳ ⊤ ⊲ y = x                                          (CP1)

is valid in M.

Proof. Let t1, t2 be arbitrary formulas and let g be an initial valuation. Regardless of g, we have M ⊨g ⊤ (by QDL3), so by DLCA, we get M ⊨g (t1 ⊳ ⊤ ⊲ t2) iff for g ⟦?⊤⟧M h, M ⊨h t1. Since g = h, we indeed have that M ⊨g (t1 ⊳ ⊤ ⊲ t2) iff M ⊨g t1.

Proposition 6. Let M be a model for DLCAf. The axiom CP2, that is

x ⊳ ⊥ ⊲ y = y                                          (CP2)

is valid in M.

Proof. Let t1, t2 be arbitrary formulas and let g be an initial valuation. ⊥ is a shorthand for ¬⊤, so we first need QDL6, which states that M ⊨g ¬⊤ iff not M ⊨g ⊤, which is never the case. So for any initial valuation g, M ⊨g ⊥ is false. Thus by DLCA, we get M ⊨g (t1 ⊳ ⊥ ⊲ t2) iff for g ⟦?⊤⟧M h, M ⊨h t2. Since g = h, we indeed have that M ⊨g (t1 ⊳ ⊥ ⊲ t2) iff M ⊨g t2.

Proposition 7. Let M be a model for DLCAf. The axiom CP3, that is

⊤ ⊳ x ⊲ ⊥ = x                                          (CP3)

is valid in M.

Proof.
Let t be an arbitrary formula and let g be an initial valuation. If M ⊨g t, then by DLCA we get for g ⟦ΠM,g(t)⟧M h, M ⊨h ⊤, which is always true. If M ⊭g t, then by DLCA we obtain M ⊨h ⊥ (note that also in this case, h is defined), which is always false. Thus M ⊨g t iff M ⊨g ⊤ ⊳ t ⊲ ⊥, and hence the axiom CP3 is valid.

Proposition 8. Let M be a model for DLCAf. The axiom CP4, that is

x ⊳ (y ⊳ z ⊲ v) ⊲ u = (x ⊳ y ⊲ u) ⊳ z ⊲ (x ⊳ v ⊲ u)    (CP4)

is valid in M.

Proof. Let t1, t2, t3, t4, t5 be arbitrary formulas and let g be an initial valuation. We have to show that

M ⊨g t1 ⊳ (t2 ⊳ t3 ⊲ t4) ⊲ t5  iff  M ⊨g (t1 ⊳ t2 ⊲ t5) ⊳ t3 ⊲ (t1 ⊳ t4 ⊲ t5)

We have to apply DLCA multiple times here. By applying it to the left-hand side we get for g ⟦ΠM,g(t2 ⊳ t3 ⊲ t4)⟧M f:

M ⊨g t1 ⊳ (t2 ⊳ t3 ⊲ t4) ⊲ t5 iff M ⊨f t1 if M ⊨g (t2 ⊳ t3 ⊲ t4), and M ⊨f t5 otherwise.

By applying DLCA again to M ⊨g (t2 ⊳ t3 ⊲ t4) we get for g ⟦ΠM,g(t3)⟧M f′:

M ⊨g (t2 ⊳ t3 ⊲ t4) iff M ⊨f′ t2 if M ⊨g t3, and M ⊨f′ t4 otherwise.

So if M ⊨g t3 and M ⊨f′ t2, we get M ⊨f t1. If on the other hand M ⊭g t3 but M ⊨f′ t4, we also get M ⊨f t1. In all other situations we get M ⊨f t5.

Let us now consider the right-hand side of the equation. Here we get for g ⟦ΠM,g(t3)⟧M h′:

M ⊨g (t1 ⊳ t2 ⊲ t5) ⊳ t3 ⊲ (t1 ⊳ t4 ⊲ t5) iff M ⊨h′ (t1 ⊳ t2 ⊲ t5) if M ⊨g t3, and M ⊨h′ (t1 ⊳ t4 ⊲ t5) otherwise.

Let us first turn our attention to the situation where M ⊨g t3. We need to apply DLCA again and get for h′ ⟦ΠM,h′(t2)⟧M h:

M ⊨h′ (t1 ⊳ t2 ⊲ t5) iff M ⊨h t1 if M ⊨h′ t2, and M ⊨h t5 otherwise.
In the situation where M ⊭g t3, we get for h′ ⟦ΠM,h′(t4)⟧M h″:

M ⊨h′ (t1 ⊳ t4 ⊲ t5) iff M ⊨h″ t1 if M ⊨h′ t4, and M ⊨h″ t5 otherwise.

So on the right-hand side, if M ⊨g t3 and M ⊨h′ t2, we get M ⊨h t1. If M ⊭g t3 but M ⊨h′ t4, we also get M ⊨h″ t1. In the other situations we get either M ⊨h t5 or M ⊨h″ t5. To prove that this is the same result as on the left-hand side, we need to prove that f′ = h′, that f = h if M ⊨g t3, and that f = h″ if M ⊭g t3. As we will see, f can actually take two different valuations depending on the truth of t3. The mentioned valuations are all determined using the program extraction function. To recap, we have the following:

g ⟦ΠM,g(t2 ⊳ t3 ⊲ t4)⟧M f
g ⟦ΠM,g(t3)⟧M f′
g ⟦ΠM,g(t3)⟧M h′
h′ ⟦ΠM,h′(t2)⟧M h
h′ ⟦ΠM,h′(t4)⟧M h″

We can immediately see that f′ = h′. Using the updated definition of the program extraction function, with the new rule for the conditional, we get that:

g ⟦ΠM,g(t3); ΠM,h′(t2)⟧M f    if M ⊨g t3
g ⟦ΠM,g(t3); ΠM,h′(t4)⟧M f    if M ⊭g t3

To determine if f = h, we need to have M ⊨g t3 and we need to evaluate

g ⟦ΠM,g(t3)⟧M h′  and  h′ ⟦ΠM,h′(t2)⟧M h

By QDL12, we know that this is equivalent to

g ⟦ΠM,g(t3); ΠM,h′(t2)⟧M h

So indeed we have that if M ⊨g t3, then f = h. Using the same argument, we get that if M ⊭g t3, then

g ⟦ΠM,g(t3); ΠM,h′(t4)⟧M h″

Therefore, if M ⊭g t3, then f = h″.

With those four axioms proven, we already know for a fact that the logic of formulas in DLAf indeed is a short-circuit logic.
To prove that it is a repetition-proof short-circuit logic, we need to prove the axiom schemes CPrp1 and CPrp2 too. Those axiom schemes make use of atoms a ∈ A.

Proposition 9. Let M be a model for DLCAf. The axiom CPrp1, that is

(x ⊳ a ⊲ y) ⊳ a ⊲ z = (x ⊳ a ⊲ x) ⊳ a ⊲ z              (CPrp1)

is valid in M.

Proof. Let t1, t2, t3 be arbitrary formulas and g an initial valuation. M ⊨g a can either be true or false. If it is false, both the left-hand side and the right-hand side are, by DLCA, determined for g ⟦ΠM,g(a)⟧M h by M ⊨h t3. If it is true, the question whether M ⊨h a is asked. We have to prove that for every atom a ∈ A, the reply to this will be the same as the reply to M ⊨g a (namely, true), that is:

M ⊨h a iff M ⊨g a

Recall that a can be of the forms {Rt′1 . . . t′n, t′1 = t′2, [v := t′]⊤}. For the first two atoms we can immediately see our claim is true, since ΠM,g(a) = ?⊤ and therefore g = h. For [v := t′]⊤ the claim immediately follows from DLA9: it is, regardless of the valuation, always true.

Proposition 10. Let M be a model for DLCAf. The axiom CPrp2, that is

x ⊳ a ⊲ (y ⊳ a ⊲ z) = x ⊳ a ⊲ (z ⊳ a ⊲ z)              (CPrp2)

is valid in M.

Proof. This is the symmetric variant of CPrp1 and proven similarly.

By proving the validity of these axiom schemes in DLCAf we have proven that the equations SCL1–SCL10 together with RP1–RP12 are axioms for formulas in DLCAf. CPrp indeed is the most identifying extension of CP that is valid for formulas. After all, the first more identifying extension we distinguish is CPcon (contractive CP) [5], from which amongst others the following weak contraction rule can be derived: for a ∈ A,

a ∧❜ a = a

Clearly this is not valid for DLAf-formulas such as [x := x + 1]⊤.
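The failure of weak contraction for such formulas is easy to observe concretely. The following Python sketch (an illustration, not part of the thesis) models [x := x + 1]⊤ as a side-effecting atom: a ∧❜ a and a agree on their truth value but leave observably different states behind.

```python
state = {"x": 0}

def a():
    """Models the DLAf formula [x := x + 1]⊤: always true, with a side effect."""
    state["x"] += 1
    return True

state["x"] = 0
lhs = a() and a()        # a ∧❜ a: the side effect happens twice
lhs_state = state["x"]

state["x"] = 0
rhs = a()                # a: the side effect happens once
rhs_state = state["x"]

# Same truth value, different resulting states, so a ∧❜ a = a cannot hold.
```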
6 A treatment of side effects

6.1 Introduction

Now that we have defined a system to model program instructions and program states, we can return to our original problem: that of formally defining side effects. Like I said in Section 2.1, the basic idea is that a side effect has occurred in the execution of a program if there is a difference between the actual evaluation and the expected evaluation of a program given an initial valuation.

We can immediately see, however, that we cannot build a definition of side effects based on the actual and expected evaluation of an entire program. Such a definition will get into trouble when there are multiple side effects, especially if those cancel each other out or reinforce each other. Consider for example the following program:

π = ?([x := x + 1]⊤); ?([x := x + 1]⊤)

If we only look at the entire program, we will detect one side effect here, which has incremented the value of x by two. However, it appears to be more acceptable to say that two side effects have occurred, which happen to affect the same value. It gets even more interesting if there is a formula in between the two clauses above and the clauses themselves cancel each other out:

π = ?([x := x + 1]⊤ ∧❜ φ ∧❜ [x := x −· 1]⊤)

If we again only look at the entire program, we will detect no side effects (unless side effects occur in φ). However, because φ might use or modify x as well, it seems we will have to pay attention to the side effect of the first clause, even though it will be cancelled out by the last clause. So instead of building a definition of side effects by looking only at the actual and expected evaluation of an entire program, we are going to build it up starting at the instruction level.
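The trap described above can be seen directly in code. The sketch below is illustrative only: each function mimics one steering fragment of the example programs, and comparing just the initial and final states either merges the two side effects into one or misses them entirely.

```python
def inc(state):          # mimics ?([x := x + 1]⊤)
    state["x"] += 1
    return True

def dec(state):          # mimics ?([x := x −· 1]⊤); x stays non-negative here
    state["x"] -= 1
    return True

# First program: two side effects that reinforce each other.
s = {"x": 0}
inc(s); inc(s)
whole_program_diff = s["x"]      # 2: looks like a single "+2" side effect

# Second program: two side effects that cancel each other out.
s = {"x": 0}
inc(s); dec(s)
cancelled_diff = s["x"]          # 0: a whole-program diff detects nothing
```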
6.2 Side effects in single instructions

As said, we are going to use a bottom-up approach to define side effects, so we will first define side effects for single instructions, then move up to basic instructions, and end with a full definition of side effects for programs.

The idea is that the side effect of a single instruction is the difference between the actual and the expected evaluation of that instruction. This difference is essentially the difference between the resulting valuations after, respectively, the actual and the expected evaluation of the single instruction. The difference between two valuations is defined as follows:

Definition 13. Given a model M, the difference between valuations g and h is defined as those variables that have a different assignment in g and h:

(x ↦ k′) ∈ δM(g, h) iff g(x) = k, h(x) = k′ and M ⊭ k = k′

This notion of difference is not symmetric. We already know what the actual evaluation of a single instruction is: for this we can use DLAf. This leaves us to define the expected evaluation. For this we need to know for each single instruction how we expect it to evaluate, that is, what changes we expect it to make to the initial valuation. We have the following expectations of each single instruction:

• Assignments change the initial valuation by updating the variable assignment of the variable under consideration to the (interpretation of the) new value.

• Tests do not change the initial valuation: they only yield T or F and steer the rest of the program accordingly.

We need the following equations for determining the expected evaluation E of a single instruction:

M ⊨E,g ⊤ always                                        (EV1)
M ⊨E,g Rt1 . . . tn iff (⟦t1⟧M,g, . . .
, ⟦tn⟧M,g) ∈ RM                                        (EV2)
M ⊨E,g t1 = t2 iff ⟦t1⟧M,g is the same as ⟦t2⟧M,g      (EV3)
M ⊨E,g [v := t]⊤ always                                (EV4)
g ⟦v := t⟧M,E h iff h = g[v ↦ ⟦t⟧M,g]                  (EV5)
g ⟦?φ⟧M,E h iff g = h and M ⊨E,g φ                     (EV6)

Now that we have the actual and the expected evaluation of a single instruction, we can define its side effects. As said, this is going to be the difference between the two resulting valuations.

Definition 14. Let ρ be a single instruction. Let model M be given and let g be an initial valuation. Furthermore, let h be a valuation such that g ⟦ρ⟧ h and let h′ be a valuation such that g ⟦ρ⟧E h′. The set of side effects of single instruction ρ given model M and initial valuation g is defined as

SM,g(ρ) = δM(h′, h)

It is important to note that the valuations h and h′ as meant in the above definition may not exist. We are not interested in those situations, however. If h and h′ do exist, they are unique. Also note that δM(h′, h) returns the variable assignment of valuation h if there is a difference with the variable assignment of valuation h′. Thus, the set of side effects is defined as a set containing those variables that have a different assignment after the actual and the expected evaluation, with as assignments the ones the variables actually get (that is, the assignments they will have after evaluating the single instruction with the actual evaluation).

We will illustrate this with two examples. First, consider the single instruction ρ = (x := 1), evaluated under model M in initial valuation g with g(x) = 0. We want to know if this causes a side effect, so we need to know the actual evaluation and the expected evaluation. To calculate the actual evaluation, we need to know if g ⟦x := 1⟧M h and if so, for which valuation h.
The equations for DLAf immediately give us the answer, in this case via QDL10: h = g[x ↦ ⟦1⟧M,g]. So we get h(x) = 1. Getting the expected evaluation works in a similar fashion, but instead of DLAf we now use the equations above to evaluate ρ. Since the equation for evaluating an assignment (EV5) is the same as QDL10, we now get the exact same expected evaluation as the actual evaluation. Thus we get h′ = g[x ↦ ⟦1⟧M,g] and therefore h′(x) = 1. We can immediately see that this results in the set of side effects being empty:

SM,g(x := 1) = δM(h′, h) = ∅

This is of course what we would expect: an assignment should not have a side effect if it does not occur in a steering fragment.

Let us now consider an example where we do expect a side effect, namely when an assignment does occur in a steering fragment: ρ = ?([x := 1]⊤). We use the same initial valuation g. First we try to find the actual evaluation again, which we do by evaluating g ⟦?([x := 1]⊤)⟧M h. We now need DLA11, which tells us that (in this case) g ⟦?([x := 1]⊤)⟧M h iff M ⊨g ([x := 1]⊤) and g ⟦ΠM,g([x := 1]⊤)⟧M h = g ⟦x := 1⟧M h. Both evaluate to true, the latter with h = g[x ↦ 1]. The expected update once again takes us to the equations above; we need to determine h′ such that g ⟦?([x := 1]⊤)⟧M,E h′. For tests, the demands are fairly simple: g = h′ and M ⊨E,g [x := 1]⊤ (see EV6). The latter is by EV4 defined to be always true. As a result, we get h′(x) = g(x) = 0. Thus we get the following set of side effects:

SM,g(?[x := 1]⊤) = δM(h′, h) = {x ↦ 1}

Again, this is exactly what we want: since we expect formulas to only yield true or false, the change this formula makes to the program state upon evaluation is a side effect.
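The two calculations above can be mirrored in a small program. The sketch below is an illustration, not part of the thesis: it represents valuations as dictionaries, computes the actual and the expected evaluation of a single instruction, and takes their difference δ as in Definition 13 (with plain `!=` standing in for M ⊭ k = k′).

```python
def delta(expected, actual):
    """δM(h', h): variables with different assignments, mapped to the value
    they actually get (Definition 13)."""
    return {v: k for v, k in actual.items() if expected.get(v) != k}

def actual_eval(instr, g):
    """Actual evaluation, following the DLAf equations."""
    h = dict(g)
    if instr[0] == "assign":          # g ⟦v := t⟧M h with h = g[v ↦ ⟦t⟧M,g]
        _, v, t = instr
        h[v] = t(g)
    else:                             # ?φ: evaluating φ may change the state
        _, phi = instr
        ok, h = phi(h)
        assert ok                     # we only consider succeeding tests
    return h

def expected_eval(instr, g):
    """Expected evaluation: EV5 for assignments, EV6 for tests."""
    h = dict(g)
    if instr[0] == "assign":          # EV5: the same update as the actual one
        _, v, t = instr
        h[v] = t(g)
    return h                          # EV6: a test is expected to leave g unchanged

def side_effects(instr, g):
    return delta(expected_eval(instr, g), actual_eval(instr, g))

g = {"x": 0}
plain = ("assign", "x", lambda val: 1)         # ρ = (x := 1)

def phi(val):                                  # the formula [x := 1]⊤
    val = dict(val)
    val["x"] = 1
    return True, val

steering = ("test", phi)                       # ρ = ?([x := 1]⊤)
print(side_effects(plain, g))      # {}        -- no side effect
print(side_effects(steering, g))   # {'x': 1}  -- the assignment is a side effect
```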
6.3 Side effects in basic instructions

With side effects for single instructions defined, we can move up a step to side effects in basic instructions. The difference between single and basic instructions is that in basic instructions, complex steering fragments are allowed. This means that we are going to have to define how side effects are handled in tests that contain a disjunction (∨❜), conjunction (∧❜) or negation (¬).

The idea is that the set of side effects of the whole formula is the union of the sets of side effects of its primitive parts. However, we also have to pay attention to the short-circuit character of ∨❜. Only the primitive formulas that get evaluated can contribute to the set of side effects. With this in mind, we can give the definition for side effects in (possibly) complex steering fragments. Like before, we are only interested in the side effects if the test actually succeeds. We need to define this for disjunctions, conjunctions and negations:

Definition 15. Let φ = φ1 ∨❜ φ2 be a disjunction. Let model M and initial valuation g be given, with M ⊨g φ and where φ is in its normal form. Furthermore, let f be the valuation after evaluation of formula φ1, that is, g ⟦?φ1⟧M f. The set of side effects SM,g(?φ) is defined as:

SM,g(?φ) = SM,g(?φ1) if M ⊨g φ1, and SM,g(?φ1) ∪ SM,f(?φ2) otherwise.

The case distinction is in place because of the short-circuit character of ∨❜. For the definition of its dual ∧❜ we do not need this case distinction: since we are again only interested in the side effects if the (entire) formula succeeds, all the formulas in the conjunction have to yield true. Therefore, the definition for conjunction is a bit simpler:

Definition 16. Let φ = φ1 ∧❜ φ2 be a conjunction.
Let model M and initial valuation g be given, with M ⊨g φ and where φ is in its normal form. Furthermore, let f be the valuation after evaluation of primitive formula φ1, that is, g ⟦?φ1⟧M f. The set of side effects SM,g(?φ) is defined as:

SM,g(?φ) = SM,g(?φ1) ∪ SM,f(?φ2)

The recursive definitions for disjunction and conjunction work because eventually, a primitive formula will be encountered, for which the side effects are already defined. Unfortunately, we cannot use a similar construction for negation. This is because the side effects in a primitive formula are only defined if that formula yields true upon evaluation, so we cannot simply treat negation as a transparent operator (that is, it is typically not true that SM,g(¬φ) = SM,g(φ)). So we will have to define negation the hard way instead. Because we are using formulas in normal form in the other definitions, we only have to define negation for primitive formulas:

Definition 17. Let ¬φ be a negation. Let model M be given and let g be an initial valuation. Furthermore, let h be a valuation such that g ⟦?¬φ⟧ h and let h′ be a valuation such that g ⟦?¬φ⟧E h′. The set of side effects of basic instruction ?¬φ given model M and initial valuation g is defined as

SM,g(?¬φ) = δM(h′, h)

Now that we have a definition for side effects in (complex) steering fragments, the extension of our definition of side effects in single instructions to side effects in basic instructions is trivial:

Definition 18. Let b be a basic instruction. Let model M and initial valuation g be given and let h be a valuation such that g ⟦b⟧M h. The set of side effects SM,g(b) is defined as:

SM,g(b) = SM,g(ρ) if b = ρ, and SM,g(?φ) if b = ?φ′ where φ is the normal form of φ′.

We can illustrate this with a simple, yet interesting example.
Consider the following basic instruction:

b = ?([x := x + 1]⊤ ∧❜ [x := x −· 1]⊤)

with initial valuation g such that g(x) = 1. In this situation we have two side effects that happen to cancel each other out. The resulting valuation after the actual evaluation of this basic instruction will be the same as the initial valuation g.

First we observe that the formula in this basic instruction is in its normal form, a trivial observation since no negations occur in it. There are two primitive formulas in this conjunction, so the set of side effects is:

SM,g(?([x := x + 1]⊤ ∧❜ [x := x −· 1]⊤)) = SM,g(?([x := x + 1]⊤)) ∪ SM,g1(?([x := x −· 1]⊤))

Here g1 is determined by g ⟦?([x := x + 1]⊤)⟧M g1, so we get g1(x) = 2. We have already seen in the previous section how the parts of the union above evaluate, so we get:

SM,g(?([x := x + 1]⊤ ∧❜ [x := x −· 1]⊤)) = {x ↦ 2} ∪ {x ↦ 1} = {x ↦ 2, x ↦ 1}

So with this definition we have avoided the trap of not detecting any side effects when there are two side effects that cancel each other out. Instead we have two side effects here, the last of which happens to restore the valuation of x to its original one.

6.4 Side effects in programs

If we are going to extend our definition to that of side effects in programs, we are going to have to define how concatenation, union and repetition are handled. Defining side effects for entire programs is more complicated than defining side effects for single and basic instructions. This is because two composition operators, namely union and repetition, can be non-deterministic. As we have mentioned before, however, we are only interested in (the side effects of) deterministic programs. This leaves us to define how side effects are calculated for the composition operators of deterministic programs. For concatenation, this is trivial.
We once again require that the entire program can be evaluated with the given initial valuation. The set of side effects of a program then is the union of the side effects in those of its basic instructions that are executed given some initial valuation:

Definition 19. Let dπ = dπ1; dπ2 be a deterministic program. Let model M and initial valuation g be given and let h be the valuation such that g ⟦dπ⟧M h. Furthermore, let f be the valuation such that g ⟦dπ1⟧M f. The set of side effects SM,g(dπ) is defined by:

SM,g(dπ) = SM,g(dπ1) ∪ SM,f(dπ2)

This works in a similar fashion as the definition of side effects in complex steering fragments. We can now return to our example given in the introduction of this chapter: dπ = ?([x := x + 1]⊤); ?([x := x + 1]⊤). The above definition indeed avoids the trap presented there, namely that this program only yields a single side effect. To see this, consider initial valuation g such that g(x) = 0. We will then get g ⟦?([x := x + 1]⊤)⟧M f and therefore f(x) = 1, so the set of side effects becomes:

SM,g(dπ) = SM,g(?([x := x + 1]⊤)) ∪ SM,f(?([x := x + 1]⊤)) = {x ↦ 1} ∪ {x ↦ 2} = {x ↦ 1, x ↦ 2}

Similarly, side effects that cancel each other out, such as in dπ = ?([x := x + 1]⊤); ?([x := x −· 1]⊤), will now be detected perfectly, resulting for the same initial valuation g in a set of side effects SM,g(dπ) = {x ↦ 1, x ↦ 0}.

Another interesting observation is that the transformation as defined in Proposition 2, which eliminates occurrences of ∧❜ and ∨❜ in steering fragments, not only preserves the relational meaning, but also the side effects of such a steering fragment.
The programs ?([x := x + 1]⊤ ∧r❜ [x := x −· 1]⊤) and its transformed version ?([x := x + 1]⊤); ?([x := x −· 1]⊤) are an illustration of this: we can easily see that both have the same set of side effects.

With concatenation defined, we can move on to the next composition operators: union and repetition. For this we can use the property that given an initial valuation, every (terminating) deterministic program has a unique canonical form that executes the same basic instructions (see Proposition 4 in Chapter 4). This makes the definition of side effects for programs containing a union or repetition straightforward:

Definition 20. Let dπ be a deterministic program. Let model M and initial valuation g be given and let h be the valuation such that g ⟦dπ⟧^M h. Furthermore, let dπ′ be the deterministic program in canonical form as meant in Proposition 4. The set of side effects S^M_g(dπ) is defined by:

S^M_g(dπ) = S^M_g(dπ′)

We can illustrate how this works by returning to our running example, discussed in detail in Section 3.2:

x := 1; IF (x := x + 1 ∧r❜ x = 2) THEN y := 1 ELSE y := 2

In DLAf, this translates to the following deterministic program dπ:

x := 1; (?([x := x + 1]⊤ ∧r❜ x = 2); y := 1) ∪ (?¬([x := x + 1]⊤ ∧r❜ x = 2); y := 2)

We have already seen that for g(x) = g(y) = 0, there is a valuation h such that g ⟦dπ⟧^M h (namely h = g[x ↦ 2, y ↦ 1]). We can break this program down as follows:

dπ ::= ρ_1; dπ_1
ρ_1 ::= (x := 1)
dπ_1 ::= (?φ_0; ρ_2) ∪ (?¬φ_0; ρ_3)
ρ_2 ::= (y := 1)
ρ_3 ::= (y := 2)
φ_0 ::= ϕ_1 ∧r❜ ϕ_2
ϕ_1 ::= [x := x + 1]⊤
ϕ_2 ::= (x = 2)

We want to know the set of side effects in this program. This is determined as follows:

S^M_g(dπ) = S^M_g(ρ_1; dπ_1) = S^M_g(ρ_1) ∪ S^M_f(dπ_1)

where we get f by evaluating g ⟦x := 1⟧^M f.
Thus, f = g[x ↦ 1]. We can easily see that the first set of side effects S^M_g(ρ_1) = ∅. The interesting part is the second set of side effects, since we now have a deterministic program of the form dπ_1 = (?φ; dπ_2) ∪ (?¬φ; dπ_3). Here φ = φ_0, dπ_2 = ρ_2 and dπ_3 = ρ_3. We now have to ask ourselves what the canonical form of dπ_1 given valuation f is. This is determined by the outcome of the test

?([x := x + 1]⊤ ∧r❜ x = 2)

It is easy to see that this yields true. Thus, the canonical form dπ′ of dπ_1 is

dπ′ = ?φ_0; ρ_2

Therefore, according to our definition, for f ⟦?φ_0⟧^M h:

S^M_f(dπ_1) = S^M_f(dπ′) = S^M_f(?φ_0; ρ_2) = S^M_f(?φ_0) ∪ S^M_h(ρ_2)

We can once again immediately see that the second set of side effects S^M_h(ρ_2) = ∅. The first set of side effects is determined in a similar fashion as in the example in the previous section. In the end, it gives us:

S^M_f(?φ_0) = S^M_f(?([x := x + 1]⊤ ∧r❜ (x = 2))) = S^M_f(?([x := x + 1]⊤)) ∪ S^M_{f′}(?(x = 2))

So we again get a union of two sets of side effects, where we get f′ by evaluating f ⟦[x := x + 1]⊤⟧^M f′. Thus, f′ = f[x ↦ 2]. It should be clear by now that the first set of side effects contains one side effect, namely {x ↦ 2}, whereas the latter does not contain any side effects. This gives us as final set of side effects:

S^M_g(dπ) = S^M_g(ρ_1) ∪ ((S^M_f(?([x := x + 1]⊤)) ∪ S^M_{f′}(?(x = 2))) ∪ S^M_h(ρ_2)) = ∅ ∪ (({x ↦ 2} ∪ ∅) ∪ ∅) = {x ↦ 2}

This is exactly the side effect we have come to expect from our running example. We can now move on to an example of side effects in programs containing a repetition. Recall that repetition is defined as follows:

g ⟦π∗⟧^M h iff g = h or g ⟦π; π∗⟧^M h (QDL14)

So, π either gets executed not at all or at least once.
The form of programs we are interested in is

dπ = (?φ; π)∗; ?¬φ

In this case there will only ever be exactly one situation in which the program gets evaluated (see Proposition 3 in Chapter 4). Our definition of canonical forms tells us that given an initial valuation g and n as meant in Proposition 3, the canonical form dπ′ of dπ is

dπ′ = (π_r)^n; ?¬φ

Using this we get the following set of side effects of a deterministic program of the above form:

S^M_g(dπ) = S^M_g((π_r)^n; ?¬φ)

As an example of this, we can return to a slightly modified version of the example we gave in Section 3.3.2:

x := 0; y := 0; WHILE (x := x + 1 ∧r❜ x ≤ 3) DO y := y + 1

In DLAf, this translates to the following deterministic program dπ given model M and initial valuation g such that g(x) = g(y) = 0:

dπ = (?([x := x + 1]⊤ ∧r❜ (x ≤ 3)); y := y + 1)∗; ?¬([x := x + 1]⊤ ∧r❜ (x ≤ 3))

Clearly this is a deterministic program in the form we are interested in, and there is a valuation h such that g ⟦dπ⟧^M h. In this case we have π_r = ?φ; y := y + 1 with φ = [x := x + 1]⊤ ∧r❜ (x ≤ 3). To get the canonical form dπ′ of dπ, we need to find the iteration n for which ?φ will succeed, but for which the test will not succeed another time. This will be for n = 3. After all, after three iterations we will have valuation g_3 = g[x ↦ 3, y ↦ 3]. With this valuation, the test ?([x := x + 1]⊤ ∧r❜ (x ≤ 3)) will fail, or to put it formally: M ⊭_{g_3} [x := x + 1]⊤ ∧r❜ (x ≤ 3). This means that we will get the following set of side effects:

S^M_g(dπ) = S^M_g(dπ′) = S^M_g((π_r)^3; ?¬φ) = S^M_g((π_r)^3) ∪ S^M_{g_3}(?¬φ) = S^M_g(π_r; π_r; π_r) ∪ S^M_{g_3}(?¬φ) = {x ↦ 1, x ↦ 2, x ↦ 3} ∪ {x ↦ 4} = {x ↦ 1, x ↦ 2, x ↦ 3, x ↦ 4}

Is this the result we would expect? The answer is yes.
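The loop example can be replayed concretely. The sketch below is an assumed encoding, not DLAf itself: every evaluation of the guard ?([x := x + 1]⊤ ∧r❜ (x ≤ 3)) increments x as a side effect, including the final, failing evaluation.

```python
# Illustrative sketch only of the WHILE-example above.

def run_while(g):
    h = dict(g)
    effects = set()
    while True:
        h["x"] += 1                  # evaluating the guard updates x ...
        effects.add(("x", h["x"]))   # ... which is a side effect
        if h["x"] <= 3:              # guard succeeds: execute the body
            h["y"] += 1
        else:                        # guard fails: leave the loop, but
            break                    # the side effect already happened
    return h, effects

h, S = run_while({"x": 0, "y": 0})
# h == {"x": 4, "y": 3}
# S == {("x", 1), ("x", 2), ("x", 3), ("x", 4)}
```

The fourth element of S comes from the failing evaluation of the guard, which is exactly why the set contains x ↦ 4 even though the loop body ran only three times.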
It is clear that each time the test is evaluated, a side effect occurs. The test is performed four times: three times it succeeds (after which the program executes the body of its loop) and the fourth time it fails, but only after updating the valuation of x. The program evaluates with final valuation h = g[x ↦ 4, y ↦ 3].

6.5 Side effects outside steering fragments

The keen observer will have noticed by now that under our current definition, side effects can only occur in steering fragments. I have been going through quite some trouble, however, to make my definitions of side effects as general as possible. Even though in this thesis I am only interested in side effects in steering fragments, I am fully aware that views can differ on what the main effect and what the side effect of an instruction is. That may either be a matter of opinion or a matter of necessity, as the same instruction may have a side effect in one system and not in another. The way my definitions of side effects¹ are built up, one need only change the expected evaluation of an instruction in order to change whether it is viewed as a side effect in a certain context. Consider, for example, the sometimes accepted view that an assignment causes a side effect, no matter where it occurs in a program. This view is for example expressed by Norrish in [17].
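The consequence of adopting that view can be sketched as follows. This is an illustration with hypothetical names, not the thesis's formal definitions: if the expected evaluation of an assignment leaves the valuation unchanged, then the difference δ between the expected and the actual resulting valuations reports every actual update as a side effect.

```python
# Illustrative sketch only: the expected evaluation of an assignment is
# taken to be the identity, so every actual update becomes a side effect.

def delta(h_expected, h_actual):
    """Variables on which the actual valuation differs from the expected
    one, reported with the actually assigned value."""
    keys = set(h_expected) | set(h_actual)
    return {(v, h_actual.get(v))
            for v in keys if h_actual.get(v) != h_expected.get(v)}

g = {"x": 0}
h_actual = {**g, "x": 1}   # actual evaluation of x := 1
h_expected = dict(g)       # expected evaluation under this view: g = h

# the plain assignment x := 1 now registers as a side effect:
# delta(h_expected, h_actual) == {("x", 1)}
```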
The only change we would need to make to our system to incorporate that view is a change to the expected evaluation of the assignment, which would then become:

g ⟦v := t⟧^{M,E} h iff g = h

The consequence of this in our current setting would be that the expected evaluation of every program always has a resulting valuation h that is equal to the initial valuation g, since currently only assignments can make changes to a valuation and by the above definition we do not expect any assignment to do so, wherever it occurs in the program. As a consequence, any change to the valuation (caused by the actual evaluation) will automatically be a side effect.

It is almost as simple to add new instructions to our setting. I definitely do not want to claim that the instructions I have defined in DLAf are exhaustive, so this need may arise. If we were, for instance, to re-introduce the random assignment v := ?, all we would have to do is define its actual and expected evaluation. The actual evaluation is already given by Harel in [14] and Van Eijck in [11]:

g ⟦v := ?⟧^M h iff g ∼_v h

If we also would want to allow random assignments in tests, we would have to add a rule for that as well, similar to the one already in place for normal assignments:

M |=_g [v := ?]⊤ iff g ⟦v := ?⟧^M h

The definition of the expected evaluation is dictated by what we really expect the random assignment to do. This can be the same as what it actually does, in which case we have to define the expected evaluation to be the same as the actual evaluation above:

g ⟦v := ?⟧^{M,E} h iff g ⟦v := ?⟧^M h
M |=^E_g [v := ?]⊤ iff M |=_g [v := ?]⊤

If we expect random assignments to do something different, all we have to do is define the expected evaluation accordingly. This expected evaluation can literally be anything: from simply not updating the valuation at all to always setting a completely unrelated variable to 42:

g ⟦v := ?⟧^{M,E} h iff h = g[the answer to life, the universe and everything ↦ 42]

On a side note, this example poses some interesting questions about 'negative' side effects. Under our current definition, setting the above-mentioned variable to 42 registers as a side effect, but in a somewhat strange fashion. After all, v := ? is a single instruction and for g ⟦ρ⟧^M h and g ⟦ρ⟧^{M,E} h′, S^M_g(ρ) = δ(h′, h). There will actually be two differences between valuations h′ and h here: the actual evaluation updates variable v, whereas the expected evaluation leaves v alone but does update the variable the answer to life, the universe and everything. Both variables will show up in the set of side effects, both with the value the actual evaluation has assigned to them. This fails to capture what has actually happened here: after all, not only did an unexpected change to the initial valuation happen (a 'regular' side effect), but an expected change also did not happen (a 'negative' side effect). At least part of the information about what should have happened is lost, namely the value the variable the answer to life, the universe and everything was supposed to get.² It is an open question whether we should even allow these somewhat odd situations where the actual evaluation does something completely different than we expect, thereby generating a negative side effect. We leave this question, as well as the question of how we should handle these situations if we do choose to allow them, for future work.

¹ As well as the definitions of classes of side effects presented in Chapter 7.
² Which is quite a shame, considering the trouble it cost to get it.
7 A classification of side effects

7.1 Introduction

In this chapter we will take a closer look at side effects in steering fragments. In particular, we will give a classification of side effects. This classification gives us a measure of the impact of a side effect. As we have already mentioned in our introduction in Chapter 1, Bergstra has given an informal classification of side effects in [1]. Bergstra makes a distinction between steering instructions and working instructions. This distinction is based on a setting called Program Algebra (PGA). In PGA, there is no distinction between formulas and single instructions other than formulas, which is why the distinction proposed by Bergstra is meaningful in that setting. Every basic instruction a in PGA yields a Boolean reply upon execution and can therefore be made into a positive or negative test instruction +a or −a. Naturally, this cannot be done in our setting of DLAf, so instead of giving an overview of Bergstra's paper, I will just present the major classes of side effects Bergstra distinguishes and what they come down to in our setting.

Bergstra's first class of side effects is what he calls 'trivial side effects'. By this he means side effects that can only be found in e.g. consequences for the length of the program or its running time. We are usually not interested in those kinds of side effects, which is exactly why Bergstra calls them trivial and why we would say that no side effects occur at all. An instruction that only returns a meaningful Boolean reply (that is, a Boolean reply that may differ depending on the valuation the instruction is evaluated in) is an instruction that only has trivial side effects. Examples of such instructions are the comparison instructions such as (x = 2) or (x ≤ 2). These instructions can be turned into meaningful test instructions by prefixing them with a + or − symbol.
We will return to this in our explanation of PGA in Chapter 8. In our terms, these kinds of instructions can only be formulas, occurring in steering fragments such as ?(x = 2) or ?(x ≤ 2). To be precise, they can only be formulas that have the same actual and expected evaluation, and thus no side effects.

The situation described above, where only trivial side effects occur, is one extreme. The other extreme is when an instruction always yields the same Boolean reply, regardless of when it is executed. Bergstra says that in that case, only 'trivial Boolean results' occur and that the instruction should be classified as a working instruction (that is, a single instruction not being a formula). In our setting this is also true, with one notable exception: that of assignments. As we know, assignments always return true, so their Boolean result is trivial. Still, we allow them in formulas, too. If an instruction with trivial Boolean results occurs outside a formula, its only relevance would be its effect other than the Boolean reply, in which case you can hardly call that effect a side effect. If it occurs in a formula, however, the Boolean result (albeit trivial) does have relevance, so the effect other than the Boolean reply can indeed be called a side effect. This is exactly what happens in our setting.

What the classification between steering instructions and working instructions gives us in the end is a recommendation on how to use a particular kind of instruction. Instructions such as comparison (x ≤ 2), which only give a Boolean reply, have no meaning as a working instruction and therefore ideally should only occur in steering fragments.
Other instructions such as assignment (x := 2) can be both steering instructions and working instructions, and can thus occur both inside and outside steering fragments. Finally, instructions such as writing to the screen (write x) do not return a meaningful Boolean reply and should therefore ideally not occur in steering fragments.

7.2 Marginal side effects

7.2.1 Introduction

Having seen the base class of side effects, we can move on to the next level, that of marginal side effects. The intuition behind a marginal side effect is fairly simple: the side effect of a single instruction is marginal if the remainder of the execution of the program is unaffected by the occurrence of the side effect. The following program is a typical example of one where a marginal side effect occurs:

dπ = dπ_1; ?([x := x + 1]⊤); y := 1

Here dπ_1 can be any (deterministic) program. The side effect occurs in the test. However, since the variable x is no longer used in the remainder of the program (which only consists of the single instruction y := 1), the remainder of the program is unaffected by the occurrence of the side effect. Therefore, this side effect is marginal.

So what if x does occur in the remainder of the program, for example in this program:

dπ = dπ_1; ?([x := x + 1]⊤); x := x + 1

This is a typical example of a program in which the occurring side effect is not marginal. The reason is that the assignment in the remainder of the program (x := x + 1) has a different effect on the variable x than it would have had if the side effect had not occurred. For instance, for initial valuation g such that g(x) = 1 (and assuming x does not occur in dπ_1), the assignment maps x to 3. If the side effect had not occurred, it would have had a different effect on x (namely, it would have mapped it to 2).
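The check behind this intuition can be sketched as code. The encoding is hypothetical (Definition 23 below gives the formal version): run the remainder of the program once from the valuation produced by the actual evaluation of the instruction, once from the one produced by the expected evaluation, and compare the difference of the outcomes with the side effect itself.

```python
# Illustrative sketch only: a side effect counts as marginal when the two
# runs of the remainder differ by exactly the side effect, or not at all.

def delta(h_expected, h_actual):
    keys = set(h_expected) | set(h_actual)
    return {(v, h_actual.get(v))
            for v in keys if h_actual.get(v) != h_expected.get(v)}

def is_marginal(side_effect, remainder, f_a, f_e):
    h_a = remainder(dict(f_a))   # remainder after the actual evaluation
    h_e = remainder(dict(f_e))   # remainder after the expected evaluation
    d = delta(h_e, h_a)
    return d == side_effect or d == set()

f = {"x": 1}            # valuation before ?([x := x + 1]T)
f_a = {**f, "x": 2}     # actual: the side effect {x |-> 2} occurred
f_e = dict(f)           # expected: the valuation is unchanged
effect = {("x", 2)}

def rem_y1(v):  v["y"] = 1;           return v  # remainder y := 1
def rem_inc(v): v["x"] = v["x"] + 1;  return v  # remainder x := x + 1
def rem_42(v):  v["x"] = 42;          return v  # remainder x := 42

# rem_y1:  difference == {("x", 2)} == effect  -> marginal
# rem_inc: difference == {("x", 3)}            -> not marginal
# rem_42:  difference == set()                 -> marginal
```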
Another typical example of a program in which an occurring side effect is not marginal is our running example:

dπ = dπ_1; ?([x := x + 1]⊤ ∧r❜ (x = 2)); y := 1

Here dπ_1 can again be any deterministic program and the side effect occurs in the same place as in our first example. However, the test is now a complex test and in the second part of the test, x is used. Suppose the valuation after evaluation of dπ_1 is f such that f(x) = 1, f(y) = 2. The second part of the test (x = 2) will now give a different reply if a side effect does not occur in the first part (or if that side effect would have affected a different variable). As a result, the remainder of the program is affected by the side effect: it will be executed differently if a side effect occurs.

Perhaps the answer to the question whether the side effect is marginal is less clear when the initial valuation in the previous example would not have been g with g(x) = 1, but for example with g(x) = 42. It is still the case that the variable x, which is affected by a side effect, is used again in the remainder of the program, but now it does not change the outcome of the (complex) test. Is that side effect still not marginal then? The same question can be posed about the following example:

dπ = dπ_1; ?([x := x + 1]⊤); x := 42

Regardless of initial valuation g, at the end of this program (assuming dπ_1 terminates), x will always be mapped to 42. So is the side effect in the test marginal or not? The answer can be found by checking whether the remainder of the program is executed in the same way, or more formally: whether the actual update of the remainder of the program is the same regardless of whether a side effect has occurred. In both our last examples, the answer to that last question is yes.
After all, in the first example the test x = 2 will fail whether x has been incremented first or not, and in the second example x will always be mapped to 42, again regardless of the side effect that incremented x earlier. Therefore, the side effects in the discussed instructions are marginal.

7.2.2 Marginal side effects in single instructions

Although the intuition of marginal side effects should be clear enough by now, formally defining it is tricky because we have to define precisely what the remainder of a (deterministic) program dπ given a single instruction ρ and an initial valuation g is. Before we can define that, we also need to know the history of that same program given single instruction ρ, which is loosely described as those (single or basic) instructions that have already been evaluated when ρ is about to get evaluated.

In what follows we are going to assume that in a certain deterministic program dπ a single instruction ρ occurs that is causing a side effect. Furthermore, we are going to use the fact that given initial valuation g, any deterministic program has a unique canonical form that has the same behavior (see Proposition 4 in Chapter 4). Defining the history and remainder of a deterministic program is straightforward if that program is in canonical form. Also, we can actually immediately give a more general definition than what we need here, namely the history and remainder of a deterministic program given a basic instruction. This extra generality will come in handy later on.

Definition 21. Let dπ be a deterministic program in canonical form. Let model M and initial valuation g be given and let h be the valuation such that g ⟦dπ⟧^M h.
Let β be a basic instruction occurring in dπ, that is, dπ is of the form dπ_1; β; dπ_2, with dπ_1 and dπ_2 being possibly empty deterministic programs in canonical form. The history of program dπ given basic instruction β is defined as:

H^M_g(dπ, β) = ?⊤ if dπ_1 is empty, and dπ_1 otherwise.

The remainder of program dπ given basic instruction β is defined as:

R^M_g(dπ, β) = ?⊤ if dπ_2 is empty, and dπ_2 otherwise.

Using Proposition 4, the extension of the definitions of history and remainder of a program to all deterministic programs (not just the ones in canonical form) is trivial:

Definition 22. Let dπ be a deterministic program. Let model M and initial valuation g be given and let h be the valuation such that g ⟦dπ⟧^M h. Furthermore, let dπ′ be the deterministic program in canonical form as meant in Proposition 4. The history of program dπ given basic instruction β is defined as:

H^M_g(dπ, β) = H^M_g(dπ′, β)

The remainder of program dπ given basic instruction β is defined as:

R^M_g(dπ, β) = R^M_g(dπ′, β)

With definitions for the history and the remainder of a program in hand, we can define marginal side effects. According to our intuition, a side effect should be marginal if the evaluation of the remainder of the program is the same regardless of whether the side effect occurred. We can tell if that is the case by evaluating the remainder of the program with two different valuations: one in which the single instruction in which the side effect occurs has been evaluated using the actual evaluation, and one in which it has been evaluated using the expected evaluation.¹ If the only difference between those two valuations is exactly the side effect that occurred in the single instruction, or if there is no difference between those two valuations at all, then we can say that the evaluation of the remainder of the program has been the same.
This is formally defined as follows:

Definition 23. Let dπ be a deterministic program. Let model M and initial valuation g be given and let h_A be the valuation such that g ⟦dπ⟧^M h_A. Let ρ be a single instruction in program dπ causing a side effect, that is, for g ⟦H^M_g(dπ, ρ)⟧^M f, S^M_f(ρ) ≠ ∅. Let f_A be the valuation such that f ⟦ρ⟧^M f_A and let f_E be the valuation such that f ⟦ρ⟧^{M,E} f_E. The side effect in ρ is marginal iff for f_A ⟦R^M_g(dπ, ρ)⟧^M h_A

∃h_E s.th. f_E ⟦R^M_g(dπ, ρ)⟧^{M,E} h_E and δ^M(h_E, h_A) = S^M_f(ρ) or ∅

¹ We now need to restrict ourselves again to single instructions because the expected evaluation is (currently) undefined for complex steering fragments.

So what happens here exactly? To show this, we return to the examples we have given earlier in this section. First, consider the program dπ = x := 1; ?([x := x + 1]⊤); y := 1, with initial valuation g such that g(x) = g(y) = 0. We can observe that dπ is in canonical form. In this program, a side effect occurs in the single instruction ρ = ?([x := x + 1]⊤). So is this side effect marginal or not? Here we have the following:

H^M_g(dπ, ρ) = (x := 1)
R^M_g(dπ, ρ) = (y := 1)
f = g[x ↦ 1, y ↦ 0]
f_A = f[x ↦ 2, y ↦ 0]
f_E = f[x ↦ 1, y ↦ 0]
h_A = f_A[x ↦ 2, y ↦ 1]
h_E = f_E[x ↦ 1, y ↦ 1]

As we can see, the valuations f and f_E are the same. Using our current definition of the expected evaluation, this will always be the case, so we could just use valuation f here. However, as I have said in Section 6.5 of Chapter 6, I want to keep generality in the definitions of side effects. We might want to change the definition of the expected evaluation in the future or add new instructions or connectives that do modify the initial valuation.
Therefore, we use valuation f_E, the resulting valuation after evaluating the single instruction ρ with the expected evaluation. To determine whether the side effect is marginal, we have to ask ourselves whether

δ^M(h_E, h_A) = S^M_f(ρ) or ∅

We know how to calculate the set of side effects; it is {x ↦ 2}. In this case, δ^M(h_E, h_A) is {x ↦ 2} too, so the side effect occurring in ρ is marginal, which is what we want. We can also clearly see in this case that it is no coincidence that we are testing δ^M(h_E, h_A) and not δ^M(h_A, h_E): we need the valuation that is the result of evaluating the single instruction using the actual evaluation in order to properly compare this with the set of side effects.

We can now take a look at an example in which the side effect should not be marginal. Consider the program dπ = x := 1; ?([x := x + 1]⊤); x := x + 1, with initial valuation g such that g(x) = 0. This program is in canonical form too and the side effect occurs in the same single instruction ρ. This time we get the following:

H^M_g(dπ, ρ) = (x := 1)
R^M_g(dπ, ρ) = (x := x + 1)
f = g[x ↦ 1]
f_A = f[x ↦ 2]
f_E = f[x ↦ 1]
h_A = f_A[x ↦ 3]
h_E = f_E[x ↦ 2]

We have the same set of side effects: {x ↦ 2}. However, δ^M(h_E, h_A) now is {x ↦ 3}. Therefore the side effect is not marginal, which is again what we would expect.

We have a third example which closely resembles the ones we have discussed above, namely dπ = x := 1; ?([x := x + 1]⊤); x := 42.
If we take the same initial valuation g as above, everything except the remainder of the program given ρ will be the same:

H^M_g(dπ, ρ) = (x := 1)
R^M_g(dπ, ρ) = (x := 42)
f = g[x ↦ 1]
f_A = f[x ↦ 2]
f_E = f[x ↦ 1]
h_A = f_A[x ↦ 42]
h_E = f_E[x ↦ 42]

With this example we can see why our definition of marginal side effects allows the difference between h_A and h_E to be ∅, too. We have seen before that in situations like these, the side effects should be marginal, and by allowing the difference to be ∅, that indeed is the case.

7.2.3 Marginal side effects caused by primitive formulas

As we have seen, our current definition of marginal side effects is capable of determining whether a side effect occurring in a single instruction is marginal or not. We still have to define marginal side effects for basic instructions. In particular, we need to have a definition for the situation in which a primitive formula in a complex test causes a side effect² and in that same test, the variable affected by that side effect is used again, such as in the following program: dπ = dπ_1; ?([x := x + 1]⊤ ∧r❜ (x = 2)); y := 1.

In order to define how to determine whether a side effect is marginal or not in these situations, we need to extend our definitions of the history and remainder of a program such that they work not only given a single instruction, but also given a primitive formula. Before we can give that definition, we first need to define the history and remainder of a compound formula given a primitive formula. We are once again only interested in those two concepts if the primitive formula ϕ gets evaluated.
To get an idea of what the history and the remainder of a compound formula given a primitive formula should be, consider the following example:

ϕ = [x := 6]⊤
φ = ¬ϕ ∨r❜ (x ≤ 10) = ¬([x := 6]⊤) ∨r❜ (x ≤ 10)

In this example, the history of φ given ϕ and given model M and initial valuation g is empty. The remainder, however, is not:

R_g(φ, ϕ) = x ≤ 10

² We say that a primitive formula causes a side effect here because a side effect cannot occur in a primitive formula. It can, however, occur in a single or basic instruction which tests that formula.

Notice that this remainder should be empty if ¬ϕ would have been true. The history of a formula of course is not always empty. To illustrate that, we will first introduce a notational convention.

Notation. We will write φ(ϕ) to refer to the primitive formula ϕ occurring in formula φ at a specific position.

As an example of this, compare the formulas φ_1(ϕ) = ϕ ∧r❜ ϕ and φ_2(ϕ) = ϕ ∧r❜ ϕ. The difference between the formulas φ_1(ϕ) and φ_2(ϕ) is in the instance of primitive formula ϕ we are referring to. Let ϕ = [x := 6]⊤ and φ(ϕ) as in the example above. Now consider the following example:

ψ(ϕ) = (x = 2 ∧r❜ φ(ϕ))

Here the history of ψ given ϕ and given model M and initial valuation g such that g(x) = 2 is not empty:

H^M_g(ψ, ϕ) = (x = 2)

Now that we have given an intuition of what the history and remainder of a formula given a primitive formula and an initial valuation are going to be, we can move on to giving the actual definitions. In what follows we will assume that the φ in H_f(φ, ϕ) is in normal form and that the specific primitive formula ϕ appears exactly once in formula φ(ϕ) (although other instances of ϕ may occur in the formula).
φ(ϕ) can take the following forms:

ϕ(ϕ), ¬ϕ(ϕ), φ_1(ϕ) ∨r❜ φ_2, φ_1 ∨r❜ φ_2(ϕ), φ_1(ϕ) ∧r❜ φ_2, φ_1 ∧r❜ φ_2(ϕ)

Here ϕ(ϕ) is the same as ϕ. For each of these forms, we have to define how the history and the remainder are calculated.

Definition 24. Let φ be a formula of one of the above forms. Let model M and initial valuation g be given. Let ϕ be a primitive formula occurring in φ such that ϕ gets evaluated during the evaluation of φ given initial valuation g. The history of formula φ given primitive formula ϕ is defined as:

H^M_g(ϕ(ϕ), ϕ) = ⊤
H^M_g(¬ϕ(ϕ), ϕ) = ⊤
H^M_g(φ_1(ϕ) ∨r❜ φ_2, ϕ) = H^M_g(φ_1(ϕ), ϕ)
H^M_g(φ_1 ∨r❜ φ_2(ϕ), ϕ) = φ_1 ∨r❜ H^M_g(φ_2(ϕ), ϕ)
H^M_g(φ_1(ϕ) ∧r❜ φ_2, ϕ) = H^M_g(φ_1(ϕ), ϕ)
H^M_g(φ_1 ∧r❜ φ_2(ϕ), ϕ) = φ_1 ∧r❜ H^M_g(φ_2(ϕ), ϕ)

The remainder of formula φ given primitive formula ϕ is defined as:

R^M_g(ϕ(ϕ), ϕ) = ⊤
R^M_g(¬ϕ(ϕ), ϕ) = ⊤
R^M_g(φ_1(ϕ) ∨r❜ φ_2, ϕ) = R^M_g(φ_1(ϕ), ϕ) ∨r❜ φ_2
R^M_g(φ_1 ∨r❜ φ_2(ϕ), ϕ) = R^M_g(φ_2(ϕ), ϕ)
R^M_g(φ_1(ϕ) ∧r❜ φ_2, ϕ) = R^M_g(φ_1(ϕ), ϕ) ∧r❜ φ_2
R^M_g(φ_1 ∧r❜ φ_2(ϕ), ϕ) = R^M_g(φ_2(ϕ), ϕ)

The reason we are only interested in the history and remainder of a primitive formula if that formula is actually evaluated is straightforward: we use these definitions to calculate the side effects caused by that primitive formula, and those side effects only exist if the primitive formula is evaluated. As straightforward as this is, the restriction is an important one. Because we know that ϕ gets evaluated (not to be confused with 'yielding true'), we do not have to take potentially troublesome formulas into account, such as ⊥ ∧r❜ ϕ.
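The clauses of Definition 24 can be transcribed as a structural recursion. The sketch below uses an assumed encoding, not the thesis's notation: formulas are nested tuples, ⊤ is the atom "T", and the marked occurrence of the primitive formula is written as the atom "PHI" (appearing exactly once, with the formula in normal form).

```python
# Illustrative sketch only of the cases of Definition 24.

TOP = "T"  # stands for the formula 'true'

def contains(phi):
    """Does this (sub)formula contain the marked occurrence?"""
    if phi == "PHI":
        return True
    if isinstance(phi, tuple):
        return any(contains(part) for part in phi[1:])
    return False

def history(phi):
    """History of the formula given the marked primitive formula."""
    if phi == "PHI" or phi == ("not", "PHI"):
        return TOP
    op, left, right = phi                # normal form: binary connective
    if contains(left):
        return history(left)             # H(f1(phi) op f2) = H(f1(phi))
    return (op, left, history(right))    # H(f1 op f2(phi)) = f1 op H(f2(phi))

def remainder(phi):
    """Remainder of the formula given the marked primitive formula."""
    if phi == "PHI" or phi == ("not", "PHI"):
        return TOP
    op, left, right = phi
    if contains(left):
        return (op, remainder(left), right)  # R(f1(phi) op f2) = R(f1(phi)) op f2
    return remainder(right)                  # R(f1 op f2(phi)) = R(f2(phi))

# psi = (x = 2) and-then (not phi or-else x <= 10), as in the earlier example
psi = ("and", "x = 2", ("or", ("not", "PHI"), "x <= 10"))
# history(psi)   == ("and", "x = 2", "T")   i.e. (x = 2) and-then T
# remainder(psi) == ("or", "T", "x <= 10")  i.e. T or-else (x <= 10)
```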
The above definitions make the history and remainder of a formula given a primitive formula partial functions. To see in which situations the history and remainder are defined and in which they are not, consider the following formula:

φ = (x = 5 ∧r❜ [x := x + 1]⊤) ∨r❜ [x := x + 2]⊤

Now assume we want to know the history of φ given ϕ = [x := x + 1]⊤. This history H^M_g(φ(ϕ), ϕ) is only defined if [x := x + 1]⊤ gets evaluated, which in turn is only the case if we have an initial valuation g such that g(x) = 5. For all initial valuations g′ such that g′(x) ≠ 5, the history of φ given ϕ is undefined. If we were interested in the history of φ given ϕ′ = [x := x + 2]⊤, the situation would be reversed: in that case the history H^M_g(φ(ϕ′), ϕ′) is only undefined with initial valuation g such that g(x) = 5. That the history (and the remainder) is undefined in these cases is not problematic because, as said, we are going to use these definitions to check whether the side effects caused by ϕ are marginal, and ϕ can only cause side effects if it gets evaluated.

Using these definitions, we can move on to define the history and remainder of a program given a primitive formula:

Definition 25. Let dπ be a deterministic program in canonical form. Let model M and initial valuation g be given and let h be the valuation such that g ⟦dπ⟧^M h. Let ?φ be a test occurring in program dπ, where φ is a formula in normal form. Finally, let ϕ be a primitive formula occurring in φ such that ϕ gets evaluated during the evaluation of φ given initial valuation g. The history of program dπ given primitive formula ϕ is, for g ⟦H^M_g(dπ, ?φ)⟧^M f, defined as:

H^M_g(dπ, ϕ) = H^M_g(dπ, ?φ); ?H^M_f(φ(ϕ), ϕ)

The remainder of program dπ given primitive formula ϕ is defined as:

R^M_g(dπ, ϕ) = ?R^M_f(φ(ϕ), ϕ); R^M_g(dπ, ?φ)

The final step is to give a definition to determine whether a side effect occurring in a primitive formula is marginal. Given the above, this definition should not be surprising:

Definition 26. Let dπ be a deterministic program. Let model M and initial valuation g be given and let h_A be the valuation such that g ⟦dπ⟧^M h_A. Let ϕ be a primitive formula in program dπ causing one of the side effects of dπ. Let f be the valuation such that g ⟦H^M_g(dπ, ϕ)⟧^M f. Let f_A be the valuation such that f ⟦?ϕ⟧^M f_A or f ⟦?¬ϕ⟧^M f_A, and let f_E be the valuation such that f ⟦?ϕ⟧^{M,E} f_E or f ⟦?¬ϕ⟧^{M,E} f_E.³ The side effect caused by ϕ is marginal iff for f_A ⟦R^M_g(dπ, ϕ)⟧^M h_A

∃h_E s.th. f_E ⟦R^M_g(dπ, ϕ)⟧^{M,E} h_E and δ^M(h_E, h_A) = S^M_f(?ϕ) or ∅

To show how this works, we return to the example given in the beginning of this section: dπ = x := 1; ?([x := x + 1]⊤ ∧r❜ (x = 2)); y := 1, with initial valuation g such that g(x) = g(y) = 0. Here the primitive formula ϕ = [x := x + 1]⊤ causes a side effect. We can now use our definition to find out whether that side effect is marginal. For that, we first need the history of dπ given primitive formula ϕ. To calculate H^M_g(dπ, ϕ), we first observe that φ is in normal form. This means we may use Definition 25. This definition tells us to first calculate valuation f, which we get by evaluating g ⟦H^M_g(dπ, ?φ)⟧^M f. Here ?φ is a basic instruction, so we can use Definition 22 to calculate it. We have seen before how that evaluates:

H^M_g(dπ, ?φ) = (x := 1)

Thus we get g ⟦x := 1⟧^M f, so f = g[x ↦ 1, y ↦ 0]. All we need now to get the history we are looking for is the history of formula φ given primitive formula ϕ: H^M_f(φ(ϕ), ϕ).
We can use Definition 24 here and are in the situation where φ(ϕ) = φ1(ϕ) ∧❜ φ2. Here φ1 = ϕ and φ2 = (x = 2), so as history we get:

H^M_f(φ(ϕ), ϕ) = H^M_f(φ1(ϕ) ∧❜ φ2, ϕ) = H^M_f(φ1(ϕ), ϕ) = H^M_f(ϕ(ϕ), ϕ) = ⊤

Thus, the history of program dπ given primitive formula ϕ is:

H^M_g(dπ, ϕ) = H^M_g(dπ, ?φ); ?H^M_f(φ(ϕ), ϕ) = (x := 1); ?⊤

With the information above we can also immediately calculate the remainder of formula φ given primitive formula ϕ:

R^M_f(φ(ϕ), ϕ) = R^M_f(φ1(ϕ) ∧❜ φ2, ϕ) = R^M_f(φ1(ϕ), ϕ) ∧❜ φ2 = R^M_f(ϕ(ϕ), ϕ) ∧❜ φ2 = ⊤ ∧❜ (x = 2)

Then all we need to determine the remainder of program dπ given primitive formula ϕ is the remainder of program dπ given basic instruction ?φ. To see how this evaluates, see the previous section. We can use Definition 22 for this again and get:

f_A = f[x ↦ 2, y ↦ 0]
R^M_{f_A}(dπ, ?φ) = (y := 1)

³ This distinction is necessary because we can only evaluate a test if its argument yields true. M ⊨_f ϕ might actually yield false if ϕ is part of a larger formula φ that despite that yields true, such as φ = ϕ ∨❜ φ1 such that M ⊨_{f_A} φ1. Thus, we need either ϕ or ¬ϕ.

So the remainder of program dπ given primitive formula ϕ is:

R^M_g(dπ, ϕ) = ?R^M_f(φ(ϕ), ϕ); R^M_{f_A}(dπ, ?φ) = ?(⊤ ∧❜ (x = 2)); (y := 1)

Now that we have the history and the remainder of dπ given ϕ, we can finally determine whether the side effect occurring in ϕ is marginal. To quickly recap, we have:

H^M_g(dπ, ϕ) = (x := 1); ?⊤
R^M_g(dπ, ϕ) = ?(⊤ ∧❜ (x = 2)); (y := 1)
f = g[x ↦ 1, y ↦ 0]
f_A = f[x ↦ 2, y ↦ 0]
f_E = f[x ↦ 1, y ↦ 0]
h_A = f_A[x ↦ 2, y ↦ 1]
h_E does not exist

Here we have an example where we do not even have to determine whether δ^M(h_E, h_A) is the same as S^M_f(?ϕ), because there is no valuation h_E such that f_E ⟦R^M_g(dπ, ϕ)⟧M,E h_E. This is because for valuation f_E the test ?(⊤ ∧❜ (x = 2)) will fail. Therefore, the side effect in ϕ is 'automatically' not marginal, which is indeed what we wanted.

7.3 Other classes of side effects

There are two more classes of side effects that I want to discuss. The first is the class of detectable side effects. According to Bergstra, a side effect in an instruction is detectable if the fact that that side effect has occurred can be measured by means of a steering fragment containing that instruction [1]. This is the most general class of side effects: in my terms, any difference between the actual and the expected evaluation of a single instruction is a detectable side effect.

The presence of detectable side effects suggests there are non-detectable side effects as well. This can indeed be the case. A side effect is undetectable if the evaluation of a (single) instruction causing a side effect would normally change the program state, but because of the specific initial valuation, it does not. As a simple example, consider the single instruction ?([v := 1]⊤). Under any initial valuation g this would change the program state and cause a side effect, with one exception: namely if g(v) = 1. We can formally define this as follows:

Definition 27. Let ρ be a single instruction in model M under initial valuation g, updating the valuation of a variable v.⁴ Furthermore, let S^M_g(ρ) = ∅. ρ contains an undetectable side effect iff for h such that h(v) ≠ g(v):

S^M_h(ρ) ≠ ∅

⁴ In DLA_f, this would mean that ρ either is v := t or ?[v := t]⊤.

It remains to be seen whether these non-detectable side effects are worth our attention.
After all, not being able to detect side effects suggests that the presence of the side effects does not make much difference, in any case not to the further execution of the program. Possible exceptions to this are the execution speed or the efficiency of the program, especially if there are a lot of undetectable side effects.

In contrast to non-detectable side effects, marginal side effects can potentially be very useful because they can occur far more often. Like non-detectable side effects, they are a measure of the impact of a side effect. If a side effect is marginal, that means that the rest of the program is unaffected by it and therefore, the side effect is essentially harmless. One could at this point imagine a claim that a program in which only marginal side effects occur can be considered a well-written program, whereas a program in which non-marginal side effects occur is one that should probably be rewritten to avoid unexpected behavior. We will leave further investigation of this claim for future work, however.

8 A case study: Program Algebra

In Chapter 6, I presented the system I will be using for the treatment of side effects. In this chapter I will provide a case study to see my system in action. For this, we will use Program Algebra (PGA) [3]. Since PGA is a basic framework for sequential programming, it provides an ideal case study for our treatment of side effects. By showing how side effects are determined in the very general setting of PGA, we are essentially showing how they can be dealt with in a host of different, more specific programming languages. I will first summarize PGA and explain how we can use it. Next, some extensions necessary for our purpose will be presented. Finally, I will present some examples to see in full how my system deals with side effects.
8.1 Program Algebra

8.1.1 Basics of PGA

PGA is built from a set A of basic instructions (not to be confused with the DLA_f notion by the same name), which are regarded as indivisible units. Basic instructions always produce a Boolean reply, which may be used for program control (i.e. in steering fragments). There are two composition constructs: concatenation and repetition. If X and Y are programs, then so is their concatenation X; Y, as is the repetition X^ω. PGA has the following primitive instructions:

• Basic instruction. Basic instructions are typically notated as a, b, .... As said, they generate a Boolean value. Especially important for our purpose is that their associated behavior may modify a (program) state.

• Termination instruction. This instruction, notated as !, terminates the program.

• Test instruction. Test instructions come in two flavours: the positive test instruction, notated as +a (where a is a basic instruction), and its negative counterpart, −a. For the positive test instruction, a is evaluated and if it yields true, all remaining instructions are executed. If it yields false, the next instruction is skipped and evaluation continues with the instruction after that. For the negative test instruction, this is the other way around.

• Forward jump instruction. A jump instruction, notated as #k, where k can be any natural number. This instruction prescribes a jump to k instructions from the current one. If k = 0, the program jumps to the same instruction and inaction occurs. If k = 1, the program jumps to the next instruction (so this is essentially useless). If k = 2, the next instruction is skipped and the program proceeds with the one after that, and so on.

If two programs execute identical sequences of instructions, instruction sequence congruence holds between them.
This can be axiomatized by the following four axioms:

(X; Y); Z = X; (Y; Z)      (PGA1)
(X^n)^ω = X^ω              (PGA2)
X^ω; Y = X^ω               (PGA3)
(X; Y)^ω = X; (Y; X)^ω     (PGA4)

The first canonical form of a PGA program is then defined to be a PGA program which is in one of the following two forms:

1. X not containing a repetition
2. X; Y^ω, with both X and Y not containing a repetition

Any PGA program can be rewritten into a first canonical form using the above four equations. The next four axiom schemes for PGA deal with the simplification of chained jumps:

#n+1; u_1; ...; u_n; #0 = #0; u_1; ...; u_n; #0              (PGA5)
#n+1; u_1; ...; u_n; #m = #n+m+1; u_1; ...; u_n; #m          (PGA6)
(#n+k+1; u_1; ...; u_n)^ω = (#k; u_1; ...; u_n)^ω            (PGA7)
X = u_1; ...; u_n; (v_1; ...; v_{m+1})^ω → #n+m+k+2; X = #n+k+1; X    (PGA8)

Programs are considered to be structurally congruent if they can be proven equal using the axioms PGA1-8. The second canonical form of a PGA program is defined to be a PGA program in first canonical form for which additionally the following holds:

1. There are no chained jumps
2. Counters used for a jump into the repeating part of the expression are as short as possible

Each PGA expression can be rewritten into a shortest structurally equivalent second canonical form using the above eight equations [3].

8.1.2 Behavior extraction

The previous section describes the forms a PGA program can take. In this section I will explain the behavioral semantics defined in [3]. The process of determining the behavior of a PGA program given its instructions is called behavior extraction. The behavioral semantics itself is based on thread algebra, TA for short. Like PGA, TA has a set A of basic instructions, which in this setting are referred to as actions.
Furthermore, TA has the following two constants and two composition mechanisms:

• Termination. This is notated as S (for Stop) and terminates the behavior.

• Divergent behavior. This is notated as D (for Divergence). Divergence (or inaction) means there no longer is active behavior. For instance, infinite jump loops cause divergent behavior since the program only makes jumps and does not perform any actions.

• Postconditional composition. This is notated as P ⊴ a ⊵ Q and means that first a is executed; if its reply is true then the behavior proceeds with P, otherwise it proceeds with Q.

• Action prefix. This is notated as a ∘ P and is a shorthand for P ⊴ a ⊵ P: regardless of the reply of a, the behavior will proceed with P.

As said, behavior extraction determines the behavior of a PGA program given its instructions. For that, the behavior extraction operator, notated as | |, is defined. If a program ends without an explicit termination instruction, it is defined to end in inaction by the following equation:

|X| = |X; (#0)^ω|    (8.1)

A termination instruction followed by other instructions ends in termination and nothing else, which is defined by the following equation:

|!; X| = S    (8.2)

Behavior extraction is further defined by the following equations dealing with the composition mechanisms:

|a; X| = a ∘ |X|                 (8.3)
|+a; u; X| = |u; X| ⊴ a ⊵ |X|    (8.4)
|−a; u; X| = |X| ⊴ a ⊵ |u; X|    (8.5)

The jump instruction requires a set of equations as well. The first equation defines that a jump instruction which is jumping to itself leads to inaction. The second and third define how a jump instruction can skip subsequent instructions.

|#0; X| = D                 (8.6)
|#1; X| = |X|               (8.7)
|#k+2; u; X| = |#k+1; X|    (8.8)

8.1.3 Extensions of PGA

PGA is a most basic framework [18].
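For repetition-free programs, equations 8.1-8.8 can be implemented directly. The following Python sketch is my own illustration (the instruction and behavior encodings are assumptions of the sketch); running past the last instruction plays the role of the implicit (#0)^ω of equation 8.1.

```python
# Sketch of behavior extraction (eqs. 8.1-8.8) for repetition-free PGA
# programs. Instructions: ('a', x) basic, ('+', x) / ('-', x) tests,
# ('!',) termination, ('#', k) forward jump. Behavior terms: 'S', 'D',
# or ('post', x, P, Q) for P <|x|> Q; the action prefix x o P is then
# ('post', x, P, P).

def extract(prog, i=0):
    if i >= len(prog):                    # implicit (#0)^omega: inaction (8.1)
        return 'D'
    ins = prog[i]
    if ins[0] == '!':
        return 'S'                        # |!; X| = S                    (8.2)
    if ins[0] == 'a':
        nxt = extract(prog, i + 1)
        return ('post', ins[1], nxt, nxt)  # a o |X|                      (8.3)
    if ins[0] == '+':
        return ('post', ins[1],
                extract(prog, i + 1),      # reply true: next instruction (8.4)
                extract(prog, i + 2))      # reply false: skip one
    if ins[0] == '-':
        return ('post', ins[1],
                extract(prog, i + 2),      # reply true: skip one         (8.5)
                extract(prog, i + 1))      # reply false: next instruction
    k = ins[1]                             # ('#', k): jump
    return 'D' if k == 0 else extract(prog, i + k)   # (8.6)-(8.8)
```

For example, extract applied to the encoding of +a; !; b yields S ⊴ a ⊵ (b ∘ D), matching |+a; !; b| = |!; b| ⊴ a ⊵ |b|.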
However, there are many extensions that introduce more 'advanced' programming features such as goto's and backward jump instructions. Via projections, each of these extensions can be projected to PGA in such a way that the resulting PGA program is behaviorally equivalent to the original program. Examples of such extensions are PGLB, in which PGA is extended with a backward jump instruction (\#k), and PGLBg, in which PGLB is further extended with a label catch instruction (Lσ) and an absolute goto instruction (##Lσ).

Of particular interest for our purpose is the extension of PGA with the unit instruction operator (PGA_u), introduced in [18]. The idea of the unit instruction operator, notated as u( ), is to wrap a sequence of instructions into a single unit of length 1. That way, a more flexible style of PGA programming is possible. In particular, programs of the form if a then { b, c, d } else { f, g, h } now have a more intuitive translation: +a; u(b; c; d; #4); f; g; h.¹ Because, thanks to the unit instruction operator, the instructions b, c, d and #4 are viewed as a single instruction, the execution of those is skipped when a yields false.

8.2 Logical connectives in PGA

8.2.1 Introduction

As mentioned in Section 8.1, in PGA a lot of basic notations for assembly-like programming languages are defined, especially with its extension with unit instruction operators (PGA_u) [18]. However, one important basic notation is missing: that of complex tests, of the form if (a and b) then c. As we have seen, currently there are positive and negative test instructions in PGA, which can only test the Boolean reply of a single instruction.
More complex constructions such as the one in the working example of Section 3.2 are however very common in programming practice and also appear in research papers such as [1], where they are referred to as complex steering fragments. This means that for our purpose, PGA will have to be extended to accommodate complex steering fragments. I will do so below.

Atomic steering fragments (that is, steering fragments containing only one instruction) are already present in PGA in the form of the positive and negative test instructions (+a and −a respectively). If we were to extend this with complex steering fragments, an obvious notation would be +φ and −φ. The question now is what forms φ can take and what it means to have such a complex test.

Since the instructions in the steering fragment need to produce a Boolean reply, the answer to the question above in my opinion should be that a complex test can only be meaningful if all the instructions in the complex test may be used to determine the reply. It is not necessary that all instructions are always used to determine the reply: for instance when using short-circuit evaluation, in some situations not all components of a complex test have to be (and therefore are not) used. However, my claim here is that if a certain instruction is never necessary to determine the Boolean reply of the whole steering fragment, then it should not be in the steering fragment.

¹ The jump is necessary to prevent the instructions f, g and h from being executed when a yields true.

Currently, PGA has two composition constructs (concatenation and repetition). Neither of those defines anything, however, about the Boolean value of multiple instructions. That is, the Boolean value of φ; ...; ψ and of φ^ω is undefined.
The intuitive way to determine the Boolean reply of a sequence of instructions is via logical connectives such as And (∧) and Or (∨). However, these are not yet present in PGA. This means that I will have to introduce them in an extension of PGA_u, which we baptize PGA_ul. Before I do so, however, I need to say something more about the type of And and Or I will be using. There are multiple flavours available:

• Logical And / Or. These versions are notated as ∧ and ∨, respectively. They use full evaluation and the order of evaluation is undefined.

• Short-circuit Left And / Or. These versions are the ones we use in DLA_f (see Chapter 6). They are notated as ∧❜ and ∨❜. From here on I will refer to them as SCLAnd and SCLOr. They use short-circuit evaluation and are therefore not commutative. The left conjunct or disjunct is evaluated first. There naturally are right-hand versions as well, but I will not be using them.

• Logical Left And / Or. These versions are a combination of the other two: they use full evaluation, but the left conjunct or disjunct is evaluated first. I will notate these as & and |, respectively, and refer to them as LLAnd and LLOr. I will not discuss right-hand versions.

The latter two are interesting for our purpose, because they are very suitable to demonstrate side effects. However, since we currently only have SCLAnd and SCLOr at our disposal in DLA_f, I will concentrate on those connectives. Although LLAnd and LLOr can be added to both PGA and DLA_f, this would raise more questions than it answers, for instance with regard to the logic which would then be behind the system, which is why we leave it for future work.

The above connectives will almost always be used in combination with either a positive or a negative test. This will be written as +(a ∧❜ b) (and similarly for the negative test and the ∨❜ connective).
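The practical difference between SCLAnd and LLAnd is easy to demonstrate in an ordinary programming language. The sketch below is my own illustration, not thesis notation: it uses Python, whose `and` operator is short-circuit, and records which atoms get evaluated, i.e. which side effects occur.

```python
# Atoms with observable side effects: each records its own evaluation.
trace = []

def atom(name, reply):
    def run():
        trace.append(name)    # the "side effect": the state changes
        return reply
    return run

a, b = atom('a', False), atom('b', True)

# SCLAnd: Python's `and` short-circuits, so b is never evaluated here.
trace.clear()
a() and b()
scl_trace = list(trace)       # only 'a' was evaluated

# LLAnd: full evaluation, left operand first; both side effects occur.
trace.clear()
ra = a()
rb = b()
ra and rb
ll_trace = list(trace)        # both 'a' and 'b' were evaluated
```

Both readings produce the reply false here, but the program states they leave behind differ, which is exactly why these connectives matter for side effects.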
8.2.2 Implementation of SCLAnd and SCLOr

If I am to introduce the mentioned logical connectives in PGA_ul, I will have to be able to project this extension into PGA. Since the projection of PGA_u to PGA is already given in [18], it is sufficient to project PGA_ul to PGA_u to show that the former can be projected to PGA. Below is a proposal for a projection of the SCLAnd (∧❜) connective from PGA_ul to PGA_u, for a, b ∈ A:

pgaul2pgau(+(a ∧❜ b)) = u(+a; u(+b; #2); #2)    (8.9)

To see why this projection works, consider the following example: suppose we have the sequence +φ; c; d with φ = a ∧❜ b. This means that if a and b are true, c and d will be executed. Otherwise, only d will be executed. In PGA_ul this sequence would be +(a ∧❜ b); c; d. The projection to PGA_u would then be u(+a; u(+b; #2); #2); c; d. If a is false, the execution skips the unit and executes the jump instruction, ending up executing d. If a is true, the unit is entered, starting with the test b. If b is false, the execution again arrives at the same jump as before, skipping c and executing d. If b is true, a different jump is executed which makes the program jump to c first and only then move on to d, which is exactly the desired behaviour.

The entire projection is wrapped in a unit because, as we will see later, the SCLAnd and other operators we define here are also to be considered units. Therefore, a program sequence prior to (or after) the operators discussed here cannot jump into the execution of that operator. By wrapping the projection into a unit I ensure that cannot happen after the projection either.

For the SCLOr connective, the projection is a little easier.
It looks like this, again for a, b ∈ A:

pgaul2pgau(+(a ∨❜ b)) = u(−a; +b)    (8.10)

To see why this projection works, consider the same example as above: +φ; c; d, but now with φ = a ∨❜ b. So, if a and/or b is true, c and d should be executed. If they are both false, only d should be executed. In PGA_ul this looks like this: +(a ∨❜ b); c; d. The projection to PGA_u then is u(−a; +b); c; d. So, if a is true, execution skips testing b and moves on directly to c. If a is false, b is tested first. If b is also false, execution skips c and d is executed. If b is true, c gets executed first: exactly the desired behaviour.

So far, we have only been considering programs of the form +φ; c; d, that is, with a positive test. Of course, we also have the negative test instruction. For a negative test, the projection of SCLAnd resembles that of SCLOr. This comes as no surprise since SCLAnd and SCLOr are each other's dual. It looks like this, again for a, b ∈ A:

pgaul2pgau(−(a ∧❜ b)) = u(+a; −b)    (8.11)

The projection of ∨❜ for a negative test resembles the projection of ∧❜ for a positive test:

pgaul2pgau(−(a ∨❜ b)) = u(−a; u(−b; #2); #2)    (8.12)

8.2.3 Complex Steering Fragments

The implementations in the previous section work for steering fragments containing a single logical connective (that is, with disjuncts or conjuncts a, b ∈ A). However, we also need to define what happens for larger complex steering fragments (for instance a ∧❜ (b ∨❜ c)). In order to accommodate this, we need one more property for the ∧❜ and ∨❜ operators in PGA: they have to be treated as units. If we do this, we can give a recursive definition for the projection, with as base cases the ones given in the previous sections.
In what follows, the formulas φ1 and φ2 can take the following form:

φ ::= ⊤ | a ∈ A | ¬φ | φ ∧❜ ψ | φ ∨❜ ψ    (8.13)

As we can see, this includes negation. For more on negation, see the next section. We get the following projections:

pgaul2pgau(+(φ1 ∧❜ φ2)) = u(pgaul2pgau(+φ1); u(pgaul2pgau(+φ2); #2); #2)
pgaul2pgau(+(φ1 ∨❜ φ2)) = u(pgaul2pgau(−φ1); pgaul2pgau(+φ2))
pgaul2pgau(−(φ1 ∧❜ φ2)) = u(pgaul2pgau(+φ1); pgaul2pgau(−φ2))
pgaul2pgau(−(φ1 ∨❜ φ2)) = u(pgaul2pgau(−φ1); u(pgaul2pgau(−φ2); #2); #2)

This works as follows. Consider the example +φ; d; !, with φ = a ∧❜ (b ∧❜ c). In PGA_ul this would be written as:

+(a ∧❜ (b ∧❜ c)); d; !    (8.14)

We can use our new recursive definition of ∧❜ and get:

pgaul2pgau(+(a ∧❜ (b ∧❜ c)); d; !)
  = u(pgaul2pgau(+a); u(pgaul2pgau(+(b ∧❜ c)); #2); #2); d; !

The projections left now are base cases of +a and +(b ∧❜ c), respectively. Thus, we get:

pgaul2pgau(+(a ∧❜ (b ∧❜ c)); d; !)
  = u(pgaul2pgau(+a); u(pgaul2pgau(+(b ∧❜ c)); #2); #2); d; !
  = u(+a; u(u(+b; u(+c; #2); #2); #2); #2); d; !

An interesting question is whether these projections make ∧❜ an associative operator. To find out, we compare the above with the example +φ; d; ! where this time φ = (a ∧❜ b) ∧❜ c. We get:

pgaul2pgau(+((a ∧❜ b) ∧❜ c); d; !)
  = u(pgaul2pgau(+(a ∧❜ b)); u(pgaul2pgau(+c); #2); #2); d; !
  = u(u(+a; u(+b; #2); #2); u(+c; #2); #2); d; !

We can use behavior extraction to check if these programs are behaviorally equivalent. It turns out that both programs indeed have the same behavior:

((d ∘ S ⊴ c ⊵ S) ⊴ b ⊵ S) ⊴ a ⊵ S

Thus, we can conclude that ∧❜ is associative in PGA_ul, as we would expect given SCL7. We can analyze ∨❜ in a similar manner.
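The four recursive clauses plus the atomic base case admit a minimal executable rendering. The data encoding below is an assumption of this sketch: an atom is a string, ('u', [...]) stands for a unit, and ('#', 2) for a jump instruction.

```python
# Sketch of pgaul2pgau on tests over SCLAnd/SCLOr formulas. A formula is
# an atom (string) or ('and', f1, f2) / ('or', f1, f2), read as the
# left-sequential connectives; `sign` is '+' or '-'.

def project(sign, f):
    if isinstance(f, str):                          # base case: +a or -a
        return [(sign, f)]
    op, f1, f2 = f
    if (sign, op) in {('+', 'and'), ('-', 'or')}:
        # u( project(sign f1); u( project(sign f2); #2 ); #2 )
        inner = ('u', project(sign, f2) + [('#', 2)])
        return [('u', project(sign, f1) + [inner, ('#', 2)])]
    # dual cases +or / -and: u( project(flipped f1); project(sign f2) )
    flip = '-' if sign == '+' else '+'
    return [('u', project(flip, f1) + project(sign, f2))]
```

Here project('+', ('and', 'a', 'b')) returns the encoding of u(+a; u(+b; #2); #2), i.e. equation 8.9, and nesting the connectives reproduces the worked example above.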
8.2.4 Negation

Now that we have the projections for positive and negative tests defined, we can turn our attention to one more operator that is common both in programming practice and in logic: negation. In PGA, negation is absent, so we need to define it here. Not all instructions or sequences of instructions can be negated: after all, there is no intuition for the meaning of the negation of a certain behavior. We can, however, negate basic instructions: by this we mean that their Boolean reply changes value. Sequences of instructions consisting of the operators I have defined above can be negated as well, which I will write as ¬φ. First, I define the following standard projection rules:

+(¬φ) = −φ    (8.15)
−(¬φ) = +φ    (8.16)
¬¬φ = φ       (8.17)

Now that we have this, we need to take a look at how negation interacts with the ∧❜ and ∨❜ connectives. In particular, we are interested in what happens if one or both of the instructions in such a connective are negated. For this, De Morgan's laws will come in handy:

¬(φ1 ∧❜ φ2) = ¬φ1 ∨❜ ¬φ2    (8.18)
¬(φ1 ∨❜ φ2) = ¬φ1 ∧❜ ¬φ2    (8.19)

With the above equations in combination with equations 8.15-8.17, we already have the projections for two possible cases (namely when no instructions are negated and when both instructions are negated). That leaves us two other cases for both ∧❜ and ∨❜: one in which the first instruction is negated, and one in which the other is.
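Equations 8.17-8.19 can be read as a rewrite system that pushes negations down to the atoms. A small sketch of that reading, with the same assumed tuple encoding as before and ('not', f) for ¬f:

```python
# Push negations inward using double negation (8.17) and the De Morgan
# duals (8.18, 8.19), so that negation ends up only on atoms.

def push_neg(f):
    if isinstance(f, str):
        return f                          # plain atom
    if f[0] == 'not':
        g = f[1]
        if isinstance(g, str):
            return ('not', g)             # atomic negation stays
        if g[0] == 'not':
            return push_neg(g[1])         # double negation        (8.17)
        op, g1, g2 = g
        dual = 'or' if op == 'and' else 'and'
        return (dual, push_neg(('not', g1)),
                      push_neg(('not', g2)))   # De Morgan    (8.18/8.19)
    op, f1, f2 = f
    return (op, push_neg(f1), push_neg(f2))
```

After this normalization, only negations directly on atoms remain, which is exactly the situation the projections of the next display handle.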
Below are the projections of these cases:

pgaul2pgau(+(¬φ1 ∧❜ φ2)) = pgaul2pgau(−(φ1 ∨❜ ¬φ2))
  = u(pgaul2pgau(+φ1); #3; pgaul2pgau(+φ2))    (8.20)
pgaul2pgau(+(φ1 ∧❜ ¬φ2)) = pgaul2pgau(−(¬φ1 ∨❜ φ2))
  = u(pgaul2pgau(−φ1); #3; pgaul2pgau(−φ2))    (8.21)
pgaul2pgau(+(¬φ1 ∨❜ φ2)) = pgaul2pgau(−(φ1 ∧❜ ¬φ2))
  = u(pgaul2pgau(−φ1); #2; pgaul2pgau(+φ2))    (8.22)
pgaul2pgau(+(φ1 ∨❜ ¬φ2)) = pgaul2pgau(−(¬φ1 ∧❜ φ2))
  = u(pgaul2pgau(+φ1); #2; pgaul2pgau(−φ2))    (8.23)

For more on the ∧❜ and ∨❜ connectives and the rules that apply to them, see the paper by Bergstra and Ponse on short-circuit logic [5] as well as Chapter 5.

8.2.5 Other instructions

In the previous subsections we have seen what the projections of the new logical connectives in PGA_ul to PGA_u look like. To complete the list of projections, we have to define the projections for the 'regular' instructions, as well as how concatenation and repetition are projected. This is trivial, since these 'regular' instructions are the same in PGA_ul and PGA_u. We get, for a ∈ A and PGA_ul-programs X, Y:

pgaul2pgau(a) = a
pgaul2pgau(+a) = +a
pgaul2pgau(−a) = −a
pgaul2pgau(!) = !
pgaul2pgau(#k) = #k
pgaul2pgau(X; Y) = pgaul2pgau(X); pgaul2pgau(Y)
pgaul2pgau(X^ω) = (pgaul2pgau(X))^ω
pgaul2pgau(u(X)) = u(pgaul2pgau(X))

8.3 Detecting side effects in PGA

In this section I will show how to detect side effects in a PGA_ul program using our treatment of side effects. In essence, all we have to do is translate the PGA_ul program to an equivalent DLA_f program, which can then be used to determine the side effects that occur.
To recap, we have the following operators in PGA_ul that have to be translated:

• Concatenation (X; Y)
• Repetition (X^ω)
• Unit instruction operator (u( ))
• Termination (!)
• Positive and negative tests (+φ, −φ)
• Only in tests: conjunction, disjunction and negation (φ1 ∧❜ φ2, φ1 ∨❜ φ2, ¬φ)

There are two notable differences between PGA_ul and DLA_f. The first is that in PGA_ul a program terminates unsuccessfully unless explicitly instructed otherwise by the termination instruction, whereas in DLA_f the default is a successful termination. This is an issue that has to be addressed to properly translate PGA_ul to DLA_f, and the best way to do this is to add the termination instruction to DLA_f. This illustrates the point I made in Section 6.5 in Chapter 6: the instructions I defined so far in DLA_f are by no means exhaustive and new instructions may have to be added to them. This can usually be done by simply defining the actual and expected evaluation of the new instruction.

The nature of the termination instruction requires us to do a little more than just that. After all, the termination instruction has a control element to it: just like, for instance, the test instruction, it has an influence on which instructions are to be evaluated next. To be exact, no instructions are to be evaluated next when a termination instruction is encountered during evaluation of a program. Because of this, we also have to slightly modify the concatenation operator in DLA_f when we introduce the termination instruction. We baptize the extension of DLA_f with the termination instruction DLTA_f (for Dynamic Logic with Termination and Assignment in Formulas).

The equation for the relational meaning of ! in a given model M and initial valuation g is straightforward.
Execution simply finishes with the same resulting valuation as the initial valuation:

g ⟦!⟧M h iff g = h    (DLTA15)

The updated rule for concatenation has to express that when a termination instruction is encountered, nothing should be evaluated afterwards. We use a case distinction for this on the first instruction ρ of a concatenation:

g ⟦ρ; dπ⟧M h iff
  g = h                                     if ρ = !
  ∃f s.th. g ⟦ρ⟧M f and f ⟦dπ⟧M h           otherwise    (DLTA12)

We only define the termination instruction in the setting of deterministic programs here. This is sufficient because this is the only setting we are currently interested in. DLTA12 replaces QDL12, but keeps the associative character of concatenation intact:

g ⟦(dπ0; dπ1); dπ2⟧M h = g ⟦dπ0; (dπ1; dπ2)⟧M h

The addition of the termination instruction allows us to easily express PGA_ul programs such as +a; !; b in DLTA_f. They would otherwise have caused a problem because there would have been no easy way to stop the evaluation of the program from continuing to evaluate b, which of course it is not supposed to do if a yields true.

The other notable difference between PGA_ul and DLA_f is that in the former, anything can be used as a basic instruction. That includes what we refer to in DLA_f as primitive formulas, such as x ≤ 2 or t1 = t2. In PGA the execution of an instruction always succeeds, even if the Boolean reply that it generates is false. To model this in DLTA_f, we have to add the primitive formulas ϕ to the set of instructions, as follows:

π ::= ϕ | ! | v := t | ?φ | π1; π2 | π1 ∪ π2 | π*

The relational meaning in M given initial valuation g of these new instructions is simply that they always succeed without modifying g:

g ⟦ϕ⟧M h iff g = h

With the termination instruction and the formulas-as-instructions defined, we can take a first look at the mapping from PGA_ul to DLTA_f.
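The effect of rule DLTA12 can be illustrated with a toy interpreter for deterministic instruction sequences. The instruction encoding and helper names below are assumptions of this illustration, not DLTA_f syntax.

```python
# Toy evaluation of a deterministic instruction sequence with the
# termination rule of DLTA12: once '!' is reached, the rest of the
# sequence is ignored and the current valuation is the result.
# Instructions: ('!',), ('asg', v, fn), ('test', pred). A failed test
# yields no resulting valuation (modelled as None).

def run(prog, g):
    g = dict(g)                   # do not mutate the initial valuation
    for ins in prog:
        if ins[0] == '!':
            return g              # DLTA12, first case: stop here
        if ins[0] == 'asg':
            g[ins[1]] = ins[2](g)
        else:                     # ('test', pred)
            if not ins[1](g):
                return None       # the test fails: no valuation h exists

    return g

# The program +a; !; b from the text, with a yielding true:
# b's assignment is never evaluated, as intended.
result = run([('test', lambda g: g['a']),
              ('!',),
              ('asg', 'b', lambda g: 1)],
             {'a': True, 'b': 0})
```

Here `result` is the initial valuation unchanged, showing that the instruction after ! never runs.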
For this we define a translation function ft: PGAul → DLTAf. We define this translation function for PGA programs in first or second canonical form only; this is sufficient because, as we have seen, every PGA program can be rewritten to first and second canonical form. First, we define the set A of basic instructions in PGA to be equal to the set of primitive formulas and single instructions, not being tests, in DLAf:

    A ::= ϕ | ρ−

where ρ− denotes the set of single instructions not being tests. In DLAf, this set only consists of the assignment instruction v := t. For finite sequences of instructions with length n = 1, a, b ∈ A and k ∈ N0, and φ a formula as meant in Section 8.2.3, ft is defined as follows:

    ft(a) = a; ?⊥
    ft(+φ) = ?φ; ?⊥
    ft(−φ) = ?¬φ; ?⊥
    ft(#k) = ?⊥
    ft(!) = !
    ft(u(a1; ...; ak)) = ft(a1; ...; ak)

Here we can clearly see what effect it has that PGAul has unsuccessful termination as its default. We have to explicitly introduce unsuccessful termination in DLTAf by adding ?⊥ (a test that always fails) at the end of every instruction. Furthermore, notice the unit instruction operator, which here has length n = 1 but is transparent when it has to be translated and thus becomes a sequence of instructions with length k that is potentially larger than 1. Finally, notice that there is no need to translate possibly compound formulas φ. This is because formulas have the exact same syntax in PGAul and DLTAf.

Next, we can show the definition of ft for finite sequences of instructions with length n = m + 1. For a, b1, ..., bm ∈ A, k ∈ N0 and φ a formula as meant in Section 8.2.3, we have

    ft(a; b1; ...; bm) = a; ft(b1; ...; bm)

    ft(+φ; b1; ...; bm) = (?φ; ft(b1)) ∪ (?¬φ; ?⊥)                          if m = 1
                          (?φ; ft(b1; ...; bm)) ∪ (?¬φ; ft(b2; ...; bm))    otherwise

    ft(−φ; b1; ...; bm) = (?φ; ?⊥) ∪ (?¬φ; ft(b1))                          if m = 1
                          (?φ; ft(b2; ...; bm)) ∪ (?¬φ; ft(b1; ...; bm))    otherwise

    ft(#0; b1; ...; bm) = ?⊥
    ft(#1; b1; ...; bm) = ft(b1; ...; bm)
    ft(#(2+k); b1; ...; bm) = ft(b_{k+2}; ...; bm)    if k + 2 ≤ m
                              ?⊥                       otherwise
    ft(!; b1; ...; bm) = !
    ft(u(a1; ...; ak); b1; ...; bm) = ft(a1; ...; ak; b1; ...; bm)

With the above translation rules, we can now translate finite PGAul-programs to their DLTAf-versions. A complete translation would require a translation of repetition as well. This, however, is quite a complex task. The reason for that becomes clear when considering examples like these:

    (a; b; +c)^ω
    (+a; +b; +c)^ω
    (a; +b; #5; c; +d)^ω

Because of the behavior of +c, we get into trouble here if we attempt to use the regular translation. The problem is that +c can possibly skip the first instruction of the next repetition loop, which is behavior that is hard to translate without explicitly introducing this variant of repetition (ω) in DLAf. The same problem arises with the jump instruction. At first glance, the best solution there is to introduce the jump instruction to DLAf as well. In that case the second canonical form of PGA-programs comes in handy, as it is designed to manipulate expressions with repetition such that no infinite jumps occur.

Since this case study is meant as a relatively clear example of how to use DLAf to model side effects in other systems such as PGA, it is beyond our interest here to present these rather complex translations of repetition.
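The defining equations for finite sequences translate directly into a recursive function. The following Python sketch is my own rendering, not the thesis's: instruction sequences are lists of tuples, '?F' stands for ?⊥, '~' for ¬, 'u' for ∪, and the three jump equations are merged into one clause.

```python
def ft(seq):
    """Translate a finite PGAul instruction sequence into a DLTAf term (string)."""
    head, rest = seq[0], list(seq[1:])
    kind = head[0]
    if kind == "unit":                       # u(a1;...;ak) is transparent
        return ft(list(head[1]) + rest)
    if kind == "term":                       # ft(!) = ft(!; b1;...;bm) = !
        return "!"
    if kind == "jump":                       # #k lands on b_k, if it exists
        k = head[1]
        if k == 0 or k > len(rest):
            return "?F"
        return ft(rest[k - 1:])
    if not rest:                             # length-1 base cases get "; ?F"
        body = {"basic": head[1], "pos": "?" + head[1], "neg": "?~" + head[1]}[kind]
        return body + "; ?F"
    if kind == "basic":                      # ft(a; b1..bm) = a; ft(b1..bm)
        return head[1] + "; " + ft(rest)
    phi = head[1]                            # positive or negative test
    cont = ft(rest)                          # branch that takes b1
    skip = "?F" if len(rest) == 1 else ft(rest[1:])   # branch that skips b1
    if kind == "pos":
        return "(?" + phi + "; " + cont + ") u (?~" + phi + "; " + skip + ")"
    return "(?" + phi + "; " + skip + ") u (?~" + phi + "; " + cont + ")"

# +(b ∧rb c); u(d; !); e; !  -- the shape of the running example of Section 8.4
X = [("pos", "b&c"), ("unit", [("basic", "d"), ("term",)]),
     ("basic", "e"), ("term",)]
print(ft(X))   # (?b&c; d; !) u (?~b&c; e; !)
```

Running the sketch on the program shape of the working example reproduces the result derived by hand below in the text.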
Instead, we restrict ourselves to finite PGAul-programs and leave the relational semantics for DLAf, which models side effects, as the basis for future work on PGA involving repetition.

8.4 A working example

In this section I will present a working example of the translation from finite PGAul-programs, which we write as PGAul^fin, to DLTAf. In addition, I will show that we get sufficiently similar results if we directly translate PGAul^fin to DLTAf, compared to first projecting PGAul^fin to PGAu^fin and then translating that to DLTAf. To be exact, we are going to show that the following diagram defines a program transformation E on finite deterministic programs in DLTAf:

    PGAul^fin ----ft----> DLTAf
        |                   |
        | pgaul2pgau        | E
        v                   v
    PGAu^fin  ----ft----> DLTAf

Here E is a reduction function on DLTAf that yields deterministic DLTAf-programs where occurrences of ∧rb and ∨rb have been eliminated.

For the working example, we return to a variant of our running example. Consider the PGAul^fin-program

    X = +([x := x + 1]⊤ ∧rb x = 2); u(w[x = 2]; !); w[x ≠ 2]; !

where w[...] suggests a write command. This is a program of the form

    +(b ∧rb c); u(d; !); e; !

with b = [x := x + 1]⊤, c = (x = 2), d = w[x = 2] and e = w[x ≠ 2]. Thus, we get the following translation, resolving one instruction at a time:

    ft(+(b ∧rb c); u(d; !); e; !)
      = (?(b ∧rb c); ft(u(d; !); e; !)) ∪ (?¬(b ∧rb c); ft(e; !))
      = (?(b ∧rb c); ft(d; !; e; !)) ∪ (?¬(b ∧rb c); ft(e; !))
      = (?(b ∧rb c); d; ft(!; e; !)) ∪ (?¬(b ∧rb c); ft(e; !))
      = (?(b ∧rb c); d; !) ∪ (?¬(b ∧rb c); ft(e; !))
      = (?(b ∧rb c); d; !) ∪ (?¬(b ∧rb c); e; ft(!))
      = (?(b ∧rb c); d; !) ∪ (?¬(b ∧rb c); e; !)
So there we have it: if we replace the shorthands with their original instructions or formulas again, we get the following DLTAf-program, which we baptize dπul:

    dπul = (?([x := x + 1]⊤ ∧rb (x = 2)); w[x = 2]; !)
         ∪ (?¬([x := x + 1]⊤ ∧rb (x = 2)); w[x ≠ 2]; !)

Clearly, given model M, g ⟦ft(X)⟧M h implies that h = g[x ↦ g(x) + 1]. So, if g(x) = 1, the instruction w[x = 2] is executed, after which the program terminates, while for g(x) ≠ 1, the instruction w[x ≠ 2] is executed, after which the program terminates.

Now let Y = pgaul2pgau(X), so

    Y = u(+([x := x + 1]⊤)); u(+(x = 2); #2); #2; u(w[x = 2]; !); w[x ≠ 2]; !

We compute

    ft(Y) = ft(+([x := x + 1]⊤); u(+(x = 2); #2); #2; u(w[x = 2]; !); w[x ≠ 2]; !)
          = (?([x := x + 1]⊤); ft(+(x = 2); #2; #2; u(w[x = 2]; !); w[x ≠ 2]; !))
          ∪ (?¬([x := x + 1]⊤); ft(#2; u(w[x = 2]; !); w[x ≠ 2]; !))
          = (?([x := x + 1]⊤); ( (?(x = 2); ft(#2; #2; u(w[x = 2]; !); w[x ≠ 2]; !))
                               ∪ (?¬(x = 2); ft(#2; u(w[x = 2]; !); w[x ≠ 2]; !)) ))
          ∪ (?¬([x := x + 1]⊤); w[x ≠ 2]; !)
          = (?([x := x + 1]⊤); ( (?(x = 2); w[x = 2]; !) ∪ (?¬(x = 2); w[x ≠ 2]; !) ))
          ∪ (?¬([x := x + 1]⊤); w[x ≠ 2]; !)

Note that for each model M and initial valuation g, M ⊭g ¬([x := x + 1]⊤), so

    g ⟦ft(Y)⟧M h  iff  g ⟦?([x := x + 1]⊤); ( (?(x = 2); w[x = 2]; !) ∪ (?¬(x = 2); w[x ≠ 2]; !) )⟧M h

Thus, writing dπu for the rightmost deterministic DLTAf-program, we find

    g ⟦ft(Y)⟧M h  iff  g ⟦dπu⟧M h

We now need to ask ourselves whether dπu is 'sufficiently similar' to the earlier derived dπul. Intuitively, we would say that in this working example, this indeed is the case.
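Informally, and outside the formal relational semantics, the observable behavior of the two programs can be compared by running both as ordinary functions on an integer x (the function names and the trace representation are mine). Both increment x exactly once and then branch on x = 2, so they end in the same valuation and perform the same write.

```python
def d_pi_ul(x):
    """dπ_ul: the test ?([x := x+1]T ∧rb (x = 2)) increments x, then branches."""
    trace = []
    x += 1                        # [x := x+1]T, evaluated once inside the test
    trace.append("w[x = 2]" if x == 2 else "w[x != 2]")
    return x, trace

def d_pi_u(x):
    """dπ_u: ?([x := x+1]T) lifted out of the union, then ?(x = 2) branches."""
    trace = []
    x += 1                        # the lifted test always succeeds
    trace.append("w[x = 2]" if x == 2 else "w[x != 2]")
    return x, trace

# Both programs agree on final valuation and on the write they perform
for x0 in range(-3, 4):
    assert d_pi_ul(x0) == d_pi_u(x0)
print(d_pi_ul(1))   # (2, ['w[x = 2]'])
```

This is of course only a behavioral spot check, not a proof of the equivalence stated formally in Proposition 11 below.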
After all, [x := x + 1]⊤ always yields true, so the truth of [x := x + 1]⊤ ∧rb (x = 2) depends solely on the Boolean reply that x = 2 yields. It therefore does not matter if we lift ?[x := x + 1]⊤ out of the union, which is essentially what we have done in the case of dπu.

We can call two programs 'sufficiently similar' if they evaluate the same single instructions, not being tests, or primitive formulas in the same order. We can formalize that notion with the following proposition:

Proposition 11. Let X be a program in PGAul^fin, let dπul = ft(X) and let dπu = ft(pgaul2pgau(X)). Let model M be given and let g be an initial valuation such that there exists a valuation h such that g ⟦dπul⟧M h. Then g ⟦dπul⟧M h iff g ⟦dπu⟧M h, and the same single instructions, not being tests, and primitive formulas are evaluated in the same order during evaluation of dπul and dπu given g.

As said, we do not consider repetition as a program constructor in our case study. Furthermore, our model of side effects is limited to terminating programs, as opposed to programs that can either end in termination or in divergence. A proof of this proposition might be found, but for these reasons is perhaps not very much to the point. In Chapter 9 (Conclusions) we return to this issue.

It is, however, worthwhile to check the proposition for our working example. Recall that we have the following dπul and dπu:

    dπul = (?([x := x + 1]⊤ ∧rb (x = 2)); w[x = 2]; !)
         ∪ (?¬([x := x + 1]⊤ ∧rb (x = 2)); w[x ≠ 2]; !)

    dπu = ?([x := x + 1]⊤); ( (?(x = 2); w[x = 2]; !) ∪ (?¬(x = 2); w[x ≠ 2]; !) )

It is not hard to check in this case that for any model M and initial valuation g such that dπul can be evaluated, g ⟦dπul⟧M h iff g ⟦dπu⟧M h.
It is also easy to see that the same single instructions, not being tests, and primitive formulas are evaluated (in the same order). After all, dπul first evaluates the primitive formulas [x := x + 1]⊤ and x = 2 and uses those to determine the reply of [x := x + 1]⊤ ∧rb (x = 2). Depending on the reply, it then either evaluates the single instructions w[x = 2] and !, or w[x ≠ 2] and !. Almost the same goes for dπu. It first evaluates the primitive formula [x := x + 1]⊤ and, depending on the reply (which happens to be always true), either stops evaluation (which therefore is never the case) or continues with the evaluation of the primitive formula x = 2. Depending on the reply, it then, like dπul, either evaluates the single instructions w[x = 2] and !, or w[x ≠ 2] and !. So at least in our working example, Proposition 11 holds.

In a similar way, we can analyze the PGAul^fin-program

    X = +(¬[x := x + 1]⊤ ∨rb x = 2); u(w[x = 2]; !); w[x ≠ 2]; !

We can compute dπul = ft(X):

    ft(X) = ft(+(¬[x := x + 1]⊤ ∨rb x = 2); u(w[x = 2]; !); w[x ≠ 2]; !)
          = (?(¬[x := x + 1]⊤ ∨rb x = 2); ft(u(w[x = 2]; !); w[x ≠ 2]; !))
          ∪ (?¬(¬[x := x + 1]⊤ ∨rb x = 2); ft(w[x ≠ 2]; !))
          = (?(¬[x := x + 1]⊤ ∨rb x = 2); ft(w[x = 2]; !; w[x ≠ 2]; !))
          ∪ (?¬(¬[x := x + 1]⊤ ∨rb x = 2); w[x ≠ 2]; ft(!))
          = (?(¬[x := x + 1]⊤ ∨rb x = 2); w[x = 2]; ft(!; w[x ≠ 2]; !))
          ∪ (?¬(¬[x := x + 1]⊤ ∨rb x = 2); w[x ≠ 2]; !)
          = (?(¬[x := x + 1]⊤ ∨rb x = 2); w[x = 2]; !)
          ∪ (?¬(¬[x := x + 1]⊤ ∨rb x = 2); w[x ≠ 2]; !)

We once again define Y = pgaul2pgau(X), so

    Y = u(−([x := x + 1]⊤)); #2; +(x = 2); u(w[x = 2]; !); w[x ≠ 2]; !

We compute

    ft(Y) = ft(−([x := x + 1]⊤); #2; +(x = 2); u(w[x = 2]; !); w[x ≠ 2]; !)
          = (?(¬([x := x + 1]⊤)); ft(#2; +(x = 2); u(w[x = 2]; !); w[x ≠ 2]; !))
          ∪ (?¬(¬([x := x + 1]⊤)); ft(+(x = 2); u(w[x = 2]; !); w[x ≠ 2]; !))
          = (?(¬([x := x + 1]⊤)); ft(u(w[x = 2]; !); w[x ≠ 2]; !))
          ∪ (?¬(¬([x := x + 1]⊤)); ( (?(x = 2); ft(w[x = 2]; !; w[x ≠ 2]; !))
                                   ∪ (?¬(x = 2); ft(w[x ≠ 2]; !)) ))
          = (?(¬([x := x + 1]⊤)); w[x = 2]; !)
          ∪ (?¬(¬([x := x + 1]⊤)); ( (?(x = 2); w[x = 2]; !) ∪ (?¬(x = 2); w[x ≠ 2]; !) ))

We can directly eliminate one situation: ¬([x := x + 1]⊤) is false for any initial valuation g. Thus, writing dπu for the second part of the topmost union:

    dπu = ?¬(¬([x := x + 1]⊤)); ( (?(x = 2); w[x = 2]; !) ∪ (?¬(x = 2); w[x ≠ 2]; !) )

we get, given model M, for any initial valuation g,

    g ⟦ft(Y)⟧M h  iff  g ⟦dπu⟧M h

We can check in similar fashion as before that Proposition 11 holds (for any initial valuation g). We can conclude that at least for these working examples, the mentioned proposition is valid. As said, we leave the proof for future work.

This case study started from the abstract approach to attempt decomposition of complex steering fragments in instruction sequences in PGAul^fin as advocated in [5]. We showed that we can apply this approach to a rather concrete instance in imperative programming (namely the set A of basic instructions given in this chapter) and we obtain some interesting results. In the first place, it inspired our definition of DLTAf and the analysis and classification of side effects as discussed in this thesis. Secondly, by the preservation property formulated in Proposition 11, it justifies our proposal for the projection function pgaul2pgau.
It is an interesting result that we are able to show that the projection pgaul2pgau, which does not have anything to do with valuations, preserves the relational semantics (and therefore the side effects) of a program via the diagram at the beginning of this section, which is based on a very natural translation.

9 Conclusions and future work

In this thesis I have given a formal definition of side effects. I have done so by modifying a system for modelling program instructions and program states, Quantified Dynamic Logic, to a system called DLAf (Dynamic Logic with Assignments as Formulas), which in contrast to QDL allows assignments in formulas and makes use of short-circuit evaluation. I have shown the underlying logic in those formulas to be a variant of short-circuit logic called repetition-proof short-circuit logic. Using DLAf I have defined the actual and the expected evaluation of a single instruction. The side effects are then defined to be the difference between the two. I have given rules for composing those side effects in single instructions, thus scaling up our definition of side effects to a definition of side effects in deterministic DLAf-programs. Using this definition I have given a classification of side effects, introducing as most important class that of marginal side effects. Finally, I have shown how to use our system for calculating the side effects in a real system such as PGA.

Our definition gives us an intuitive way to calculate the side effects in a program. Because of the definition in terms of actual and expected evaluation, one can easily adapt the system to one's own needs without having to change the definition of side effects. All one has to do is update the expected evaluation of a single instruction, or, if an entirely new single instruction is added to the system, define the actual and expected evaluation for it.
In Chapter 5 we have seen how a sound axiomatization of the formulas in DLAf can be given using the signature {⊤, ⊥, ⊳ ⊲}. I have not used this signature in the first place because I wanted to stick to the conventions in dynamic logic. It is noteworthy, however, that this alternative and possibly more elegant signature exists, especially because an axiomatization can be given for it.

The definition of side effects given here can point the way to a lot more research. I can see future work being done in the following areas:

• I do not want to claim that the instructions I have defined in DLAf are exhaustive. Finding out what possible other instructions might have to be added to DLAf can be an interesting project.

• Another possible subject for future work is the issue of 'negative' side effects I briefly touched upon in Section 6.5. It is an open question whether or not we should allow situations in which 'negative' side effects occur and, if so, how we should handle them.

• In this thesis, we have mostly been looking at imperative programs. It would be interesting to see if our definition can be extended to, for example, functional programs. Perhaps the work done by Van Eijck in [10], in which he defines functional programs making use of program states, can be used for this.

• Another interesting question, which has been raised before in Chapters 2 and 6, is that of side effects in non-deterministic programs. Whether it is reasonable to talk about side effects there warrants further research. One can imagine that if the sets of side effects in all possibilities of a non-deterministic program are the same, the side effects of the whole can be defined as exactly that set. What needs to be done if that is not the case, or whether we should even want to define side effects of such programs, are open questions.
• In Chapter 7, the concept of marginal side effects was introduced and the suggestion was made that this notion can be linked to claims about how well-written a program is. I have not pursued such claims, but can imagine further research being done in that area.

• To develop a direct modelling of side effects for the variant of PGA discussed in Chapter 8, one can introduce valuation functions as program states and define a relational meaning that separates termination from deadlock/inaction, say

    g [⟦X⟧] h

The idea of this would be to evaluate X as far as possible, which is a reasonable requirement if X is in second canonical form. In addition, we could define a termination predicate, e.g. Term(X, g), which states that X terminates for initial valuation g. Using this we could define a "behavioral equivalence" on programs X and Y as follows:

    ∀g: g [⟦X⟧] h iff g [⟦Y⟧] h, and Term(X, g) iff Term(Y, g)

Using this, Proposition 11 can probably be proven, especially considering the property of DLAf, proven in Chapter 4, that any program can be rewritten into a form in which its steering fragments only contain primitive formulas and their negations.

• Also mentioned in Chapter 8 is the possibility to introduce extra logical operators, namely Logical Left And (LLAnd) and its dual Logical Left Or (LLOr). Introducing these in DLAf is fairly straightforward: one only needs to define their truth in M:

    M |=g φ1 | φ2  iff  M |=g φ1 ∨rb φ2    (DLA7c)
    M |=g φ1 & φ2  iff  M |=g φ1 ∧rb φ2    (DLA7d)

as well as update the program extraction function, for ○ ∈ {|, &}:

    Π^M_g(φ1 ○ φ2) = Π^M_g(φ1); Π^M_h(φ2)    if g ⟦Π^M_g(φ1)⟧M h

To introduce the same operators in PGAul, projection functions in the same style as the ones given in Chapter 8 for SCLAnd and SCLOr need to be defined.
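Operationally, the logical left operators differ from their short-circuit counterparts in when the second operand is evaluated: by the updated program extraction function, the programs of both operands are executed, left first. A small Python sketch (the thunk representation and all names are mine) makes the contrast visible through an evaluation log.

```python
def llAnd(p, q):
    """Logical-left conjunction: evaluate both operands, left first."""
    a = p()
    b = q()          # evaluated regardless of a
    return a and b

def scAnd(p, q):
    """Short-circuit conjunction: skip the right operand when the left is false."""
    a = p()
    return a and q()

log = []
def obs(name, val):
    """An operand whose evaluation is observable as a side effect on the log."""
    return lambda: (log.append(name), val)[1]

log.clear(); llAnd(obs("p", False), obs("q", True))
assert log == ["p", "q"]    # LLAnd evaluates q even though p is false
log.clear(); scAnd(obs("p", False), obs("q", True))
assert log == ["p"]         # short-circuit conjunction skips q
```

The two operators agree on the Boolean reply, as DLA7d states, but differ in which side effects occur, which is exactly why they are interesting in this setting.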
• Another possible matter for further study is whether side effects can be used in natural language. In the Introduction, we have already seen that they can occur in the pregnant wife example, where your wife told you to do the grocery shopping if she did not call you, which she later did, but to tell you that she was pregnant. Possibly there is a role for side effects when explaining misunderstandings. There is no doubt that side effects can be the cause of misunderstandings. The pregnant wife example illustrates that: you could decide to do the grocery shopping to be on the safe side after her call, claiming her call indicated you might have to shop, only to run into your wife at the store also shopping (who, of course, did not want to convey the message that you should shop at all).

In the Dynamic Epistemic Logic system mentioned in [12], the knowledge of two communicating agents is captured by an epistemic state, one for each agent. The agents also have an epistemic state for what they think is the (relevant) knowledge of the other agent with whom they are in conversation. A misunderstanding has occurred when an agent updates his own epistemic state in a different way than the other agent expects him to. There are a lot of ways in which this can happen, but relevant for us is that one of those ways is when a side effect from an utterance occurs of which one of the agents is not aware. If one of the agents is aware of the side effect, and also of the fact that the other agent might not be aware of it, it may be recommended to point out this side effect to the other agent. In our example of the pregnant wife calling, this would mean that you would have to tell your wife on the phone that the fact that she called leaves you in doubt about the grocery shopping. Naturally, though, we recommend a more enthusiastic response to the news that she is pregnant first.
Bibliography

[1] J.A. Bergstra. Steering Fragments of Instruction Sequences. arXiv:1010.2850, October 2010.
[2] J.A. Bergstra, J. Heering and P. Klint. Module algebra. In: Journal of the ACM, Volume 37, Number 2, pp. 335-372, 1990.
[3] J.A. Bergstra and M.E. Loots. Program algebra for sequential code. In: Journal of Logic and Algebraic Programming, Volume 51, pp. 125-156, 2002.
[4] J.A. Bergstra and A. Ponse. Proposition algebra. In: ACM Transactions on Computational Logic, Volume 12, Number 3, Article 21, 2011.
[5] J.A. Bergstra and A. Ponse. Short-Circuit Logic. arXiv:1010.3674, 2011.
[6] H. Böhm. Side effects and aliasing can have simple axiomatic descriptions. In: ACM Transactions on Programming Languages and Systems, Volume 7, Number 4, pp. 637-655, 1985.
[7] P.E. Black and P.J. Windley. Inference rules for programming languages with side effects in expressions. In: J. von Wright, J. Grundy and J. Harrison (eds.), Theorem Proving in Higher Order Logics: 9th International Conference, pp. 51-60. Springer-Verlag, Berlin, Germany, 1996.
[8] P.E. Black and P.J. Windley. Formal Verification of Secure Programs in the Presence of Side Effects. http://phil.windley.org/papers/hicss31.ps, 1998.
[9] P. Dekker. A Guide to Dynamic Semantics. http://www.illc.uva.nl/Publications/ResearchReports/PP-2008-42.text.pdf, 2008.
[10] J. van Eijck. Purely Functional Algorithm Specification. http://homepages.cwi.nl/~jve/pfas/, 2011.
[11] J. van Eijck and M. Stokhof. The Gamut of Dynamic Logics. In: D. Gabbay and J. Woods (eds.), Handbook of the History of Logic, Volume 7, pp. 499-600. Elsevier, 2006.
[12] J. van Eijck and A. Visser. Dynamic Semantics. In: E. Zalta (ed.), Stanford Encyclopedia of Philosophy, Fall 2010 Edition, 2010.
[13] R. Goldblatt.
Axiomatising the Logic of Computer Programming. Springer-Verlag, Berlin and New York, 1982.
[14] D. Harel. First-Order Dynamic Logic. Number 68 of Lecture Notes in Computer Science. Springer, Berlin, 1979.
[15] D. Harel. Dynamic Logic. In: D. Gabbay and F. Günthner (eds.), Handbook of Philosophical Logic, Volume II, pp. 497-604, 1984.
[16] C.A.R. Hoare. A couple of novelties in the propositional calculus. In: Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 31(2), pp. 173-178, 1985.
[17] M. Norrish. An abstract dynamic semantics for C. http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-421.pdf, Computer Laboratory, University of Cambridge, Technical Report, 1997.
[18] A. Ponse. Program algebra with unit instruction operators. In: Journal of Logic and Algebraic Programming, Volume 51, pp. 157-174, 2002.
[19] V. Pratt. Semantical considerations on Floyd-Hoare logic. In: P. Abrahams, R. Lipton and S. Bourne (eds.), Proceedings 17th IEEE Symposium on Foundations of Computer Science, pp. 109-121. IEEE Computer Science Society Press, Long Beach, CA, 1976.
[20] V. Pratt. Application of modal logic to programming. Studia Logica, Volume 39, pp. 257-274, 1980.
[21] A. van Wijngaarden et al. Revised report on the algorithmic language Algol 68. In: Acta Informatica, Volume 5, Numbers 1-3, pp. 1-236, 1975.