Probabilistic Deduction with Conditional Constraints over Basic Events


Authors: T. Lukasiewicz

Journal of Artificial Intelligence Research 10 (1999) 199-241. Submitted 9/98; published 4/99.

Thomas Lukasiewicz (lukasiewicz@informatik.uni-giessen.de)
Institut für Informatik, Universität Gießen, Arndtstraße 2, D-35392 Gießen, Germany

Abstract

We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees.

1. Introduction

Dealing with uncertain knowledge plays an important role in knowledge representation and reasoning. There are many different formalisms and methodologies for handling uncertainty. Most of them are directly or indirectly based on probability theory. In this paper, we focus on probabilistic deduction with conditional constraints over basic events (that is, interval restrictions for conditional probabilities of elementary events). The considered probabilistic deduction problems consist of a probabilistic knowledge base and a probabilistic query. We give a classical example.
As a probabilistic knowledge base, we may take the probabilistic knowledge that all ostriches are birds, that the probability of Tweety being a bird is greater than 0.90, and that the probability of Tweety being an ostrich provided she is a bird is greater than 0.80. As a probabilistic query, we may now wonder about the entailed greatest lower and least upper bound for the probability that Tweety is an ostrich. The solution to this probabilistic deduction problem is 0.72 for the entailed greatest lower bound and 1.00 for the entailed least upper bound.

More generally, probabilistic deduction with conditional constraints over propositional events can be done in a global approach by linear programming or in a local approach by the iterative application of inference rules. Note that it is immediately NP-hard, since it generalizes the satisfiability problem for classical propositional logic (see Section 2.2).

Research on the global approach spread in particular after the important work on probabilistic logic by Nilsson (1986) (see also the work by Paaß, 1988). The main focus was on analyzing the computational complexity of satisfiability and entailment in probabilistic logic and on developing efficient linear programming algorithms for these problems.

© 1999 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.

Georgakopoulos et al. (1988) show that the satisfiability problem in probabilistic logic is NP-complete and propose to apply column generation techniques for its processing. This approach was further developed by Kavvadias and Papadimitriou (1990), Jaumard et al. (1991), Andersen and Hooker (1994), and Hansen et al. (1995). In particular, Jaumard et al. (1991) report promising experimental results on the efficiency in special cases of probabilistic satisfiability and entailment.
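The Tweety bounds can be checked directly. With only the basic events bird and ostrich there are four possible worlds, and a probabilistic interpretation is just a weight per world. The sketch below is our own illustration (the helper names `pr` and `satisfies` are not from the paper); it verifies two explicit models that witness the entailed bounds 0.72 and 1.00:

```python
def pr(model, event):
    # probability of a propositional event: sum over worlds where it holds
    return sum(p for world, p in model.items() if event(world))

def satisfies(model, h, g, lo, hi, eps=1e-9):
    # model |= (H|G)[lo,hi]  iff  lo*Pr(G) <= Pr(GH) <= hi*Pr(G)
    pg = pr(model, g)
    pgh = pr(model, lambda w: g(w) and h(w))
    return lo * pg - eps <= pgh <= hi * pg + eps

# worlds are pairs (bird, ostrich)
bird = lambda w: w[0]
ostrich = lambda w: w[1]
top = lambda w: True

kb = [(bird, ostrich, 1.0, 1.0),   # all ostriches are birds
      (bird, top, 0.9, 1.0),       # Pr(bird) >= 0.9
      (ostrich, bird, 0.8, 1.0)]   # Pr(ostrich | bird) >= 0.8

# witness for the entailed greatest lower bound 0.72
low = {(True, True): 0.72, (True, False): 0.18,
       (False, False): 0.10, (False, True): 0.0}
# witness for the entailed least upper bound 1.00
high = {(True, True): 1.0, (True, False): 0.0,
        (False, False): 0.0, (False, True): 0.0}

for model in (low, high):
    assert abs(sum(model.values()) - 1.0) < 1e-9
    assert all(satisfies(model, h, g, lo, hi) for h, g, lo, hi in kb)

print(pr(low, ostrich), pr(high, ostrich))   # prints: 0.72 1.0
```

Both distributions satisfy all three constraints, so no lower bound above 0.72 and no upper bound below 1.00 is entailed; that these bounds are also entailed follows from Pr(ostrich) = Pr(ostrich | bird) · Pr(bird) for any model.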
Moreover, Kavvadias and Papadimitriou (1990) and Jaumard et al. (1991) identify special cases of probabilistic satisfiability that can be solved in polynomial time. Other work on the global approach concentrates on reducing the number of linear constraints (Luo et al., 1996) and the number of variables (Lukasiewicz, 1997). Finally, Fagin et al. (1992) present a sound and complete axiom system for reasoning about probabilities that are expressed by linear inequalities over propositional events. They show that the satisfiability problem in this quite expressive framework is still NP-complete.

In early work, Dubois and Prade (1988) use inference rules to model default reasoning with imprecise numerical and fuzzy quantifiers. For this reason, subsequent research on inference rules especially aims at analyzing patterns of human commonsense reasoning (Dubois et al., 1990, 1993; Amarger et al., 1991; Thöne, 1994; Thöne et al., 1995). Frisch and Haddawy (1994) discuss the use of inference rules for deduction in probabilistic logic. Recent work on inference rules concentrates on integrating probabilistic knowledge into description logics (Heinsohn, 1994) and on analyzing the interplay between taxonomic and probabilistic deduction (Lukasiewicz, 1998a, 1999a).

We now summarize the main characteristics of the global and the local approach. The global approach can be performed within quite rich probabilistic languages (Fagin et al., 1992). Crucially, probabilistic deduction by linear programming is globally complete (that is, it really provides the requested tightest bounds entailed by the whole probabilistic knowledge base). However, a main drawback of the global approach is that it generally does not provide useful explanatory information on the deduction process.
Finally, results on the special-case tractability of global approaches are driven by the technical possibilities of linear programming techniques and not by the needs of artificial intelligence applications. Hence, they do not seem to be very useful in the artificial intelligence context.

A main advantage of the local approach is that the deduced results can be explained in a natural way by the sequence of applied inference rules (Amarger et al., 1991; Frisch & Haddawy, 1994). However, the iterative application of inference rules is generally restricted to quite narrow probabilistic languages. Moreover, it is very rarely, and only within very restricted languages, globally complete (Frisch and Haddawy, 1994, give an example of globally complete local probabilistic deduction in a very restricted framework). Finally, as far as the computational complexity is concerned, there are very few experimental and theoretical results on the special-case tractability of local approaches.

The main motivating idea of this paper is to elaborate efficient local techniques for globally complete probabilistic deduction. Inspired by previous work on inference rules, we focus our research on the language of conditional constraints over basic events: Dubois and Prade (1988) study the chaining of two bidirectional conditional constraints over basic events (the "quantified syllogism rule") and some of its special cases. Dubois et al. (1990) additionally discuss probabilistic deductions about conjunctions of basic events. Furthermore, they describe the open problem of probabilistic deduction along a chain of more than two bidirectional conditional constraints over basic events. In later work, Dubois et al.
(1993) use a qualitative version of the "quantified syllogism rule" in an approach to reasoning with linguistic quantifiers. Amarger et al. (1991) propose to apply the "quantified syllogism rule" and the "generalized Bayes' rule" to sets of bidirectional conditional constraints over basic events. They report promising experimental results on the global completeness and the computational complexity of the presented deduction technique. However, this deduction technique is generally not globally complete. Thöne (1994) examines trees of bidirectional conditional constraints over basic events. He presents a linear-time deduction technique that is based on a system of inference rules and that computes certain logically entailed greatest lower bounds (in the technical notions of this paper, which will be defined below, tight lower answers to conclusion-restricted queries are computed).

As a first contribution of this paper, we show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. It is surprising that this quite restricted class of probabilistic deduction problems is still computationally so difficult. Hence, it is unlikely that there is an algorithm that efficiently solves all probabilistic deduction problems with conditional constraints over basic events. However, we can still hope that there are efficient special-case, average-case, or approximation algorithms.

In this paper, we then elaborate efficient special-case algorithms. In detail, we concentrate on probabilistic deduction in conditional constraint trees. It is an interesting subclass of all probabilistic deduction problems with conditional constraints over basic events.
Conditional constraint trees are undirected trees with basic events as nodes and with bidirectional conditional constraints over basic events as edges between the nodes (that is, deduction in conditional constraint trees is a generalization of deduction along a chain of bidirectional conditional constraints over basic events). Like Bayesian networks, conditional constraint trees represent a well-structured probabilistic knowledge base. Differently from Bayesian networks, they do not encode any probabilistic independencies.

As a main contribution of this paper, we have the following results. For conditional constraint trees with point probabilities, we present functions for deducing greatest lower and least upper bounds in linear time in the size of the conditional constraint trees. Moreover, for conditional constraint trees with interval probabilities, we show that greatest lower bounds can be deduced in the same way, in linear time in the size of the conditional constraint trees. However, computing least upper bounds turns out to be computationally more difficult. It can be done by solving special nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs. The resulting linear programs have a number of variables and inequalities linear and polynomial, respectively, in the size of the conditional constraint trees. Thus, our way of deducing least upper bounds still runs in polynomial time in the size of the conditional constraint trees, since linear programming runs in polynomial time in the size of the linear programs.

Another important contribution of this paper is related to the question whether to perform probabilistic deduction with conditional constraints by the iterative application of inference rules or by linear programming.
On the one hand, the idea of inference rules carries us to very efficient techniques for globally complete probabilistic deduction in conditional constraint trees. In particular, the considered deduction problems generalize patterns of commonsense reasoning. However, on the other hand, the corresponding proofs of soundness and global completeness are technically quite complex. Hence, it seems unlikely that the results of this work can be extended to significantly more general probabilistic deduction problems. Note that a companion paper (Lukasiewicz, 1998a, 1999a) reports similar limits of the local approach in probabilistic deduction under taxonomic knowledge.

The rest of this paper is organized as follows. In Section 2, we formulate the probabilistic deduction problems considered in this work. Section 3 focuses on the probabilistic satisfiability of conditional constraint trees. Section 4 deals with globally complete probabilistic deduction in exact and general conditional constraint trees. In Section 5, we give a comparison with Bayesian networks. Section 6 summarizes the main results of this work.

2. Formulating the Probabilistic Deduction Problem

In this section, we introduce the syntactic and semantic notions related to probabilistic knowledge in general and to conditional constraint trees in particular.

2.1 Probabilistic Knowledge

Before focusing on the details of conditional constraint trees, we give a general introduction to the kind of probabilistic knowledge considered in this work. We deal with conditional constraints over propositional events. They represent interval restrictions for conditional probabilities of propositional events. Note that the formal background introduced in this section is commonly accepted in the literature (see especially the work by Frisch and Haddawy, 1994, for other work in the same spirit).
We assume a nonempty and finite set of basic events B = {B1, B2, ..., Bn}. The set of conjunctive events C_B is the closure of B under the Boolean operation ∧. We abbreviate the conjunctive event C ∧ D by CD. The set of propositional events G_B is the closure of B under the Boolean operations ∧ and ¬. We abbreviate the propositional events G ∧ H and ¬G by GH and G̅, respectively. The false event B1 ∧ B̅1 and the true event ¬(B1 ∧ B̅1) are abbreviated by ⊥ and ⊤, respectively.

Conditional constraints are expressions of the form (H|G)[u1,u2] with real numbers u1, u2 ∈ [0,1] and propositional events G and H. In the conditional constraint (H|G)[u1,u2], we call G the premise and H the conclusion.

To define probabilistic interpretations of propositional events and of conditional constraints, we introduce atomic events and the binary relation ⇒ between atomic and propositional events. The set of atomic events A_B is defined by A_B = {E1 E2 ··· En | Ei = Bi or Ei = B̅i for all i ∈ [1:n]}. Note that each atomic event can be interpreted as a possible world (which corresponds to a mapping from B to {true, false}). For all atomic events A and all propositional events G, let A ⇒ G iff A G̅ is a propositional contradiction.

A probabilistic interpretation Pr is a mapping from A_B to [0,1] such that all Pr(A) with A ∈ A_B sum up to 1. Pr is extended in a well-defined way to propositional events G by: Pr(G) is the sum of all Pr(A) with A ∈ A_B and A ⇒ G. Pr is extended to conditional constraints by: Pr |= (H|G)[u1,u2] iff u1 · Pr(G) ≤ Pr(GH) ≤ u2 · Pr(G). Note that conditional constraints characterize conditional probabilities of events, rather than probabilities of conditional events (Coletti, 1994; Gilio & Scozzafava, 1994).
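The semantics just introduced is easy to make concrete. The sketch below is our own illustration (encoding atomic events as truth assignments and the names `pr` and `models` are our choices, not the paper's notation): it builds A_B for three basic events, extends Pr from atomic to propositional events, and checks satisfaction of a conditional constraint. It also shows the vacuity property discussed next, namely that a premise of probability zero satisfies any bounds:

```python
from itertools import product

basic = ["B1", "B2", "B3"]

# the set A_B of atomic events: one truth value per basic event,
# i.e. each atomic event is a possible world over B
atomic = [dict(zip(basic, bits))
          for bits in product([False, True], repeat=len(basic))]

def implies(world, event):
    # A => G iff G is true in the possible world A
    return event(world)

def pr(interp, event):
    # extend Pr to propositional events:
    # Pr(G) = sum of Pr(A) over all atomic events A with A => G
    return sum(interp[i] for i, world in enumerate(atomic)
               if implies(world, event))

def models(interp, h, g, lo, hi, eps=1e-9):
    # Pr |= (H|G)[lo,hi] iff lo*Pr(G) <= Pr(GH) <= hi*Pr(G)
    pg = pr(interp, g)
    pgh = pr(interp, lambda w: g(w) and h(w))
    return lo * pg - eps <= pgh <= hi * pg + eps

# a probabilistic interpretation: nonnegative weights summing to 1;
# here, all mass sits on the all-negated world
interp = [0.0] * len(atomic)
interp[0] = 1.0

g = lambda w: w["B1"]     # a premise with Pr(G) = 0 under interp
h = lambda w: w["B2"]
assert pr(interp, g) == 0.0
# Pr(G) = 0 entails Pr |= (H|G)[u1,u2] for arbitrary bounds
assert models(interp, h, g, 0.3, 0.4)
```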
Note also that Pr(G) = 0 always entails Pr |= (H|G)[u1,u2]. This semantics of conditional probability statements is also assumed by Halpern (1990) and by Frisch and Haddawy (1994).

The notions of models, satisfiability, and logical consequence for conditional constraints are defined in the classical way. A probabilistic interpretation Pr is a model of a conditional constraint (H|G)[u1,u2] iff Pr |= (H|G)[u1,u2]. Pr is a model of a set of conditional constraints KB, denoted Pr |= KB, iff Pr is a model of all (H|G)[u1,u2] ∈ KB. KB is satisfiable iff a model of KB exists. (H|G)[u1,u2] is a logical consequence of KB, denoted KB |= (H|G)[u1,u2], iff each model of KB is also a model of (H|G)[u1,u2].

For a conditional constraint (H|G)[u1,u2] and a set of conditional constraints KB, let U denote the set of all real numbers u ∈ [0,1] for which there exists a model Pr of KB with u · Pr(G) = Pr(GH) and Pr(G) > 0. Now, we easily verify that (H|G)[u1,u2] is a logical consequence of KB iff u1 ≤ inf U and u2 ≥ sup U.

This observation yields a canonical notion of tightness for logical consequences of conditional constraints. The conditional constraint (H|G)[u1,u2] is a tight logical consequence of KB, denoted KB |=_tight (H|G)[u1,u2], iff u1 = inf U and u2 = sup U. The set U is a closed interval in the real line (Frisch & Haddawy, 1994). Note that for U = ∅, we canonically define inf U = max [0,1] = 1 and sup U = min [0,1] = 0. Thus, U = ∅ iff KB |= (G|⊤)[0,0] iff KB |=_tight (H|G)[1,0] iff KB |= (H|G)[u1,u2] for all u1, u2 ∈ [0,1].
Based on the just introduced notion of tight logical consequence, probabilistic deduction problems and their solutions are more formally specified as follows. A probabilistic knowledge base (B, KB) consists of a set of basic events B and a set of conditional constraints KB over G_B with u1 ≤ u2 for all (H|G)[u1,u2] ∈ KB. A probabilistic query to a probabilistic knowledge base (B, KB) is an expression of the form ∃(F|E)[x1,x2] with E, F ∈ G_B and two different variables x1 and x2. Its tight answer is the substitution θ = {x1/u1, x2/u2} with u1, u2 ∈ [0,1] such that KB |=_tight (F|E)[u1,u2] (we call θ1 = {x1/u1} the tight lower answer and θ2 = {x2/u2} the tight upper answer). A correct answer is a substitution θ = {x1/u1, x2/u2} with u1, u2 ∈ [0,1] such that KB |= (F|E)[u1,u2].

Finally, we define the notions of soundness and of completeness related to inference rules and to techniques for probabilistic deduction. An inference rule KB ⊢ (H|G)[u1,u2] is sound iff KB |= (H|G)[u1,u2], where (H|G)[u1,u2] is a conditional constraint and KB is a set of conditional constraints. It is sound and locally complete iff KB |=_tight (H|G)[u1,u2]. A technique for probabilistic deduction is sound for a set of probabilistic queries Q iff it computes a correct answer to any given query from Q. It is sound and globally complete for Q iff it computes the tight answer to any given query from Q.
2.2 Computational Complexity

In the framework of conditional constraints over propositional events, the optimization problem of computing the tight answer to a probabilistic query is immediately NP-hard, since it generalizes the satisfiability problem for classical propositional logic (the NP-complete problem of deciding whether a propositional formula in conjunctive normal form is satisfiable; see especially the survey by Garey and Johnson, 1979). Surprisingly, the optimization problem of computing the tight answer to a probabilistic query remains NP-hard even if we just consider conditional constraints over basic events:

Theorem 2.1 The optimization problem of computing the tight answer to a probabilistic query over basic events that is directed to a probabilistic knowledge base over basic events is NP-hard.

Proof. The NP-complete decision problem of graph 3-colorability (Garey & Johnson, 1979) can be polynomially reduced to the optimization problem of computing the tight answer to a probabilistic query over basic events that is directed to a probabilistic knowledge base over basic events. The proof follows similar lines to the proof of NP-hardness of 2PSAT given by Georgakopoulos et al. (1988).

Let (V, E) be a finite undirected graph. We construct a probabilistic knowledge base (B, KB) as follows. We initialize (B, KB) with ({B}, ∅). For each node v ∈ V, we increase B by the new basic events B_v^1, B_v^2, and B_v^3. For each node v ∈ V and for each i ∈ {1,2,3}, we increase KB by (B|B_v^i)[1,1] and (B_v^i|B)[1/3,1/3]. For each node v ∈ V and for each i, j ∈ {1,2,3} with i < j, we increase KB by (B_v^j|B_v^i)[0,0]. For each edge {u,v} ∈ E and for each i ∈ {1,2,3}, we increase KB by (B_v^i|B_u^i)[0,0].
It is easy to see that the probabilistic knowledge base (B, KB) can be constructed in polynomial time in the size of (V, E). Now, we show that (V, E) is 3-colorable iff {x1/1, x2/1} is the tight answer to the probabilistic query ∃(B|B)[x1,x2] to (B, KB), or equivalently, iff KB has a model Pr with Pr(B) > 0:

If (V, E) is 3-colorable, then there exists a mapping c1 from V to {1,2,3} with c1(u) ≠ c1(v) for all edges {u,v} ∈ E. Thus, if π is a cyclic permutation of the members of {1,2,3} and if c2, c3 : V → {1,2,3} are defined by c2(v) = π(c1(v)) and c3(v) = π(c2(v)) for all nodes v ∈ V, then also c2(u) ≠ c2(v) and c3(u) ≠ c3(v) for all edges {u,v} ∈ E. For j ∈ {1,2,3}, let A_j ∈ A_B such that A_j ⇒ B and A_j ⇒ B_v^i iff c_j(v) = i for all nodes v ∈ V and i ∈ {1,2,3}. If Pr : A_B → [0,1] is defined by Pr(A) = 1/3 for all A ∈ {A_1, A_2, A_3} and by Pr(A) = 0 for all A ∈ A_B \ {A_1, A_2, A_3}, then Pr is a model of KB.

Conversely, if there is a model Pr of KB with Pr(B) > 0, then there is an atomic event A ∈ A_B with A ⇒ B and Pr(A) > 0. Thus, if c : V → {1,2,3} is defined by c(v) = i iff A ⇒ B_v^i for all nodes v ∈ V, then c(u) ≠ c(v) for all edges {u,v} ∈ E. Hence, (V, E) is 3-colorable. □

Hence, it is unlikely that there is an efficient algorithm for computing the tight answer to all probabilistic queries over basic events that are directed to any given probabilistic knowledge base over basic events. However, there may still be efficient algorithms for solving more specialized probabilistic deduction problems. The rest of this work deals with probabilistic deduction in conditional constraint trees. The next section provides a motivating example, which gives evidence of the practical importance of this kind of probabilistic deduction problem.
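The construction in the proof is mechanical enough to write down. The sketch below is our own illustration (the representation of a constraint (H|G)[l,u] as a quadruple and the event naming scheme are our choices); it builds (B, KB) from a graph and confirms the polynomial size on a triangle:

```python
def kb_from_graph(nodes, edges):
    # build the knowledge base (B, KB) of the 3-colorability reduction;
    # a quadruple (H, G, l, u) stands for the constraint (H|G)[l,u]
    B = {"B"}
    KB = []
    for v in nodes:
        for i in (1, 2, 3):
            B.add(f"B{i}_{v}")
            KB.append(("B", f"B{i}_{v}", 1.0, 1.0))       # (B | B_v^i)[1,1]
            KB.append((f"B{i}_{v}", "B", 1/3, 1/3))       # (B_v^i | B)[1/3,1/3]
        for i in (1, 2, 3):
            for j in (1, 2, 3):
                if i < j:                                 # (B_v^j | B_v^i)[0,0]
                    KB.append((f"B{j}_{v}", f"B{i}_{v}", 0.0, 0.0))
    for u, v in edges:
        for i in (1, 2, 3):                               # (B_v^i | B_u^i)[0,0]
            KB.append((f"B{i}_{v}", f"B{i}_{u}", 0.0, 0.0))
    return B, KB

# a triangle: 3-colorable, and it needs all three colours
B, KB = kb_from_graph(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
print(len(B), len(KB))   # prints: 10 36
```

Each node contributes 3 basic events and 9 constraints, and each edge 3 constraints, so the knowledge base is linear in the size of (V, E), as the proof requires.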
2.3 Motivating Example

A senior student in mathematics describes her experience about being successful at the university as follows. The success of a student (su) is influenced by how well-informed (wi) and how well-prepared (wp) the student is. Well-informedness can be reached by interviewing professors (pr) or by asking senior students (st). Being well-prepared is influenced by how much time is invested in books (bo), exercises (ex), and hobbies (ho).

It is estimated that the probability of a student being successful given she is well-informed lies between 0.60 and 0.70, that the probability of a student being well-informed given she is successful is greater than 0.85, that the probability of a student being successful given she is well-prepared is greater than 0.95, and that the probability of a student being well-prepared given she is successful is greater than 0.95. This probabilistic knowledge, completed by further probabilistic estimations, is given by the probabilistic knowledge base (B, KB) in Fig. 1, where B is the set of nodes {su, wi, wp, pr, st, bo, ex, ho} and KB is the least set of conditional constraints that contains (Y|X)[u1,u2] for each arrow from X to Y labeled with u1, u2.

[Figure 1: A Conditional Constraint Tree]

We may wonder whether it is useful for being successful at the university to interview the professors, to study on books, to spend the time on one's hobbies, or to do both studying on books and spending the time on one's hobbies.
This can be expressed by the probabilistic queries ∃(su|pr)[x1,x2], ∃(su|bo)[x1,x2], ∃(su|ho)[x1,x2], and ∃(su|bo ho)[x1,x2], which yield the tight answers {x1/0.00, x2/1.00}, {x1/0.90, x2/1.00}, {x1/0.30, x2/0.46}, and {x1/0.71, x2/1.00}, respectively.

We may wonder whether successful students at the university interviewed their professors, whether they studied on books, whether they spent their time with their hobbies, or whether they both studied on books and spent their time with their hobbies. This can be expressed by the probabilistic queries ∃(pr|su)[x1,x2], ∃(bo|su)[x1,x2], ∃(ho|su)[x1,x2], and ∃(bo ho|su)[x1,x2], which yield the tight answers {x1/0.00, x2/0.17}, {x1/0.90, x2/1.00}, {x1/0.30, x2/0.45}, and {x1/0.25, x2/0.45}, respectively.

2.4 Conditional Constraint Trees

We formally define conditional constraint trees and queries to conditional constraint trees. We provide some additional examples, which are subsequently used as running examples.

A (general) conditional constraint tree is a probabilistic knowledge base (B, KB) for which an undirected tree (a singly connected undirected graph) (B, ↔) exists such that KB contains exactly one pair of conditional constraints (B|A)[u1,u2] and (A|B)[v1,v2] with u1, v1 > 0 for each pair of adjacent nodes A and B (note that B = {B} implies KB = ∅). A basic event B ∈ B is called a leaf in (B, KB) iff it has exactly one neighbor in (B, ↔). A conditional constraint tree is exact iff u1 = u2 for all (B|A)[u1,u2] ∈ KB.
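The defining conditions of a conditional constraint tree can be checked mechanically. The sketch below is our own illustration (the dictionary encoding of KB and the helper names are our choices); it validates the tree shape, the one-pair-per-edge condition with positive lower bounds, and exactness, on a fragment of the student example above:

```python
def is_tree(nodes, edges):
    # a singly connected undirected graph: connected with |E| = |V| - 1
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(nodes) and len(edges) == len(nodes) - 1

def is_cc_tree(nodes, kb):
    # kb maps (H, G) to (u1, u2), standing for (H|G)[u1,u2]; check that
    # each adjacent pair carries exactly one constraint pair with u1, v1 > 0
    edges = {frozenset(k) for k in kb}
    if not is_tree(nodes, [tuple(e) for e in edges]):
        return False
    for e in edges:
        a, b = tuple(e)
        if (a, b) not in kb or (b, a) not in kb:
            return False
        if kb[(a, b)][0] <= 0 or kb[(b, a)][0] <= 0:
            return False
    return True

def is_exact(kb):
    return all(u1 == u2 for u1, u2 in kb.values())

# the su-wi and su-wp edges of the motivating example
kb = {("wi", "su"): (0.85, 1.0), ("su", "wi"): (0.6, 0.7),
      ("wp", "su"): (0.95, 1.0), ("su", "wp"): (0.95, 1.0)}
print(is_cc_tree({"su", "wi", "wp"}, kb), is_exact(kb))   # prints: True False
```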
A query to a conditional constraint tree is a probabilistic query ∃(F|E)[x1,x2] with two conjunctive events E and F that are disjoint in their basic events and such that all paths from a basic event in E to a basic event in F have at least one basic event in common. A query ∃(F|E)[x1,x2] to a conditional constraint tree is premise-restricted iff E is a basic event. It is conclusion-restricted iff F is a basic event. It is strongly conclusion-restricted iff F is the only basic event that is contained in all paths from a basic event in E to F. It is complete iff EF contains exactly the leaves of (B, ↔).

Fig. 2 shows two conditional constraint trees, of which the one on the left side is exact. ∃(STU|MNQR)[x1,x2] is a query, while ∃(MS|QU)[x1,x2] is not a query to the conditional constraint trees of Fig. 2. Furthermore, ∃(STU|M)[x1,x2] is a premise-restricted query, ∃(O|QRSTU)[x1,x2] a strongly conclusion-restricted query, and ∃(QRSTU|M)[x1,x2] a premise-restricted complete query to the conditional constraint trees of Fig. 2.

[Figure 2: Two Conditional Constraint Trees over the basic events M, N, O, P, Q, R, S, T, and U]

For conditional constraint trees (B, KB), conjunctive events C, and basic events B, we write C ⇒ B iff there exists a path G1, G2, ..., Gk from a basic event G1 in C to the basic event Gk = B such that (G_{i+1}|G_i)[1,1] ∈ KB for all i ∈ [1:k−1]. We write B ⇒ C iff for all paths G1, G2, ..., Gk from the basic event G1 = B to a basic event Gk in C, it holds that (G_{i+1}|G_i)[1,1] ∈ KB for all i ∈ [1:k−1].
That is, the conditions C ⇒ B and B ⇒ C immediately entail KB |= (B|C)[1,1] and KB |= (C|B)[1,1], respectively.

Note that the restriction u1, v1 > 0 for all (B|A)[u1,u2], (A|B)[v1,v2] ∈ KB is just made for technical convenience. The deduction technique of Section 4 can easily be generalized to conditional constraint trees (B, KB) that satisfy only the restriction u1 > 0 iff v1 > 0 for all (B|A)[u1,u2], (A|B)[v1,v2] ∈ KB (Lukasiewicz, 1996).

The restriction that for each query ∃(F|E)[x1,x2], all paths from a basic event in E to a basic event in F have at least one basic event in common is crucial for the deduction technique of Section 4. It assures that the problem of computing the tight answer to a complete query can be reduced to the problems of computing the tight answer to a premise-restricted complete query and the tight answer to a strongly conclusion-restricted complete query. Note that this restriction is trivially satisfied by all premise- and conclusion-restricted queries (for example, by all the queries in Section 2.3).

Especially tight answers to conclusion-restricted queries seem to be quite important in practice. They may be used to characterize the probability of uncertain basic events given a collection of basic events that are known with certainty.

3. Probabilistic Satisfiability

In this section, we show that conditional constraint trees have the nice property that they are always satisfiable. That is, within conditional constraint trees, the user is prevented from specifying inconsistent probabilistic knowledge.
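As a concrete taste of this property, the sketch below (our own illustration; all helper names are ours) builds, for a tree with a single edge {(C|B)[u,u], (B|C)[v,v]}, the model with Pr(BC) = uv/(u+v) > 0 used in the proof of Theorem 3.2 below, and then rescales it in the style of Lemma 3.1, checking that a model of the tree is obtained in every case:

```python
def pr(model, event):
    return sum(p for w, p in model.items() if event(w))

def holds(model, h, g, lo, hi, eps=1e-9):
    # model |= (H|G)[lo,hi] iff lo*Pr(G) <= Pr(GH) <= hi*Pr(G)
    pg = pr(model, g)
    pgh = pr(model, lambda w: g(w) and h(w))
    return lo * pg - eps <= pgh <= hi * pg + eps

def two_node_model(u, v):
    # a model of {(C|B)[u,u], (B|C)[v,v]} with Pr(BC) = u*v/(u+v) > 0;
    # worlds are pairs (B, C), and 0 < u, v <= 1 is assumed
    s = u + v
    return {(False, False): u * v / s, (True, False): (v - u * v) / s,
            (False, True): (u - u * v) / s, (True, True): u * v / s}

def rescale(model, s, all_negated):
    # the scaled interpretation of Lemma 3.1: shrink every world by s
    # and move the missing mass to the all-negated world
    out = {w: s * p for w, p in model.items()}
    out[all_negated] = s * model[all_negated] - s + 1
    return out

B = lambda w: w[0]
C = lambda w: w[1]
u, v = 0.8, 0.9
kb = [(C, B, u, u), (B, C, v, v)]       # (C|B)[u,u] and (B|C)[v,v]

m = two_node_model(u, v)
assert abs(sum(m.values()) - 1.0) < 1e-9
assert m[(True, True)] > 0
assert all(holds(m, h, g, lo, hi) for h, g, lo, hi in kb)

for s in (0.0, 0.25, 1.0):              # Pr_0 is trivial, Pr_1 is Pr itself
    ms = rescale(m, s, (False, False))
    assert abs(sum(ms.values()) - 1.0) < 1e-9
    assert all(holds(ms, h, g, lo, hi) for h, g, lo, hi in kb)
```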
First, note that conditional constraint trees always have a trivial model in which the probability of the conjunction of all negated basic events is one and in which the probability of all the other atomic events is zero. The next lemma shows that, given a model Pr of a conditional constraint tree and a real number s from [0,1], we can construct a new model Pr_s by setting Pr_s(A) = s · Pr(A) for all atomic events A that are different from the conjunction of all negated basic events. Note that Pr_0 coincides with the trivial model and that Pr_1 is identical to Pr. This lemma is crucial for inductively constructing models of conditional constraint trees.

Lemma 3.1 Let (B, KB) be a conditional constraint tree with B = {B1, B2, ..., Bn}. Let Pr be a model of KB and let s be a real number from [0,1]. The mapping Pr_s : A_B → [0,1] with Pr_s(A) = s · Pr(A) for all A ∈ A_B \ {B̅1 B̅2 ··· B̅n} and Pr_s(B̅1 B̅2 ··· B̅n) = s · Pr(B̅1 B̅2 ··· B̅n) − s + 1 is a model of KB.

Proof. We easily verify that Pr_s is a probabilistic interpretation. It remains to show that Pr_s is also a model of KB. Let (H|G)[u1,u2] ∈ KB. Since Pr is a model of KB, we have Pr |= (H|G)[u1,u2], hence u1 · Pr(G) ≤ Pr(GH) ≤ u2 · Pr(G), and thus also u1 · s · Pr(G) ≤ s · Pr(GH) ≤ u2 · s · Pr(G). Since neither B̅1 B̅2 ··· B̅n ⇒ G nor B̅1 B̅2 ··· B̅n ⇒ GH, we get u1 · Pr_s(G) ≤ Pr_s(GH) ≤ u2 · Pr_s(G) and thus Pr_s |= (H|G)[u1,u2]. □

Finally, the following theorem shows that conditional constraint trees always have a nontrivial model in which all the basic events have a probability greater than zero.

Theorem 3.2 Let (B, KB) be a conditional constraint tree with B = {B1, B2, ..., Bn}. There is a model Pr of KB with Pr(B1 B2 ··· Bn) > 0.
It is sucien t to sho w the claim for exact conditional constrain t trees. The claim is pro v ed b y induction on the n um b er of basic ev en ts. Basis: for ( B ; KB ) = ( f B g ; ; ), a mo del Pr of KB with Pr ( B ) > 0 is giv en b y B ; B 7! 0 ; 1 (note that B ; B 7! 0 ; 1 is an abbreviation for Pr ( B ) = 0 and Pr ( B ) = 1). Induction: let ( B ; KB ) = ( B 1 [ B 2 ; KB 1 [ KB 2 ) with t w o exact conditional constrain t trees ( B 1 ; KB 1 ) = ( f B ; C g ; f ( C j B )[ u; u ] ; ( B j C )[ v ; v ] g ) and ( B 2 ; KB 2 ) = ( f C ; D 1 ; : : : ; D k g ; KB 2 ) suc h that B 1 \ B 2 = f C g . A mo del Pr 1 of KB 1 with Pr 1 ( BC ) > 0 is giv en b y: B C ; B C ; B C ; BC 7! uv u + v ; v  uv u + v ; u  uv u + v ; uv u + v : By the induction h yp othesis, there is a mo del Pr 2 of KB 2 (that is dened on the atomic ev en ts o v er B 2 ) with Pr 2 ( C D 1    D k ) > 0. By Lemma 3.1, w e can assume Pr 2 ( C ) = Pr 1 ( C ). A probabilistic in terpretation Pr on the atomic ev en ts o v er B is no w dened b y: Pr ( A b A c A 2 ) = Pr 1 ( A b A c )  Pr 2 ( A c A 2 ) Pr 2 ( A c ) 207 Lukasiewicz for all atomic ev en ts A b , A c , and A 2 o v er f B g , f C g , and B 2 n f C g , resp ectiv ely . W e easily v erify that Pr ( A b A c ) = Pr 1 ( A b A c ) and Pr ( A c A 2 ) = Pr 2 ( A c A 2 ) for all atomic ev en ts A b , A c , and A 2 o v er f B g , f C g , and B 2 n f C g , resp ectiv ely . Hence, Pr is a mo del of KB . Moreo v er, Pr 1 ( BC ) > 0 and Pr 2 ( C D 1    D k ) > 0 en tails Pr ( B C D 1    D k ) > 0. 2 4. Probabilistic Deduction In this section, w e presen t tec hniques for computing tigh t answ ers to queries directed to exact and general conditional constrain t trees, and w e analyze their computational com- plexit y . More precisely , the problem of computing the tigh t answ er to a query is reduced to the problem of computing the tigh t answ er to a complete query . 
The latter problem is then reduced to the problems of computing the tight answer to a premise-restricted complete query and the tight answer to a strongly conclusion-restricted complete query.

4.1 Premise-Restricted Complete Queries

4.1.1 Exact Conditional Constraint Trees

We now focus on the problem of computing tight answers to premise-restricted complete queries that are directed to exact conditional constraint trees. Let (B, KB) be an exact conditional constraint tree and let ∃(F|E)[x1,x2] be a premise-restricted complete query. To compute the tight answer to ∃(F|E)[x1,x2], we start by defining a directed tree (that is, a directed acyclic graph in which each node has exactly one parent, except for the root, which does not have any): A → B iff A and B are joined by an edge and A is closer to E than B. This directed tree (B, →) is uniquely determined by the conditional constraint tree and the premise-restricted complete query. Fig. 3 shows (B, →) for the premise-restricted complete query ∃(QRSTU|M)[x1,x2] to the exact conditional constraint tree in Fig. 2, left side.

Now, the set of nodes B is partitioned into several strata. The lowest stratum contains only nodes with no children in (B, →), the highest stratum contains the nodes with no parents in (B, →) (that is, exactly the node of the premise E of the query). Fig. 3 also shows the different strata in our example.

At each node of (B, →), we compute certain tightest bounds that are logically entailed by KB. More precisely, the tightest bounds at a node B are computed locally, by exploiting the tightest bounds that have previously been computed at the children of B. Hence, we iteratively compute the tightest bounds at the nodes of each stratum, starting with the nodes of the lowest stratum and terminating with the nodes of the highest stratum.
We distinguish three different ways of computing tightest bounds at a node:

- initialization of a leaf (Leaf),
- chaining of an arrow and a subtree via a common node (Chaining),
- fusion of subtrees via a common node (Fusion).

Let us consider again the premise-restricted complete query ∃(QRSTU|M)[x1,x2] to the exact conditional constraint tree in Fig. 2, left side.

[Figure 3: Directed Tree (B, →), showing the nodes M, N, O, P, Q, R, S, T, U and the strata 0 to 4]

Fig. 4 illustrates the three different ways of computing tightest bounds at a node (the common nodes for Chaining and Fusion are filled black). Table 1 shows the greatest lower and the least upper bounds that are computed at each node B of each stratum. More precisely, these bounds are α1 = inf Pr(BD)/Pr(B), α2 = sup Pr(BD)/Pr(B), β2 = sup Pr(¬B D)/Pr(B), and γ2 = sup Pr(D)/Pr(B) subject to Pr |= KB and Pr(B) > 0. Table 1 also shows the requested tight answer {x1 = 0.02, x2 = 0.17}, which is given by the tightest bounds α1 and α2 that are computed at the premise M.
strata  B  D      α1      α2      β2      γ2
0       S  S      1.0000  1.0000  0.0000  1.0000  (Leaf)
0       T  T      1.0000  1.0000  0.0000  1.0000  (Leaf)
0       U  U      1.0000  1.0000  0.0000  1.0000  (Leaf)
1       P  S      0.8500  0.8500  0.0447  0.8947  (Chaining)
1       P  T      0.8500  0.8500  0.0447  0.8947  (Chaining)
1       P  U      0.8500  0.8500  0.0000  0.8500  (Chaining)
1       P  STU    0.5500  0.8500  0.0000  0.8500  (Fusion)
1       Q  Q      1.0000  1.0000  0.0000  1.0000  (Leaf)
1       R  R      1.0000  1.0000  0.0000  1.0000  (Leaf)
2       O  STU    0.4474  0.7605  0.0447  0.7605  (Chaining)
2       O  Q      0.9500  0.9500  0.0500  1.0000  (Chaining)
2       O  R      0.9500  0.9500  5.3833  6.3333  (Chaining)
2       O  QRSTU  0.3474  0.7605  0.0447  0.7605  (Fusion)
3       N  QRSTU  0.1911  0.4183  0.0246  0.4183  (Chaining)
4       M  QRSTU  0.0169  0.1722  0.0719  0.1722  (Chaining)

Table 1: Locally Computed Tightest Bounds

[Figure 4: Local Computations in (B, →): initialization of a leaf, chaining of an arrow and a subtree, and fusion of subtrees]

We now focus on the technical details. We present the functions Hα1, Hα2, Hβ2, and Hγ2, which compute the described greatest lower and least upper bounds. For this purpose, we need the following definitions. Let Pr(C|B) denote u for all (C|B)[u,u] ∈ KB. A node B is a leaf if it does not have any children. For all leaves B, let B↑ = B. For all the other nodes B, let B↑ be the conjunction of all the children of B. For all leaves C, let L(C) = C. For all the other conjunctive events C, let L(C) be the conjunction of all the leaves that are in C or that are descendants of a node in C. In the sequel, let B be a node and let C = B↑.
The case C = B refers to the initialization of the leaf B, the case C = B1 with a node B1 ≠ B to the chaining of the arrow B → B1 and a subtree via the common node B1, and the case C = B1 B2 … Bk with k > 1 nodes B1, B2, …, Bk to the fusion of k subtrees via the common node B.

We define the function Hα1 for computing greatest lower bounds: let Hα1(B,C) = α1 (note that α1 will coincide with the greatest lower bound of Pr(B L(C))/Pr(B) subject to Pr |= KB and Pr(B) > 0), where α1 in Leaf (C = B), Chaining (C = B1), and Fusion (C = B1 B2 … Bk with k > 1) is given as follows:

Leaf: α1 = 1
Chaining: α1 = max(0, Pr(C|B) · (1 + (Hα1(C,C↑) − 1) / Pr(B|C)))
Fusion: α1 = max(0, 1 − k + Σ_{i=1}^k Hα1(B,Bi))

To express that Hα1 computes greatest lower bounds, we need the following definitions. Let B(B,C) comprise B, all nodes in C, and all descendants of a node in C. Let KB(B,C) be the set of all conditional constraints of KB over B(B,C). Let Mo(B,C) be the set of all models of KB(B,C) that are defined on the atomic events over B(B,C). Now, the function Hα1 is sound and globally complete with respect to B and C iff Hα1(B,C) = α1 is the greatest lower bound of Pr(B L(C))/Pr(B) subject to Pr ∈ Mo(B,C) and Pr(B) > 0. Thus, the next theorem shows soundness and global completeness of Hα1.

Theorem 4.1 a) For all probabilistic interpretations Pr ∈ Mo(B,C), it holds α1 · Pr(B) ≤ Pr(B L(C)).
b) There exists a probabilistic interpretation Pr ∈ Mo(B,C) with Pr(B) > 0 and α1 · Pr(B) = Pr(B L(C)), where Pr(¬B L(C)) = 0 iff L(C) ⇒ B.

Proof. The proof is given in full detail in Appendix B. □

Next, we present the functions Hα2, Hβ2, and Hγ2 for computing least upper bounds.
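Before that, the recursion Hα1 can be made concrete. The following Python sketch is our own illustration (not code from the paper); the chain and its point probabilities are hypothetical:

```python
# Sketch of H_alpha1 for exact conditional constraint trees (our illustration).
# children[B] lists the children of B in (B, ->); for an edge B -> C,
# prob[(B, C)] = Pr(C|B) and prob[(C, B)] = Pr(B|C) are point probabilities.

def h_alpha1(node, children, prob):
    """Greatest lower bound of Pr(B L(C)) / Pr(B) at `node`."""
    kids = children.get(node, [])
    if not kids:                                   # Leaf: alpha1 = 1
        return 1.0
    # Chaining of the arrow node -> c and the subtree below c, per child:
    bounds = [
        max(0.0, prob[(node, c)]
            * (1.0 + (h_alpha1(c, children, prob) - 1.0) / prob[(c, node)]))
        for c in kids
    ]
    if len(kids) == 1:
        return bounds[0]
    return max(0.0, 1.0 - len(kids) + sum(bounds))  # Fusion of k subtrees

# Hypothetical chain M -> N -> O:
children = {"M": ["N"], "N": ["O"]}
prob = {("M", "N"): 0.4, ("N", "M"): 0.8, ("N", "O"): 0.6, ("O", "N"): 0.9}
x1 = h_alpha1("M", children, prob)   # 0.4 * (1 + (0.6 - 1) / 0.8) = 0.2
```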
Note that Hα2, Hβ2, and Hγ2 show the crucial result that for exact conditional constraint trees, there are local probabilistic deduction techniques that are sound and globally complete.

In detail, let Hα2(B,C) = α2, Hβ2(B,C) = β2, and Hγ2(B,C) = γ2 (note that α2, β2, and γ2 will coincide with the least upper bound of Pr(B L(C))/Pr(B), Pr(¬B L(C))/Pr(B), and Pr(L(C))/Pr(B), respectively, subject to Pr |= KB and Pr(B) > 0), where α2, β2, and γ2 in Leaf (C = B), Chaining (C = B1), and Fusion (C = B1 B2 … Bk with k > 1) are given as follows:

Leaf:
α2 = 1
β2 = 0
γ2 = 1

Chaining:
α2 = min(1, Pr(C|B) · Hγ2(C,C↑) / Pr(B|C), 1 − Pr(C|B) · (1 − Hα2(C,C↑) / Pr(B|C)), Pr(C|B) · (1 + Hβ2(C,C↑) / Pr(B|C)))
β2 = min(Pr(C|B) · ((Hβ2(C,C↑) + 1) / Pr(B|C) − 1), Pr(C|B) · Hγ2(C,C↑) / Pr(B|C))
γ2 = Pr(C|B) · Hγ2(C,C↑) / Pr(B|C)

Fusion:
α2 = min_{i∈[1:k]} Hα2(B,Bi)
β2 = min_{i∈[1:k]} Hβ2(B,Bi)
γ2 = min(min_{i∈[1:k]} Hγ2(B,Bi), min_{i,j∈[1:k], i≠j} (Hα2(B,Bi) + Hβ2(B,Bj)))

The functions Hα2, Hβ2, and Hγ2 are sound and globally complete with respect to B and C iff Hα2(B,C) = α2, Hβ2(B,C) = β2, and Hγ2(B,C) = γ2 are the least upper bounds of Pr(B L(C))/Pr(B), Pr(¬B L(C))/Pr(B), and Pr(L(C))/Pr(B), respectively, subject to Pr ∈ Mo(B,C) and Pr(B) > 0. Hence, the following theorem shows soundness and global completeness of Hα2, Hβ2, and Hγ2 (actually, it shows even more to enable a proof by induction on the recursive definition of Hα2, Hβ2, and Hγ2).
Theorem 4.2 a) For all probabilistic interpretations Pr ∈ Mo(B,C), it holds Pr(B L(C)) ≤ α2 · Pr(B), Pr(¬B L(C)) ≤ β2 · Pr(B), and Pr(L(C)) ≤ γ2 · Pr(B).
b) There exists a probabilistic interpretation Pr ∈ Mo(B,C) with Pr(B) > 0, Pr(B L(C)) = α2 · Pr(B), and Pr(L(C)) = γ2 · Pr(B).
c) There exists a probabilistic interpretation Pr ∈ Mo(B,C) with Pr(B) > 0, Pr(¬B L(C)) = β2 · Pr(B), and Pr(L(C)) = γ2 · Pr(B).

Proof. The proof is given in full detail in Appendix B. □

Note that Theorem 4.2 also shows that Hγ2(B,Bi) ≤ Hα2(B,Bi) + Hβ2(B,Bi) for all i ∈ [1:k]. Thus, the expression min_{i,j∈[1:k], i≠j} (Hα2(B,Bi) + Hβ2(B,Bj)) in the definition of γ2 in Fusion can be replaced by α2 + β2 for an increased efficiency in computing γ2 by exploiting the already computed values of α2 and β2.

Briefly, by Theorems 4.1 and 4.2, the tight answer to the premise-restricted complete query ∃(F|E)[x1,x2] is given by {x1 = Hα1(E,E↑), x2 = Hα2(E,E↑)}.

4.1.2 Conditional Constraint Trees

We now focus on computing the tight answer to premise-restricted complete queries to general conditional constraint trees. In the sequel, let (B, KB) be a conditional constraint tree and let ∃(F|E)[x1,x2] be a premise-restricted complete query. We may think that the local deduction technique for exact conditional constraint trees of Section 4.1.1 can easily be generalized to conditional constraint trees. In fact, this is true as far as the computation of greatest lower bounds is concerned. However, the computation of least upper bounds cannot be generalized that easily from exact conditional constraint trees to conditional constraint trees.
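For exact trees, the full local propagation of least upper bounds can be sketched as follows. This Python snippet is our own illustration, not code from the paper; the tree and point probabilities are hypothetical, and as in Table 1 the value of Hγ2 may exceed 1:

```python
# Sketch of H_alpha2, H_beta2, H_gamma2 for exact conditional constraint trees
# (our illustration).  prob[(B, C)] = Pr(C|B) for an edge in (B, ->).

def h_upper(node, children, prob):
    """Return (alpha2, beta2, gamma2) at `node` via Leaf / Chaining / Fusion."""
    kids = children.get(node, [])
    if not kids:                                   # Leaf
        return 1.0, 0.0, 1.0
    per_child = []
    for c in kids:                                 # Chaining per child
        a, b, g = h_upper(c, children, prob)
        u, v = prob[(node, c)], prob[(c, node)]    # Pr(C|B), Pr(B|C)
        alpha = min(1.0, u * g / v, 1.0 - u * (1.0 - a / v), u * (1.0 + b / v))
        beta = min(u * ((b + 1.0) / v - 1.0), u * g / v)
        per_child.append((alpha, beta, u * g / v))
    if len(kids) == 1:
        return per_child[0]
    alphas = [t[0] for t in per_child]
    betas = [t[1] for t in per_child]
    gammas = [t[2] for t in per_child]
    # Fusion; the cross term alpha_i + beta_j (i != j) is replaced here by
    # min(alphas) + min(betas), as licensed by the note after Theorem 4.2.
    return min(alphas), min(betas), min(min(gammas), min(alphas) + min(betas))

# Hypothetical edge B -> C with C a leaf, Pr(C|B) = 0.5, Pr(B|C) = 0.8:
# alpha2 = 0.5, beta2 = 0.5 * (1 - 0.8) / 0.8 = 0.125, gamma2 = 0.5 / 0.8 = 0.625.
a2, b2, g2 = h_upper("B", {"B": ["C"]}, {("B", "C"): 0.5, ("C", "B"): 0.8})
```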
More precisely, generalizing the computation of least upper bounds results in solving nonlinear programs. These nonlinear programs and our way to solve them are illustrated by the following chaining example. Let the conditional constraint tree (B, KB) be given by B = {M, N, O, P} and KB = {(N|M)[u1,u2], (M|N)[v1,v2], (O|N)[x1,x2], (N|O)[y1,y2], (P|O)[r1,r2], (O|P)[s1,s2]}. Let us consider the premise-restricted complete query ∃(P|M)[z1,z2]. By Theorem 4.2 and some straightforward arithmetic transformations, the requested least upper bound is the maximum of z subject to u ∈ [u1,u2], v ∈ [v1,v2], x ∈ [x1,x2], y ∈ [y1,y2], r ∈ [r1,r2], s ∈ [s1,s2], and the nonlinear inequalities in (1) to (5):

z ≤ 1 (1)
z ≤ 1 − u + u/v − ux/v + uxr/(vy) (2)
z ≤ 1 − u + ux/v − uxr/(vy) + uxr/(vys) (3)
z ≤ u − ux/v + ux/(vy) − uxr/(vy) + uxr/(vys) (4)
z ≤ uxr/(vys) (5)

In this system of nonlinear inequalities, all upper bounds of z are monotonically decreasing in v, y, and s. Hence, we can equivalently maximize z subject to u ∈ [u1,u2], x ∈ [x1,x2], r ∈ [r1,r2], and the nonlinear inequalities in (6) to (10):

z ≤ 1 (6)
z ≤ 1 − u + u/v1 − ux/v1 + uxr/(v1y1) (7)
z ≤ 1 − u + ux/v1 − uxr/(v1y1) + uxr/(v1y1s1) (8)
z ≤ u − ux/v1 + ux/(v1y1) − uxr/(v1y1) + uxr/(v1y1s1) (9)
z ≤ uxr/(v1y1s1) (10)

For example, the requested least upper bound for u1 = u2 = u and x1 = x2 = x is shown in Fig. 5 for u, x ∈ [0,1], r1 = r2 = 0.15, v1 = 0.8, y1 = 0.8, and s1 ∈ {0.05, 0.1}. The requested least upper bound for u1 < u2 or x1 < x2 is the maximum value over [u1,u2] × [x1,x2].
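Since for fixed u, x, r the bound on z is just the minimum of the right-hand sides of (6) to (10), the remaining maximization can be approximated by plain grid search. The following Python sketch is our own illustration (an exact solution uses the linear program derived next); the parameter values are the ones used for Fig. 5, with s1 = 0.05:

```python
# Grid-search sketch of the nonlinear program (6)-(10) in the chaining example
# (our illustration, not code from the paper).

def z_upper(u, x, r, v1, y1, s1):
    """Minimum of the upper bounds (6)-(10) on z for fixed u, x, r."""
    return min(
        1.0,
        1.0 - u + u / v1 - u * x / v1 + u * x * r / (v1 * y1),
        1.0 - u + u * x / v1 - u * x * r / (v1 * y1) + u * x * r / (v1 * y1 * s1),
        u - u * x / v1 + u * x / (v1 * y1) - u * x * r / (v1 * y1)
            + u * x * r / (v1 * y1 * s1),
        u * x * r / (v1 * y1 * s1),
    )

v1, y1, s1, r = 0.8, 0.8, 0.05, 0.15
grid = [i / 100.0 for i in range(101)]
best = max(z_upper(u, x, r, v1, y1, s1) for u in grid for x in grid)
```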
[Figure 5: Least Upper Bound z2 in the Chaining Example, plotted over u, x ∈ [0,1] for s1 = 0.05 and s1 = 0.1]

We now transform this nonlinear program into an equivalent linear program (by replacing 1, u, ux, and uxr by the new variables x_M, x_N, x_O, and x_P, respectively). More precisely, the maximum of z subject to u ∈ [u1,u2], x ∈ [x1,x2], r ∈ [r1,r2], and the nonlinear inequalities in (6) to (10) coincides with the maximum of z subject to the following system of linear inequalities over z and x_B ≥ 0 (B ∈ B):

z ≤ x_M
z ≤ x_M + ((1−v1)/v1) · x_N − (y1/(v1y1)) · x_O + (s1/(v1y1s1)) · x_P
z ≤ x_M − (v1/v1) · x_N + (y1/(v1y1)) · x_O + ((1−s1)/(v1y1s1)) · x_P
z ≤ (v1/v1) · x_N + ((1−y1)/(v1y1)) · x_O + ((1−s1)/(v1y1s1)) · x_P
z ≤ (1/(v1y1s1)) · x_P
1 ≤ x_M ≤ 1
u1 · x_M ≤ x_N ≤ u2 · x_M
x1 · x_N ≤ x_O ≤ x2 · x_N
r1 · x_O ≤ x_P ≤ r2 · x_O

More generally, tight upper answers to premise-restricted complete queries to conditional constraint trees can be computed by solving similar nonlinear programs, which can similarly be transformed into linear programs. For example, let us consider the premise-restricted complete query ∃(QRSTU|M)[x1,x2] to the conditional constraint tree in Fig. 2, right side. The requested least upper bound is the maximum of x subject to the system of linear inequalities in Fig. 6 (we actually generated 72 linear inequalities, of which 31 were trivially subsumed by others). Note that the nine variables x_M to x_U correspond to the nine nodes M to U.
[Figure 6: Generated Linear Inequalities in the Chaining Example — the 41 non-subsumed upper bounds on x over the node variables x_M, …, x_U, such as x ≤ x_M and x ≤ (25/18) · x_Q, together with the box-ratio constraints 1 ≤ x_M ≤ 1, (3/10) · x_M ≤ x_N ≤ (2/5) · x_M, …, (4/5) · x_P ≤ x_U ≤ (9/10) · x_P]

Thus, in this example, the tight upper answer is computed by solving a linear program that has 10 variables and 72 linear inequalities. Note that computing the tight upper answer by the classical linear programming approach would result in solving a linear program that has 2^9 = 512 variables and 4·9 − 2 = 34 linear inequalities (see Section 4.6).

Let us now focus on the technical details. We subsequently generalize the function Hα1 of Section 4.1.1 in a straightforward way to compute greatest lower bounds in conditional constraint trees. Moreover, we present a linear program for computing the requested least upper bound in conditional constraint trees. Let Pr1(C|B) denote u1 for all (C|B)[u1,u2] ∈ KB. In the sequel, let B be a node and let C = B↑. Again, the cases C = B, C = B1 with a node B1 ≠ B, and C = B1 B2 … Bk with k > 1 nodes B1, B2, …, Bk refer to Leaf, Chaining, and Fusion, respectively.
W e dene the generalized function H  1 for computing greatest lo w er b ounds in con- ditional constrain t trees: let H  1 ( B ; C ) =  1 (note that  1 will coincide with the greatest lo w er b ound of Pr ( BL ( C ) ) = Pr ( B ) sub ject to Pr j = KB and Pr ( B ) > 0), where  1 in Leaf ( C = B ), Chaining ( C = B 1 ), and Fusion ( C = B 1 B 2 : : : B k with k > 1) is giv en b y: Leaf :  1 = 1 Chaining :  1 = max(0 ; Pr 1 ( C j B )  (1 + H  1 ( C ; C " )  1 Pr 1 ( B j C ) )) Fusion :  1 = max(0 ; 1  k + k P i =1 H  1 ( B ; B i )) H  1 is sound and globally complete with resp ect to B and C i H  1 ( B ; C ) =  1 is the greatest lo w er b ound of Pr ( BL ( C ) ) = Pr ( B ) sub ject to Pr 2 Mo ( B ; C ) and Pr ( B ) > 0. Th us, the next theorem sho ws soundness and global completeness of H  1 . Theorem 4.3 a) F or al l pr ob abilistic interpr etations Pr 2 Mo ( B ; C ) , it holds  1  Pr ( B )  Pr ( B L ( C )) . b) Ther e exists a pr ob abilistic interpr etation Pr 2 Mo ( B ; C ) with Pr ( B ) > 0 ,  1  Pr ( B ) = Pr ( BL ( C ) ) , and Pr ( B L ( C )) = 0 i L ( C ) ) B . Pro of. The claims follo w from Theorem 4.1. 2 Next, w e fo cus on the requested least upp er b ound, whic h is computed b y solving a linear program as describ ed in the t w o examples. W e start b y dening the functions I  , I  , and I  o v er the v ariables x B ( B 2 B ). 
Let Iα(B,C) = α2, Iβ(B,C) = β2, and Iγ(B,C) = γ2, where α2, β2, and γ2 in Leaf (C = B), Chaining (C = B1), and Fusion (C = B1 B2 … Bk with k > 1) are given as follows:

Leaf:
α2 = x_B
β2 = 0
γ2 = x_B

Chaining:
α2 = min(x_B, Iγ(C,C↑) / Pr1(B|C), x_C + Iβ(C,C↑) / Pr1(B|C), x_B − x_C + Iα(C,C↑) / Pr1(B|C))
β2 = min(((1 − Pr1(B|C)) / Pr1(B|C)) · x_C + Iβ(C,C↑) / Pr1(B|C), Iγ(C,C↑) / Pr1(B|C))
γ2 = Iγ(C,C↑) / Pr1(B|C)

Fusion:
α2 = min_{i∈[1:k]} Iα(B,Bi)
β2 = min_{i∈[1:k]} Iβ(B,Bi)
γ2 = min(min_{i∈[1:k]} Iγ(B,Bi), min_{i,j∈[1:k], i≠j} (Iα(B,Bi) + Iβ(B,Bj)))

The system of linear inequalities J(B,C) is defined as the least set of linear inequalities over x_G ≥ 0 (G ∈ B(B,C)) that contains 1 ≤ x_B ≤ 1 and u1 · x_G ≤ x_H ≤ u2 · x_G for all (H|G)[u1,u2] ∈ KB(B,C) with G → H (that is, G is the parent of H).

The intuition behind these definitions can now be described as follows. Each x_G (G ∈ B(B,C)) that satisfies J(B,C) corresponds to the exact conditional constraint tree (B(B,C), KB′(B,C)), where KB′(B,C) contains the pair (H|G)[x_H/x_G, x_H/x_G] and (G|H)[v1,v1] for each pair (H|G)[u1,u2], (G|H)[v1,v2] ∈ KB(B,C) with G → H. We will show that the least upper bound of Pr(B L(C))/Pr(B), Pr(¬B L(C))/Pr(B), and Pr(L(C))/Pr(B) subject to Pr |= KB′(B,C) and Pr(B) > 0 is given by Iα(B,C), Iβ(B,C), and Iγ(B,C), respectively.
It will then follow that the least upper bound of Pr(B L(C))/Pr(B), Pr(¬B L(C))/Pr(B), and Pr(L(C))/Pr(B) subject to Pr |= KB(B,C) and Pr(B) > 0 is given by the maximum of Iα(B,C), Iβ(B,C), and Iγ(B,C), respectively, subject to all x_G (G ∈ B(B,C)) satisfying J(B,C). That is, we implicitly performed the variable transformation described in the two examples. This transformation is indeed correct for conditional constraint trees:

Lemma 4.4 a) If x_G (G ∈ B(B,C)) satisfies J(B,C), then for all conditional constraints (H|G)[u1,u2] ∈ KB(B,C) such that G → H, there exists u_H ∈ [u1,u2] with x_H = u_H · x_G.
b) Let u_H ∈ [u1,u2] for all (H|G)[u1,u2] ∈ KB(B,C) such that G → H. There exists x_G (G ∈ B(B,C)) that satisfies J(B,C) and x_H = u_H · x_G for all nodes H with parent G.

Proof. a) For all nodes H with parent G, let u_H be defined by u_H = x_H / x_G. b) Let x_B = 1, and for all nodes H with parent G, let x_H be defined by x_H = u_H · x_G. □

We are now ready to formulate an optimization problem for computing the requested least upper bound.

Theorem 4.5 Let X2 be the maximum of x subject to x ≤ Iα(E,E↑) and J(E,E↑).
a) Pr(E L(E↑)) ≤ X2 · Pr(E) for all Pr ∈ Mo(E,E↑).
b) There exists Pr ∈ Mo(E,E↑) with Pr(E) > 0 and Pr(E L(E↑)) = X2 · Pr(E).

Proof. Let Pr(B|C) = v1 for all (B|C)[v1,v2] ∈ KB such that B → C. By Theorem 4.2, the requested least upper bound is the maximum of x subject to x ≤ Hα2(E,E↑) and Pr(C|B) = u_C ∈ [u1,u2] for all (C|B)[u1,u2] ∈ KB such that B → C. By Lemma 4.4, we can equivalently maximize x subject to x ≤ Iα(E,E↑) and J(E,E↑).
□

We now wonder how to solve the generated optimization problem, since Iα(E,E↑) may still contain min-operations that cannot be tackled by linear programming. Moreover, given a method for solving this optimization problem, we are also interested in a rough idea of the overall time complexity of computing the requested least upper bound this way. Finally, we are interested in possible improvements to increase efficiency. These topics are discussed in the rest of this section.

If Iα(E,E↑) does not contain any min-operations at all, then the generated optimization problem is already a linear program. Otherwise, it can easily be transformed into a linear program. In a first transformation step, all inner min-operations are eliminated. This can easily be done due to the well-structuredness of Iα(E,E↑). In a second step, the only remaining outer min-operation is eliminated by introducing exactly one linear inequality for each contained operand. In these linear inequalities, the operands of the outer min-operation are upper bounds of x.

To get a rough idea of the time complexity of computing the requested least upper bound this way, we must analyze the size of the generated linear programs. It is given by the number of variables, the number of linear inequalities in J(E,E↑), and the number of linear inequalities extracted from x ≤ Iα(E,E↑). The latter is quite worrying, since Iγ(B,C) in Fusion seems to produce many min-operands. Moreover, Iγ(B,C) in Fusion contains Iα(B,Bi), and Iα(B,C) in Chaining contains Iγ(C,C↑). So, due to this crossed dependency, the overall number of generated linear inequalities is likely to 'explode' for trees that branch very often. To avoid these problems, we introduce the auxiliary functions Jα, Jβ, and Jγ over the variables x_B (B ∈ B).
Let Jα(B,C) = α′2, Jβ(B,C) = β′2, and Jγ(B,C) = γ′2, where α′2, β′2, and γ′2 in Leaf (C = B), Chaining (C = B1), and Fusion (C = B1 B2 … Bk with k > 1) are given as follows:

Leaf:
α′2 = x_B
β′2 = 0
γ′2 = x_B

Chaining:
α′2 = min(x_B, x_C + Jβ(C,C↑) / Pr1(B|C), x_B − x_C + Jα(C,C↑) / Pr1(B|C))
β′2 = ((1 − Pr1(B|C)) / Pr1(B|C)) · x_C + Jβ(C,C↑) / Pr1(B|C)
γ′2 = Jγ(C,C↑) / Pr1(B|C)

Fusion:
α′2 = min_{i∈[1:k]} Jα(B,Bi)
β′2 = min_{i∈[1:k]} Jβ(B,Bi)
γ′2 = min(min_{i∈[1:k]} Jγ(B,Bi), min_{i,j∈[1:k], i≠j} (Jα(B,Bi) + Jβ(B,Bj)))

Note that α′2 in Chaining can be separated into the cases C↑ = C and C↑ ≠ C. Since simply α′2 = x_C for C↑ = C, we reduce the number of generated linear inequalities this way. The next lemma shows that the functions Iα, Iβ, and Iγ can be expressed in terms of the auxiliary functions Jα, Jβ, and Jγ.

Lemma 4.6 For all x_B (B ∈ B) that satisfy J(E,E↑): α2 = min(α′2, γ′2), β2 = min(β′2, γ′2), and γ2 = γ′2.

Proof sketch. The claim can be proved by induction on the recursive definition of the functions Iα, Iβ, and Iγ. □

Briefly, by Theorem 4.3, Theorem 4.5, and Lemma 4.6, the tight answer to the premise-restricted complete query ∃(F|E)[x1,x2] is given by {x1 = Hα1(E,E↑), x2 = X2}, where X2 is the maximum of x subject to x ≤ Jα(E,E↑), x ≤ Jγ(E,E↑), and J(E,E↑). In our example, we get {x1 = 0.00, x2 = 0.27} as the tight answer to the premise-restricted complete query ∃(QRSTU|M)[x1,x2] to the conditional constraint tree in Fig. 2, right side.
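The inequality system J(E,E↑) underlying X2 is straightforward to generate from the tree. The following Python sketch is our own illustration (not code from the paper); the chain, the intervals, and the helper names are hypothetical, and the feasibility check mirrors the correspondence of Lemma 4.4:

```python
# Sketch of generating the box-ratio inequality system J(E, E^) for a tree
# (our illustration).  For each edge G -> H with (H|G)[u1, u2] in KB, the
# system contains u1 * x_G <= x_H <= u2 * x_G, plus 1 <= x_root <= 1.

def build_J(root, children, intervals):
    """Return J as a list of (u1, G, H, u2) box-ratio constraints."""
    ineqs, stack = [], [root]
    while stack:
        g = stack.pop()
        for h in children.get(g, []):
            u1, u2 = intervals[(g, h)]
            ineqs.append((u1, g, h, u2))   # u1 * x_g <= x_h <= u2 * x_g
            stack.append(h)
    return ineqs

def satisfies(xs, root, ineqs, eps=1e-12):
    """Check x_root = 1 and all box-ratio constraints of J."""
    if abs(xs[root] - 1.0) > eps:
        return False
    return all(u1 * xs[g] - eps <= xs[h] <= u2 * xs[g] + eps
               for u1, g, h, u2 in ineqs)

# Chain M -> N -> O -> P as in the chaining example, hypothetical intervals:
children = {"M": ["N"], "N": ["O"], "O": ["P"]}
intervals = {("M", "N"): (0.3, 0.4), ("N", "O"): (0.5, 0.6), ("O", "P"): (0.15, 0.15)}
J = build_J("M", children, intervals)
# By Lemma 4.4, each feasible point corresponds to a choice u_H per edge:
xs = {"M": 1.0, "N": 0.35, "O": 0.35 * 0.55, "P": 0.35 * 0.55 * 0.15}
```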
The time complexity of computing the requested greatest lower bound and especially the requested least upper bound this way is analyzed in Section 4.5.

4.2 Strongly Conclusion-Restricted Complete Queries

We now focus on computing the tight answer to strongly conclusion-restricted complete queries to general conditional constraint trees. In the sequel, let (B, KB) be a conditional constraint tree and let ∃(F|E)[x1,x2] be a strongly conclusion-restricted complete query.

The tight upper answer to ∃(F|E)[x1,x2] is always given by {x2 = 1}. To compute the tight lower answer to ∃(F|E)[x1,x2], we first compute the tight lower answer {y1 = u1} to the premise-restricted complete query ∃(E|F)[y1,y2]. We then distinguish the following cases: If u1 > 0, then the tight lower answer to ∃(F|E)[x1,x2] is computed locally by a function Hδ1 (like the tight lower answer to premise-restricted complete queries in Section 4.1.2). If u1 = 0 and E ⇒ F, then the tight lower answer to ∃(F|E)[x1,x2] is given by {x1 = 1}. Otherwise, the tight lower answer to ∃(F|E)[x1,x2] is given by {x1 = 0}.

We now focus on the technical details. Let (B, →) be the directed graph that belongs to the premise-restricted complete query ∃(E|F)[y1,y2] (see Section 4.1.1). Let Pr1(B|C) denote v1 for all (B|C)[v1,v2] ∈ KB. In the sequel, let B be a node and let C = B↑. Again, the cases C = B, C = B1 with a node B1 ≠ B, and C = B1 B2 … Bk with k > 1 nodes B1, B2, …, Bk refer to Leaf, Chaining, and Fusion, respectively. We define the function Hδ1 for computing greatest lower bounds in the case Hα1(B,C) > 0 as follows.
Let Hδ1(B,C) = δ1 (note that δ1 will coincide with the greatest lower bound of Pr(B L(C))/Pr(L(C)) subject to Pr |= KB and Pr(L(C)) > 0), where δ1 in Leaf (C = B), Chaining (C = B1), and Fusion (C = B1 B2 … Bk with k > 1) is given as follows (note that Hα1(C,C↑) and Hα1(B,Bi) are defined like in Section 4.1.2):

Leaf: δ1 = 1
Chaining: δ1 = Hδ1(C,C↑) · (1 + (Pr1(B|C) − 1) / Hα1(C,C↑))
Fusion: δ1 = (1 + (min_{i∈[1:k]} Hα1(B,Bi) · (1 / Hδ1(B,Bi) − 1)) / (1 − k + Σ_{i=1}^k Hα1(B,Bi)))^{−1}

By induction on the definition of Hδ1, it is easy to see that Hα1(B,C) > 0 entails that δ1 is defined and that δ1 > 0 (note that Hα1(B,C) = α1 in Leaf, Chaining, and Fusion is defined like in Section 4.1.2). In this case, Hδ1 is sound and globally complete with respect to B and C iff Hδ1(B,C) = δ1 is the greatest lower bound of Pr(B L(C))/Pr(L(C)) subject to Pr ∈ Mo(B,C) and Pr(L(C)) > 0. Thus, the next theorem shows soundness and global completeness of Hδ1. It also shows that, for C = B1 B2 … Bk with k > 1, the least upper bound of Pr(B L(C))/Pr(L(C)) subject to Pr ∈ Mo(B,C) and Pr(L(C)) > 0 is given by 1.

Theorem 4.7 a) If δ1 > 0, then for all Pr ∈ Mo(B,C), it holds δ1 · Pr(L(C)) ≤ Pr(B L(C)).
b) If δ1 > 0, then there is a probabilistic interpretation Pr ∈ Mo(B,C) with Pr(B) > 0, Pr(L(C)) > 0, δ1 · Pr(L(C)) = Pr(B L(C)), and α1 · Pr(B) = Pr(B L(C)).
c) If α1 > 0 and C = B1 B2 … Bk with k > 1, then there is some Pr ∈ Mo(B,C) with Pr(B) > 0, Pr(L(C)) > 0, 1 · Pr(L(C)) = Pr(B L(C)), and α1 · Pr(B) = Pr(B L(C)).
d) If α1 = 0 and C = B1 B2 … Bk with k > 1, then for each ε > 0 there is some Pr ∈ Mo(B,C) with Pr(B) > 0, Pr(L(C)) > 0, 1 · Pr(L(C)) = Pr(B L(C)), and ε · Pr(B) ≥ Pr(B L(C)).

Proof. The proof is given in full detail in Appendix C. □

We are now ready to give the following characterization of tight answers to strongly conclusion-restricted complete queries to conditional constraint trees.

Theorem 4.8 Let (B, KB) be a conditional constraint tree and let ∃(F|E)[x1,x2] be a strongly conclusion-restricted complete query. Let the tight lower answer to the premise-restricted complete query ∃(E|F)[y1,y2] be given by {y1 = u1}.
(1) If u1 > 0, then the tight answer to ∃(F|E)[x1,x2] is given by {x1 = Hδ1(F,F↑), x2 = 1}.
(2) If u1 = 0 and E ⇒ F, then the tight answer to ∃(F|E)[x1,x2] is given by {x1 = 1, x2 = 1}.
(3) Otherwise, the tight answer to ∃(F|E)[x1,x2] is given by {x1 = 0, x2 = 1}.

Proof. The proof is given in full detail in Appendix C. □

4.3 Complete Queries

We now show that the problem of computing tight answers to complete queries can be reduced to the problems of computing tight answers to premise-restricted complete queries and of computing tight answers to strongly conclusion-restricted complete queries. In detail, a complete query is premise-restricted, it is strongly conclusion-restricted, or it can be reduced to premise-restricted complete queries and to strongly conclusion-restricted complete queries. For example, given the complete query ∃(STU|MQR)[x1,x2] to the conditional constraint tree in Fig.
2, right side, we first compute the tight answer {y1 = u1, y2 = u2} to the premise-restricted complete query ∃(MQR|O)[y1,y2] (directed to the corresponding subtree) and the tight answer {z1 = v1, z2 = v2} to the strongly conclusion-restricted complete query ∃(O|MQR)[z1,z2] (directed to the corresponding subtree). We then generate a new conditional constraint tree by replacing the subtree over the nodes M, N, O, Q, and R by the pair of conditional constraints (B|O)[u1,u2] and (O|B)[v1,v2] over the nodes B and O (note that B represents MQR). Finally, we compute the tight answer to the premise-restricted complete query ∃(STU|B)[x1,x2] to the new conditional constraint tree. Note that this reduction can always be done, since for each query ∃(F|E)[x1,x2], all paths from a basic event in E to a basic event in F have at least one basic event in common.

Theorem 4.9 Let (B,KB) be a conditional constraint tree and let ∃(F|E)[x1,x2] be a complete query that is not premise-restricted and not strongly conclusion-restricted.

a) There exist a basic event G ∈ B and two conditional constraint trees (B1,KB1) and (B2,KB2) such that B1 ∩ B2 = {G}, B1 ∪ B2 = B, and ∃(G|E)[z1,z2] is a strongly conclusion-restricted complete query to (B1,KB1).

b) Let the tight answer to the premise-restricted complete query ∃(E|G)[y1,y2] to (B1,KB1) be given by {y1 = u1, y2 = u2} and let the tight answer to the strongly conclusion-restricted complete query ∃(G|E)[z1,z2] to (B1,KB1) be given by {z1 = v1, z2 = v2}.
(1) If u1 > 0, then also v1 > 0, and the tight answer to the complete query ∃(F|E)[x1,x2] to (B,KB) coincides with the tight answer to the premise-restricted complete query ∃(F|B)[x1,x2] to (B2 ∪ {B}, KB2 ∪ {(B|G)[u1,u2], (G|B)[v1,v2]}), where B is a new basic event with B ∉ B2. In particular, for exact conditional constraint trees (B,KB), the tight answer to the complete query ∃(F|E)[x1,x2] is given by:

{x1 = max(0, v1 − v1/u1 + v1 s1/u1), x2 = min(1, 1 − v1 + v1 s2/u1, t2/(t2 − s2 + u1))},

where s1 = Hβ1(G,G↑), s2 = Hβ2(G,G↑), and t2 = Ĥβ2(G,G↑) (note that Hβ1, Hβ2, and Ĥβ2 are defined like in Section 4.1.1).

(2) If u1 = 0, v1 = 1, and G ⇒ F, then the tight answer to the complete query ∃(F|E)[x1,x2] to (B,KB) is given by {x1 = 1, x2 = 1}.

(3) Otherwise, the tight answer to the complete query ∃(F|E)[x1,x2] to (B,KB) is given by {x1 = 0, x2 = 1}.

Proof. The proof is given in full detail in Appendix D. □

4.4 Queries

The problem of computing tight answers to queries can be reduced to the more specialized problem of calculating tight answers to complete queries. More precisely, given a query ∃(F|E)[x1,x2] to a conditional constraint tree (B,KB), a complete query ∃(F′|E′)[x1,x2] to a conditional constraint tree (B′,KB′) is generated by:

1. While (B,KB) contains a leaf B that is not contained in EF: remove B from B and remove the corresponding pair (C|B)[u1,u2], (B|C)[v1,v2] ∈ KB from KB.

2. While EF contains a basic event B that is not a leaf in (B,KB): increase B by a new basic event B′, increase KB by the pair (B′|B)[1,1] and (B|B′)[1,1], and replace each occurrence of B in ∃(F|E)[x1,x2] by the new basic event B′.
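The two-step reduction above can be sketched in code. The following is an illustrative sketch only; the tree encoding (an adjacency map keyed by unordered node pairs) and all identifiers are assumptions of this sketch, not notation from the paper:

```python
def reduce_query(nodes, edges, E, F):
    """Reduce a query over a conditional constraint tree to a complete
    query (Section 4.4).  nodes: set of basic events; edges: dict mapping
    frozenset({B, C}) to the pair of interval constraints of that edge;
    E, F: the sets of basic events of the query premise and conclusion."""
    nodes, edges = set(nodes), dict(edges)
    E, F = set(E), set(F)

    def neighbors(B):
        return [next(iter(e - {B})) for e in edges if B in e]

    # Step 1: repeatedly remove leaves that occur neither in E nor in F,
    # together with their pair of conditional constraints.
    changed = True
    while changed:
        changed = False
        for B in list(nodes):
            nbs = neighbors(B)
            if B not in E | F and len(nbs) == 1:
                edges.pop(frozenset({B, nbs[0]}))
                nodes.remove(B)
                changed = True

    # Step 2: every query event that is still an internal node is split
    # off into a fresh synonym leaf B' via the constraints (B'|B)[1,1]
    # and (B|B')[1,1]; the query is rewritten to use B'.
    for B in sorted(E | F):
        if len(neighbors(B)) > 1:
            B_new = B + "'"
            nodes.add(B_new)
            edges[frozenset({B, B_new})] = ((1.0, 1.0), (1.0, 1.0))
            if B in E:
                E.remove(B)
                E.add(B_new)
            if B in F:
                F.remove(B)
                F.add(B_new)
    return nodes, edges, E, F
```

Step 1 never touches a node of the query, and step 2 only adds [1,1]-synonyms, which is why the tight answer is preserved (Theorem 4.10).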
It remains to show that the generated probabilistic deduction problem has the same solution as the original probabilistic deduction problem:

Theorem 4.10 The tight answer to the query ∃(F|E)[x1,x2] to (B,KB) coincides with the tight answer to the complete query ∃(F′|E′)[x1,x2] to (B′,KB′).

Proof. Let (B″,KB″) be the conditional constraint tree that is generated in step 1 and let (F|E)[u1,u2] be a tight logical consequence of KB″. We now show that (F|E)[u1,u2] is also a tight logical consequence of KB. First, (F|E)[u1,u2] is a logical consequence of KB, since KB″ is a subset of KB. Moreover, each model Pr″ of KB″ (that is defined on all atomic events over B″) can be extended to a model Pr of KB (that is defined on all atomic events over B) with Pr(A) = s · Pr″(A) for all atomic events A over B″ that are different from the conjunction of all negated basic events in B″, where s is a real number from (0,1]. This model can be constructed inductively like in the proof of Theorem 3.2. Thus, for u ∈ [u1,u2], Pr″(E) > 0 and u · Pr″(E) = Pr″(EF) entails Pr(E) > 0 and u · Pr(E) = Pr(EF). Finally, (F|E)[u1,u2] is a tight logical consequence of KB″ iff (F′|E′)[u1,u2] is a tight logical consequence of KB′, since we just introduce synonyms for basic events in step 2. □

4.5 Computational Complexity

4.5.1 Exact Conditional Constraint Trees

We now show that for exact conditional constraint trees, our technique to compute the tight answer to queries runs in linear time in the number of nodes of the tree. In the sequel, let (B,KB) be an exact conditional constraint tree and let n denote its number of nodes.
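For exact conditional constraint trees, the numerical core of this technique is the bound combination of Theorem 4.9 b) (1). A minimal sketch; the argument names u1, v1, s1, s2, t2 mirror the theorem, while the function itself is illustrative and not from the paper:

```python
def tight_answer_exact(u1, v1, s1, s2, t2):
    """Combine the subquery bounds into the tight answer [x1, x2] of the
    complete query, for the case u1 > 0 of Theorem 4.9 b) (1):
        x1 = max(0, v1 - v1/u1 + v1*s1/u1)
        x2 = min(1, 1 - v1 + v1*s2/u1, t2/(t2 - s2 + u1))"""
    x1 = max(0.0, v1 - v1 / u1 + v1 * s1 / u1)
    x2 = min(1.0, 1.0 - v1 + v1 * s2 / u1, t2 / (t2 - s2 + u1))
    return x1, x2
```

Each such combination costs a constant number of arithmetic operations, which is what makes the overall deduction linear in the number of nodes.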
Lemma 4.11 The tight answer to a premise-restricted or strongly conclusion-restricted complete query can be computed in linear time in n.

Proof. For exact conditional constraint trees, our approach to compute the tight upper answer to premise-restricted complete queries by Hβ2, H̄β2, and Ĥβ2 runs in time O(n): The directed tree can be computed in time O(n). An initialization with a constant number of assignments is performed exactly once for each leaf of the directed tree, and a chaining with a constant number of arithmetic operations is performed exactly once for each arrow of the directed tree. Hence, initializing all leaves and performing all chainings runs in time O(n). A fusion is done for each branching of the directed tree, using linear time in the number of branches. Thus, all fusions together run in time O(n). Even for general conditional constraint trees, the tight lower answer to premise-restricted complete queries, and hence also the tight answer to strongly conclusion-restricted complete queries, is analogously computed in time O(n). □

Theorem 4.12 The tight answer to a query can be computed in linear time in n.

Proof. We assume that the set of basic events B is totally ordered and that the basic events in the conjunctive events E and F of the query ∃(F|E)[x1,x2] are written in this order. First, the query is reduced to a complete query according to Section 4.4. This reduction can be done in time O(n). Now, if the generated complete query is premise-restricted or strongly conclusion-restricted, then the claim follows immediately from Lemma 4.11. Otherwise, the generated complete query is reduced to premise-restricted and strongly conclusion-restricted complete queries according to Section 4.3.
Also this reduction can be done in time O(n), since the basic event G in Theorem 4.9 a) is computable in time O(n). Hence, the claim follows from Theorem 4.9 and Lemma 4.11. Note that t2 = Ĥβ2(G,G↑) in Theorem 4.9 b) (1) can also be computed in time O(n). □

4.5.2 Conditional Constraint Trees

For general conditional constraint trees, our technique to compute the tight lower answer to queries still runs in linear time, while our technique to compute the tight upper answer to queries runs in polynomial time in the number of nodes of the tree. In the sequel, let (B,KB) be a general conditional constraint tree and let n denote its number of nodes.

Lemma 4.13 a) The tight lower answer to a premise-restricted complete query and the tight answer to a strongly conclusion-restricted complete query can be computed in linear time in n.

b) The tight upper answer to a premise-restricted complete query can be computed in polynomial time in n.

Proof. a) The claim is already shown in the proof of Lemma 4.11.

b) Our linear programming technique to compute the tight upper answer to premise-restricted complete queries runs in polynomial time in n: Linear programming runs in polynomial time in the size of the linear programs (Papadimitriou & Steiglitz, 1982; Schrijver, 1986), where the size of a linear program is given by its number of variables and its number of linear inequalities. We now show that the size of our linear programs in Section 4.1.2 is polynomial in n. The number of variables is n + 1. The number of linear inequalities in J(E,E↑) is 2n. By induction on the recursive definitions of J, J̄, and Ĵ, it can be shown that the number of min-operands in J(B,C), J̄(B,C), and Ĵ(B,C) is limited by |B(B,C)|², |B(B,C)|, and |B(B,C)|⁴, respectively.
Hence, the number of linear inequalities extracted from x ≤ J(E,E↑) and x ≤ Ĵ(E,E↑) is limited by |B(E,E↑)|² + |B(E,E↑)|⁴ = n² + n⁴. Thus, the overall number l of generated linear inequalities is limited by l_u = 2n + n² + n⁴. Finally, note that l_u is a very rough upper bound for l; in many conditional constraint trees (especially in those that branch very rarely), l is much lower than l_u. For example, taking a complete binary tree with n = 127 nodes, we get only l = 19,964 compared to l_u = 260,161,024. In the example of Section 4.1.2 with n = 9 nodes, we get only l = 72 compared to l_u = 6,660. Another example is a tree that is degenerated to a chain of basic events. In this case, we even get l = 5n + 1; that is, the overall number of generated linear inequalities is linear in n. □

Theorem 4.14 a) The tight lower answer to a query can be computed in linear time in n.

b) The tight upper answer to a query can be computed in polynomial time in n.

Proof. We assume that the set of basic events B is totally ordered and that the basic events in the conjunctive events E and F of the query ∃(F|E)[x1,x2] are written in this order. Like in the proof of Theorem 4.12, the query is reduced to a complete query according to Section 4.4. This reduction can be done in time O(n). Now, if the generated complete query is premise-restricted or strongly conclusion-restricted, then the claims follow immediately from Lemma 4.13. Otherwise, the generated complete query is reduced to premise-restricted and strongly conclusion-restricted complete queries according to Section 4.3. Again, this reduction can be done in time O(n), since the basic event G in Theorem 4.9 a) is computable in time O(n). Thus, the claims follow from Theorem 4.9 and Lemma 4.13.
Note that in Theorem 4.9 b) (1), the tight lower answer to ∃(F|B)[x1,x2] can be computed without u2 and v2. □

4.6 Comparison with the Classical Linear Programming Approach

As a comparison, we now briefly describe how probabilistic deduction in conditional constraint trees can be done by the classical linear programming approach (Paaß, 1988; van der Gaag, 1991; Amarger et al., 1991; Hansen et al., 1995). In the sequel, let ∃(F|E)[x1,x2] be a query to an exact or general conditional constraint tree (B,KB) over n nodes. The tight answer to ∃(F|E)[x1,x2] can be computed by solving two linear programs. In detail, the requested greatest lower and least upper bounds are given by the optimal values of the following two linear programs with x_A ≥ 0 (A ∈ A_B) and opt ∈ {min, max}:

opt Σ_{A ∈ A_B, A ⇒ EF} x_A subject to
    Σ_{A ∈ A_B, A ⇒ E} x_A = 1
    Σ_{A ∈ A_B, A ⇒ GH} x_A ≥ u1 · Σ_{A ∈ A_B, A ⇒ G} x_A   for all (H|G)[u1,u2] ∈ KB
    Σ_{A ∈ A_B, A ⇒ GH} x_A ≤ u2 · Σ_{A ∈ A_B, A ⇒ G} x_A   for all (H|G)[u1,u2] ∈ KB

That is, the tight answer is computed by solving two linear programs with 2^n variables and 4n − 2 linear inequalities. For example, the tight answer to the premise-restricted complete query ∃(QRSTU|M)[x1,x2] to the conditional constraint trees in Fig. 2 yields two linear programs with 2^9 = 512 variables and 4·9 − 2 = 34 linear inequalities. Hence, if we now solve these two linear programs by the standard simplex method or the standard interior-point technique, then we immediately need exponential time in n. It is still an open question whether column generation techniques can help to solve the two linear programs in less than exponential time in n in the worst case.

5.
Comparison with Bayesian Networks

In this section, we briefly discuss the relationship between conditional constraint trees and Bayesian networks (Pearl, 1988). A Bayesian network is defined by a directed acyclic graph G over discrete random variables X1, X2, …, Xn as nodes and by a conditional probability distribution Pr(Xi | pa(Xi)) for each random variable Xi and each instantiation pa(Xi) of its parents pa(Xi). It specifies a unique joint probability distribution Pr over X1, X2, …, Xn by:

Pr(X1, X2, …, Xn) = ∏_{i=1}^{n} Pr(Xi | pa(Xi)).

That is, the joint probability distribution Pr is uniquely determined by the conditional distributions Pr(Xi | pa(Xi)) and certain conditional independencies encoded in G. Hence, Bayesian trees (that is, Bayesian networks that have a directed tree as associated directed acyclic graph) with only binary random variables seem to be very close to exact conditional constraint trees. However, exact and general conditional constraint trees are associated with an undirected tree that does not encode any independencies! For this reason, exact and general conditional constraint trees describe convex sets of joint probability distributions rather than single joint probability distributions. But would it be possible to additionally assume certain independencies? Of course, with each exact or general conditional constraint tree (B,KB), we can associate all probabilistic interpretations Pr that are models of KB and that additionally have the undirected tree (B,↔) as an I-map (Pearl, 1988). That is, we would have independencies without causal directionality, like in Markov trees (Pearl, 1988).
However, this idea does not carry us to a single probabilistic interpretation (neither for exact conditional constraint trees, nor for general conditional constraint trees), and it is an interesting topic of future research to investigate how the computation of tight answers in exact and general conditional constraint trees changes under this kind of independencies (which yield tighter bounds, since they reduce the number of models of exact and general conditional constraint trees). Finally, if we additionally fix the probability of exactly one node, then an exact conditional constraint tree under the described independencies specifies exactly one probabilistic interpretation (note that, to keep satisfiability, the probability of a node must respect certain upper bounds, which are entailed by the exact conditional constraint tree). But such exact conditional constraint trees are in fact Bayesian trees with only binary random variables.

6. Summary and Conclusions

We showed that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrated on the special case of probabilistic deduction in exact and general conditional constraint trees. We presented very efficient techniques for globally complete probabilistic deduction. More precisely, for exact conditional constraint trees, we presented a local approach that runs in linear time in the size of the conditional constraint trees. For general conditional constraint trees, we introduced a global approach that runs in polynomial time in the size of the conditional constraint trees. Probabilistic deduction in conditional constraint trees is motivated by previous work in the literature on inference rules. It generalizes patterns of commonsense reasoning that have been thoroughly studied in this work.
Hence, we presented a new class of tractable probabilistic deduction problems, which are driven by artificial intelligence applications. It is also important to note that the deduction process in exact and general conditional constraint trees can easily be elucidated in a graphical way. For example, the computation of the tight answer to the premise-restricted complete query ∃(QRSTU|M)[x1,x2] to the exact conditional constraint tree in Fig. 2, left side, can be illustrated by labeling each node of the directed tree in Fig. 3 with the corresponding tightest bounds of Table 1. Like Bayesian networks, conditional constraint trees are well-structured probabilistic knowledge bases that have an intuitive graphical representation. Differently from Bayesian networks, conditional constraint trees do not encode any probabilistic independencies. Thus, they can also be understood as a complement to Bayesian networks, useful for restricted applications in which well-structured independencies do not hold or are difficult to assess. Conditional constraint trees are quite restricted in their expressive power. However, in more general probabilistic knowledge bases, probabilistic deduction in conditional constraint trees may always act as a local inference rule, for example, when we desire explanatory information on some specific local deductions from a subset of the whole knowledge base (which could especially be useful in the design phase of a probabilistic knowledge base). An important conclusion of this paper concerns the question of whether to perform probabilistic deduction by the iterative application of inference rules or by linear programming. The techniques of this paper have been elaborated by following the idea of inference rules in probabilistic deduction.
Hence, on the one hand, this paper shows that the idea of inference rules can indeed lead to efficient techniques for globally complete probabilistic deduction in restricted settings. On the other hand, however, given the technical complexity of the corresponding proofs, it seems unlikely that these results can be extended to probabilistic knowledge bases that are significantly more general than conditional constraint trees. That is, as far as significantly more general probabilistic deduction problems with conditional constraints are concerned, the iterative application of inference rules does not seem very promising for globally complete probabilistic deduction. Note that a similar conclusion is drawn in a companion paper (Lukasiewicz, 1998a, 1999a), which shows the limits of locally complete inference rules for probabilistic deduction under taxonomic knowledge. For example, probabilistic deduction from probabilistic logic programs that do not assume probabilistic independencies (Ng & Subrahmanian, 1993, 1994; Lukasiewicz, 1998d) should better not be done by the iterative application of inference rules. Much more promising techniques are, for example, global techniques by linear programming (Lukasiewicz, 1998d) and in particular approximation techniques based on truth-functional many-valued logics (Lukasiewicz, 1998b, 1999b).

Acknowledgements

I am very grateful to Michael Wellman and the referees for their useful comments. I also want to thank Thomas Eiter for valuable comments on an earlier version of this paper. This paper is an extended and revised version of a paper that appeared in Principles of Knowledge Representation and Reasoning: Proceedings of the 6th International Conference, pp. 380-391.

Appendix A.
Preliminaries of the Proofs for Sections 4.1 to 4.3

In this section, we make some technical preparations for the proofs of Theorems 4.1, 4.2, 4.7, 4.8, and 4.9. In the sequel, we use the notation

x1,1  x1,2 | r1
x2,1  x2,2 | r2
c1    c2

as an abbreviation of the following system of equations:

x1,1 + x1,2 = r1   x2,1 + x2,2 = r2   x1,1 + x2,1 = c1   x1,2 + x2,2 = c2.   (11)

The next lemma provides the optimal values of two linear programs to be solved in the proofs of Theorems 4.1, 4.2, 4.7, 4.8, and 4.9.

Lemma A.1 Let r1, r2, c1, c2 ≥ 0 with r1 + r2 = c1 + c2. For all i, j ∈ {1,2}:

a) min(ri, cj) = max xi,j subject to (11) and xn,m ≥ 0 for all n, m ∈ {1,2}.

b) max(0, ri − c3−j) = min xi,j subject to (11) and xn,m ≥ 0 for all n, m ∈ {1,2}.

Proof. The claims can easily be verified (Lukasiewicz, 1996). □

Let us assume that a conditional constraint tree is the union of two subtrees that have just one node in common. A model of each subtree and a third model related to the common node can then be combined into a model of the whole conditional constraint tree. This important result follows from the next lemma.

Lemma A.2 Let B1 and B2 be sets of basic events with B1 ∩ B2 = ∅. Let B0 be a new basic event that is not contained in B1 ∪ B2. Let Pr1 and Pr2 be probabilistic interpretations on the atomic events over B1 ∪ {B0} and B2 ∪ {B0}, respectively. Let B1 and B2 be conjunctive events over B1 and B2, respectively. Let Pr0 be a probabilistic interpretation on the atomic events over {B0, B1, B2} with Pr0(H0 H1) = Pr1(H0 H1) and Pr0(H0 H2) = Pr2(H0 H2) for all atomic events H0, H1, and H2 over {B0}, {B1}, and {B2}, respectively.
There is a probabilistic interpretation Pr on the atomic events over B1 ∪ B2 ∪ {B0} with:

Pr(H0 H1 H2) = Pr0(H0 H1 H2),  Pr(H0 A1) = Pr1(H0 A1),  and  Pr(H0 A2) = Pr2(H0 A2)   (12)

for all atomic events H0, H1, H2, A1, and A2 over the sets of basic events {B0}, {B1}, {B2}, B1, and B2, respectively.

Proof. Let the probabilistic interpretation Pr on the atomic events over B1 ∪ B2 ∪ {B0} be defined as follows:

Pr(H0 A1 A2) = Pr0(H0 H1 H2) · (Pr1(H0 A1)/Pr1(H0 H1)) · (Pr2(H0 A2)/Pr2(H0 H2))   if Pr1(H0 H1) · Pr2(H0 H2) > 0
Pr(H0 A1 A2) = 0   if Pr1(H0 H1) · Pr2(H0 H2) = 0

for all atomic events H0, A1, and A2 over {B0}, B1, and B2, respectively, with atomic events H1 over {B1} and H2 over {B2} such that A1 ⇒ H1 and A2 ⇒ H2. Now, we must show that Pr satisfies (12). Let H0, H1, and H2 be atomic events over {B0}, {B1}, and {B2}, respectively. For Pr1(H0 H1) > 0 and Pr2(H0 H2) > 0, we get:

Pr(H0 H1 H2) = Σ_{A1 ∈ A_B1, A1 ⇒ H1; A2 ∈ A_B2, A2 ⇒ H2} Pr0(H0 H1 H2) · (Pr1(H0 A1)/Pr1(H0 H1)) · (Pr2(H0 A2)/Pr2(H0 H2)) = Pr0(H0 H1 H2).

For Pr1(H0 H1) = 0 or Pr2(H0 H2) = 0, we get Pr(H0 H1 H2) = 0 = Pr0(H0 H1 H2). Let H0, H1, and A1 be atomic events over {B0}, {B1}, and B1, respectively, with A1 ⇒ H1.
For Pr1(H0 H1) > 0, Pr2(H0 B2) > 0, and Pr2(H0 ¬B2) > 0, it holds:

Pr(H0 A1) = Σ_{A2 ∈ A_B2, A2 ⇒ B2} Pr0(H0 H1 B2) · (Pr1(H0 A1)/Pr1(H0 H1)) · (Pr2(H0 A2)/Pr2(H0 B2))
          + Σ_{A2 ∈ A_B2, A2 ⇒ ¬B2} Pr0(H0 H1 ¬B2) · (Pr1(H0 A1)/Pr1(H0 H1)) · (Pr2(H0 A2)/Pr2(H0 ¬B2))
          = Pr0(H0 H1) · Pr1(H0 A1)/Pr1(H0 H1) = Pr1(H0 A1).

For Pr1(H0 H1) > 0, Pr2(H0 B2) > 0, and Pr2(H0 ¬B2) = 0, we get:

Pr(H0 A1) = Σ_{A2 ∈ A_B2, A2 ⇒ B2} Pr0(H0 H1 B2) · (Pr1(H0 A1)/Pr1(H0 H1)) · (Pr2(H0 A2)/Pr2(H0 B2))
          = Pr0(H0 H1) · Pr1(H0 A1)/Pr1(H0 H1) = Pr1(H0 A1).

The proof is similar for Pr1(H0 H1) > 0, Pr2(H0 B2) = 0, and Pr2(H0 ¬B2) > 0. For Pr1(H0 H1) = 0, we get Pr(H0 A1) = 0 = Pr1(H0 A1). Finally, the proof of Pr(H0 A2) = Pr2(H0 A2) for all atomic events H0 over {B0} and A2 over B2 can be done analogously. □

Appendix B. Proofs for Section 4.1

In this section, we give the proofs of Theorems 4.1 and 4.2. That is, we show the global soundness and the global completeness of the functions Hβ1, Hβ2, H̄β2, and Ĥβ2. The proofs are done by induction on the recursive definitions of Hβ1, Hβ2, H̄β2, and Ĥβ2. To prove global soundness, we just have to show the local soundness of the computations in Leaf, Chaining, and Fusion. To prove global completeness, we construct two models of the conditional constraint tree, one related to the greatest lower bound and another one related to the least upper bound computed in Leaf, Chaining, and Fusion. For Leaf, such a model is trivially given.
For Chaining, we combine a model of the arrow, a model of the subtree, and a model connected to the common node into a model of the extended conditional constraint tree. For Fusion, we combine models of the subtrees and a model connected to the common node into a model of the extended conditional constraint tree. More precisely, for Chaining and Fusion, the models of the subtrees are related to previously computed tightest bounds, while the model connected to the common node is related to the tightest bounds computed in the running Chaining or Fusion. We need the following technical preparations. The next lemma helps us to show the global completeness of the functions Hβ2, H̄β2, and Ĥβ2 in Chaining and Fusion.

Lemma B.3 a) For all real numbers u, v ∈ (0,1], x2 ∈ [0,1], and x̄2, z2 ∈ [0,∞) with x2, x̄2 ≤ z2 and z2 ≤ x2 + x̄2, there is some x ∈ [z2 − x̄2, x2] with:

min(1, u z2/v, 1 − u + u x/v, u − u x/v + u z2/v) = min(1, u z2/v, 1 − u + u x2/v, u + u x̄2/v).   (13)

b) For v2, x2 ∈ [0,1] and v̄2, x̄2, w2, z2 ∈ [0,∞) with v2 ≤ w2, v̄2 ≤ w2, x2 ≤ z2, x̄2 ≤ z2, w2 ≤ v2 + v̄2, and z2 ≤ x2 + x̄2, there are v ∈ [w2 − v̄2, v2] and x ∈ [z2 − x̄2, x2] with:

min(w2, z2, v + z2 − x, x + w2 − v) = min(w2, z2, v2 + x̄2, x2 + v̄2)
min(v, x) = min(v2, x2).   (14)

Proof. The claims can easily be verified (Lukasiewicz, 1996). □

The following lemma helps us to prove the local soundness and the local completeness of the functions Hβ1, Hβ2, H̄β2, and Ĥβ2 in Chaining and Fusion.

Lemma B.4 a) Let u, v ∈ (0,1], x ∈ [0,1], and x̄ ∈ [0,∞).
For all probabilistic interpretations Pr with Pr(B) > 0, the conditions u · Pr(B) = Pr(BC), v · Pr(C) = Pr(BC), x · Pr(C) = Pr(C L(C↑)), and x̄ · Pr(C) = Pr(¬C L(C↑)) are equivalent to the following constraints on the atomic probabilities over {B, C, L(C↑)}, divided by Pr(B):

Pr(BC)/Pr(B) = u,  Pr(B¬C)/Pr(B) = 1 − u,  Pr(¬BC)/Pr(B) = u/v − u,
Pr(C L(C↑))/Pr(B) = u x/v,  Pr(C ¬L(C↑))/Pr(B) = u/v − u x/v,  Pr(¬C L(C↑))/Pr(B) = u x̄/v.

b) Let v, x ∈ [0,1] and v̄, x̄ ∈ [0,∞). For all probabilistic interpretations Pr with Pr(B) > 0, the conditions v · Pr(B) = Pr(B L(G)), v̄ · Pr(B) = Pr(¬B L(G)), x · Pr(B) = Pr(B L(H)), and x̄ · Pr(B) = Pr(¬B L(H)) are equivalent to the following constraints on the atomic probabilities over {B, L(G), L(H)}, divided by Pr(B):

Pr(B L(G))/Pr(B) = v,  Pr(¬B L(G))/Pr(B) = v̄,  Pr(B L(H))/Pr(B) = x,  Pr(¬B L(H))/Pr(B) = x̄.

Proof. The claims can be verified by straightforward arithmetic transformations based on the properties of probabilistic interpretations. □

After these preparations, we are now ready to prove the global soundness and the global completeness of the functions Hβ1, Hβ2, H̄β2, and Ĥβ2.

Proof of Theorem 4.1. The claims are proved by induction on the recursive definition of Hβ1. The case C = B1 … Bk is tackled by iteratively splitting C into two conjunctive events. Thus, it is reduced to C = GH with conjunctive events G and H that are disjoint in their basic events. For C = B1, we define u = Pr(C|B), v = Pr(B|C), and x1 = Hβ1(C,C↑). For C = B1 … Bk, hence C = GH, let v1 = Hβ1(B,G) and x1 = Hβ1(B,H).
a) All models Pr ∈ Mo(B,C) with Pr(B) = 0 satisfy the indicated condition. In the sequel, let Pr ∈ Mo(B,C) with Pr(B) > 0.

Basis: Let C = B. Since C = L(C), we get:

β1 · Pr(B) = 1 · Pr(B) = Pr(BC) = Pr(B L(C)).

Induction: Let C = B1. For all models Pr2 ∈ Mo(C,C↑), we get by the induction hypothesis x1 · Pr2(C) ≤ Pr2(C L(C↑)). Thus, Pr satisfies the same condition. Since L(C↑) = L(C) and by Lemmata A.1 and B.4 a), we then get:

β1 · Pr(B) = max(0, u − u/v + u x1/v) · Pr(B) ≤ Pr(B L(C↑)) = Pr(B L(C)).

Let C = GH. For all Pr1 ∈ Mo(B,G) and Pr2 ∈ Mo(B,H), we get by the induction hypothesis v1 · Pr1(B) ≤ Pr1(B L(G)) and x1 · Pr2(B) ≤ Pr2(B L(H)). Thus, Pr satisfies the same conditions. Since L(G) L(H) = L(GH) = L(C) and by Lemmata A.1 and B.4 b):

max(0, v1 + x1 − 1) · Pr(B) ≤ Pr(B L(G) L(H)) = Pr(B L(C)).

b) Basis: Let C = B. A model Pr ∈ Mo(B,C) such that Pr(B) > 0, 1 · Pr(B) = β1 · Pr(B) = Pr(B L(C)), and Pr(¬B L(C)) = 0 is given by ¬B, B ↦ 0, 1.

Induction: Let C = B1. Let the model Pr1 of {(C|B)[u,u], (B|C)[v,v]} with Pr1(B) > 0 and Pr1(C) > 0 be defined like in the proof of Theorem 3.2. We now choose an appropriate model Pr2 ∈ Mo(C,C↑). Let us first consider the case x1 > 0, v = 1, or not L(C↑) ⇒ C. By the induction hypothesis, there exists a model Pr2 ∈ Mo(C,C↑) with Pr2(C) > 0, x1 · Pr2(C) = Pr2(C L(C↑)), and Pr2(¬C L(C↑)) = 0 iff L(C↑) ⇒ C. Let us next assume x1 = 0, v < 1, and L(C↑) ⇒ C. By Theorem 3.2, there exists a model Pr″2 ∈ Mo(C,C↑) with Pr″2(C L(C↑)) > 0. By the induction hypothesis, there exists a model Pr′2 ∈ Mo(C,C↑) with Pr′2(C) > 0 and 0 · Pr′2(C) = Pr′2(C L(C↑)).
Hence, there exists a model Pr2 ∈ Mo(C,C↑) with Pr2(C) > 0 and

min(1 − v, Pr″2(C L(C↑))/Pr″2(C)) · Pr2(C) = Pr2(C L(C↑)).

By Lemma 3.1, we can choose Pr1 and Pr2 with Pr1(C) = Pr2(C) and Pr1(B C) ≥ Pr2(C L(C↑)). By Lemmata A.1 and B.4 a), we can choose the probabilistic interpretation Pr0 over {B, C, L(C↑)} with Pr0(A1) = Pr1(A1) and Pr0(A2) = Pr2(A2) for all atomic events A1 and A2 over {B, C} and {C, L(C↑)}, respectively, such that:

Pr0(¬B C L(C↑)) = max(0, Pr2(C L(C↑)) − Pr1(B C)) = 0
Pr0(B C L(C↑)) = max(0, Pr2(C L(C↑)) − Pr1(¬B C)).

By Lemma A.2 with B1 = {B}, B2 = B(C,C↑) \ {C}, B0 = C, B1 = B, and B2 = L(C↑), there exists a probabilistic interpretation Pr over B(B,C) with (12). Hence, it holds Pr ∈ Mo(B,C) and Pr(B) > 0. By Lemma B.4 a), we get:

β1 · Pr(B) = max(0, u − u/v + u x1/v) · Pr(B) = Pr(B L(C↑)) = Pr(B L(C)).

Moreover, it is easy to see that Pr(¬B L(C)) = 0 iff L(C) ⇒ B.

Let C = GH. By the induction hypothesis, there are models Pr1 ∈ Mo(B,G) and Pr2 ∈ Mo(B,H) with Pr1(B) > 0, Pr2(B) > 0, v1 · Pr1(B) = Pr1(B L(G)), x1 · Pr2(B) = Pr2(B L(H)), Pr1(¬B L(G)) = 0 iff L(G) ⇒ B, and Pr2(¬B L(H)) = 0 iff L(H) ⇒ B. By Lemma 3.1, we can choose Pr1 and Pr2 with Pr1(B) = Pr2(B) and Pr1(B L(G)) ≥ Pr2(B L(H)).
By Lemmata A.1 and B.4 b), we can choose the probabilistic interpretation Pr′ over {B, L(G), L(H)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, L(G)} and {B, L(H)}, respectively, such that:

  Pr′(B ¬L(G) L(H)) = min(Pr₂(B L(H)), Pr₁(B ¬L(G)))
  Pr′(B L(G) L(H)) = max(0, Pr₂(B L(H)) − Pr₁(B ¬L(G))).

By Lemma A.2 with ℬ₁ = ℬ(B, G) \ {B}, ℬ₂ = ℬ(B, H) \ {B}, B₀ = B, B₁ = L(G), and B₂ = L(H), there exists a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C) and Pr(B) > 0. By Lemma B.4 b), we get:

  max(0, v₁ + x₁ − 1) · Pr(B) = Pr(B L(G) L(H)) = Pr(B L(C)).

Moreover, it is easy to see that Pr(¬B L(C)) = 0 iff L(C) ⇒ B. □

Proof of Theorem 4.2. The claims are proved by induction on the recursive definitions of H₂, H̄₂, and H̃₂. Again, the case C = B₁⋯B_k is tackled by iteratively splitting C into two conjunctive events. Thus, it is reduced to C = GH with conjunctive events G and H that are disjoint in their basic events. For C = B₁, let u = Pr(C|B), v = Pr(B|C), and

  x₂ = H₂(C, C′), x̄₂ = H̄₂(C, C′), z₂ = H̃₂(C, C′).

For C = B₁⋯B_k, hence C = GH, we define:

  v₂ = H₂(B, G), v̄₂ = H̄₂(B, G), w₂ = H̃₂(B, G)
  x₂ = H₂(B, H), x̄₂ = H̄₂(B, H), z₂ = H̃₂(B, H).

a) For Pr ∈ Mo(B, C) with Pr(B) = 0, we get Pr(N) = 0 for all N ∈ ℬ(B, C). Thus, Pr satisfies the indicated conditions. Next, let Pr ∈ Mo(B, C) with Pr(B) > 0.

Basis: Let C = B. Since L(C) = C, we get:

  Pr(B L(C)) = Pr(BC) = 1 · Pr(B) = α₂ · Pr(B)
  Pr(¬B L(C)) = Pr(¬B C) = 0 · Pr(B) = ᾱ₂ · Pr(B)
  Pr(L(C)) = Pr(C) = 1 · Pr(B) = γ₂ · Pr(B).

Induction: Let C = B₁.
For all models Pr₂ ∈ Mo(C, C′), we get by the induction hypothesis Pr₂(C L(C′)) ≤ x₂ · Pr₂(C), Pr₂(¬C L(C′)) ≤ x̄₂ · Pr₂(C), and Pr₂(L(C′)) ≤ z₂ · Pr₂(C). Hence, Pr satisfies the same conditions. Since L(C) = L(C′) and by Lemmata A.1 and B.4 a), we then get:

  Pr(B L(C)) = Pr(B L(C′)) ≤ min(1, u·z₂/v, 1 − u + u·x₂/v, u + u·x̄₂/v) · Pr(B) = α₂ · Pr(B)
  Pr(¬B L(C)) = Pr(¬B L(C′)) ≤ min(u·x̄₂/v + u/v − u, u·z₂/v) · Pr(B) = ᾱ₂ · Pr(B)
  Pr(L(C)) = Pr(L(C′)) ≤ u·z₂/v · Pr(B) = γ₂ · Pr(B).

Let C = GH. For all models Pr₁ ∈ Mo(B, G) and all models Pr₂ ∈ Mo(B, H), we get by the induction hypothesis:

  Pr₁(B L(G)) ≤ v₂ · Pr₁(B), Pr₂(B L(H)) ≤ x₂ · Pr₂(B)
  Pr₁(¬B L(G)) ≤ v̄₂ · Pr₁(B), Pr₂(¬B L(H)) ≤ x̄₂ · Pr₂(B)
  Pr₁(L(G)) ≤ w₂ · Pr₁(B), Pr₂(L(H)) ≤ z₂ · Pr₂(B).

Thus, Pr satisfies the same conditions. Since L(C) = L(GH) = L(G) L(H) and by Lemmata A.1 and B.4 b), we get:

  Pr(B L(C)) = Pr(B L(G) L(H)) ≤ min(v₂, x₂) · Pr(B)
  Pr(¬B L(C)) = Pr(¬B L(G) L(H)) ≤ min(v̄₂, x̄₂) · Pr(B)
  Pr(L(C)) = Pr(L(G) L(H)) ≤ min(w₂, z₂, v₂ + x̄₂, x₂ + v̄₂) · Pr(B).

b) and c) Basis: Let C = B. A model Pr ∈ Mo(B, C) with Pr(B) > 0 satisfying Pr(B L(C)) = 1 · Pr(B) = α₂ · Pr(B), Pr(¬B L(C)) = 0 · Pr(B) = ᾱ₂ · Pr(B), and Pr(L(C)) = 1 · Pr(B) = γ₂ · Pr(B) is given by ¬B, B ↦ 0, 1.

Induction: Let C = B₁. Let the model Pr₁ of {(C|B)[u, u], (B|C)[v, v]} with Pr₁(B) > 0 and Pr₁(C) > 0 be defined like in the proof of Theorem 3.2. For the proof of c), by the induction hypothesis, there is some Pr₂ ∈ Mo(C, C′) with Pr₂(C) > 0, Pr₂(¬C L(C′)) = x̄₂ · Pr₂(C), and Pr₂(L(C′)) = z₂ · Pr₂(C).
By Lemma 3.1, we can choose Pr₁ and Pr₂ with Pr₁(C) = Pr₂(C) and Pr₁(BC) ≥ Pr₂(C L(C′)). By Lemma A.1, we can choose the probabilistic interpretation Pr′ over {B, C, L(C′)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, C} and {C, L(C′)}, respectively, such that:

  Pr′(B C L(C′)) = min(Pr₁(BC), Pr₂(C L(C′))) = Pr₂(C L(C′))
  Pr′(¬B C L(C′)) = min(Pr₁(¬B C), Pr₂(C L(C′))).

By Lemma A.2 with ℬ₁ = {B}, ℬ₂ = ℬ(C, C′) \ {C}, B₀ = C, B₁ = B, and B₂ = L(C′), there is a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C) and Pr(B) > 0. By Lemma B.4 a), we get:

  Pr(¬B L(C)) = Pr(¬B L(C′)) = min(u·x̄₂/v + u/v − u, u·z₂/v) · Pr(B) = ᾱ₂ · Pr(B)
  Pr(L(C)) = Pr(L(C′)) = u·z₂/v · Pr(B) = γ₂ · Pr(B).

For the proof of b), by the induction hypothesis, there are models Pr₁,₂, Pr₂,₂ ∈ Mo(C, C′) with Pr₁,₂(C) > 0, Pr₂,₂(C) > 0, and

  Pr₁,₂(C L(C′)) = x₂ · Pr₁,₂(C), Pr₁,₂(L(C′)) = z₂ · Pr₁,₂(C)
  Pr₂,₂(¬C L(C′)) = x̄₂ · Pr₂,₂(C), Pr₂,₂(L(C′)) = z₂ · Pr₂,₂(C). (15)

These conditions already entail x₂ ≤ z₂ and x̄₂ ≤ z₂. With the results from a), we additionally get z₂ ≤ x₂ + x̄₂. By Lemma B.3 a), there is x ∈ [z₂ − x̄₂, x₂] with (13). By (15), there is Pr₂ ∈ Mo(C, C′) with Pr₂(C) > 0 and

  Pr₂(C L(C′)) = x · Pr₂(C), Pr₂(L(C′)) = z₂ · Pr₂(C).

By Lemma 3.1, we can choose Pr₁ and Pr₂ with Pr₁(C) = Pr₂(C).
By Lemma A.1, we can choose the probabilistic interpretation Pr′ over {B, C, L(C′)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, C} and {C, L(C′)}, respectively, such that:

  Pr′(¬B C L(C′)) = min(Pr₁(¬B C), Pr₂(C L(C′)))
  Pr′(B C L(C′)) = min(Pr₁(BC), Pr₂(C L(C′))).

By Lemma A.2 with ℬ₁ = {B}, ℬ₂ = ℬ(C, C′) \ {C}, B₀ = C, B₁ = B, and B₂ = L(C′), there is a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C) and Pr(B) > 0. By Lemma B.4 a), we get:

  Pr(B L(C)) = Pr(B L(C′)) = min(1, u·z₂/v, 1 − u + u·x₂/v, u + u·x̄₂/v) · Pr(B) = α₂ · Pr(B)
  Pr(L(C)) = Pr(L(C′)) = u·z₂/v · Pr(B) = γ₂ · Pr(B).

Let C = GH. We just show b); the claim in c) can be proved analogously. By the induction hypothesis, there are models Pr₁,₁, Pr₂,₁ ∈ Mo(B, G) and Pr₁,₂, Pr₂,₂ ∈ Mo(B, H) with Pr₁,₁(B) > 0, Pr₂,₁(B) > 0, Pr₁,₂(B) > 0, Pr₂,₂(B) > 0, and

  Pr₁,₁(B L(G)) = v₂ · Pr₁,₁(B), Pr₁,₁(L(G)) = w₂ · Pr₁,₁(B)
  Pr₂,₁(¬B L(G)) = v̄₂ · Pr₂,₁(B), Pr₂,₁(L(G)) = w₂ · Pr₂,₁(B)
  Pr₁,₂(B L(H)) = x₂ · Pr₁,₂(B), Pr₁,₂(L(H)) = z₂ · Pr₁,₂(B)
  Pr₂,₂(¬B L(H)) = x̄₂ · Pr₂,₂(B), Pr₂,₂(L(H)) = z₂ · Pr₂,₂(B). (16)

These conditions already entail v₂ ≤ w₂, v̄₂ ≤ w₂, x₂ ≤ z₂, and x̄₂ ≤ z₂. With the results from a), we additionally get w₂ ≤ v₂ + v̄₂ and z₂ ≤ x₂ + x̄₂. By Lemma B.3 b), there are v ∈ [w₂ − v̄₂, v₂] and x ∈ [z₂ − x̄₂, x₂] with (14).
By (16), there are Pr₁ ∈ Mo(B, G) and Pr₂ ∈ Mo(B, H) with Pr₁(B) > 0, Pr₂(B) > 0, and

  Pr₁(B L(G)) = v · Pr₁(B), Pr₁(L(G)) = w₂ · Pr₁(B)
  Pr₂(B L(H)) = x · Pr₂(B), Pr₂(L(H)) = z₂ · Pr₂(B).

By Lemma 3.1, we can choose Pr₁ and Pr₂ with Pr₁(B) = Pr₂(B). By Lemma A.1, we can choose the probabilistic interpretation Pr′ over {B, L(G), L(H)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, L(G)} and {B, L(H)}, respectively, such that:

  Pr′(B L(G) L(H)) = min(Pr₁(B L(G)), Pr₂(B L(H)))
  Pr′(¬B L(G) L(H)) = min(Pr₁(¬B L(G)), Pr₂(¬B L(H))).

By Lemma A.2 with ℬ₁ = ℬ(B, G) \ {B}, ℬ₂ = ℬ(B, H) \ {B}, B₀ = B, B₁ = L(G), and B₂ = L(H), there is a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C) and Pr(B) > 0. By Lemma B.4 b), we get:

  Pr(B L(C)) = Pr(B L(G) L(H)) = min(v₂, x₂) · Pr(B)
  Pr(L(C)) = Pr(L(G) L(H)) = min(w₂, z₂, v₂ + x̄₂, x₂ + v̄₂) · Pr(B). □

Finally, note that computing least upper bounds is more difficult than computing greatest lower bounds, since for each edge B → C, by Lemmata 3.1 and B.4 a), the greatest lower bound of Pr(B C L(C′))/Pr(B) subject to Pr ∈ Mo(B, C) and Pr(B) > 0 is always 0, but the least upper bound of Pr(B C L(C′))/Pr(B) subject to Pr ∈ Mo(B, C) and Pr(B) > 0 is generally not 1.

Appendix C. Proofs for Section 4.2

In this section, we give the proofs of Theorems 4.7 and 4.8. We need some technical preparations as follows. The next lemma helps us to show the local soundness of the function H̃₁ in Fusion.
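Many of the combination steps in these proofs instantiate the classical Fréchet bounds max(0, p + q − 1) ≤ Pr(A ∧ B) ≤ min(p, q) for events with marginals p and q, which is where min/max expressions such as max(0, v₁ + x₁ − 1) and min(v₂, x₂) come from. The following self-contained sketch (not from the paper; all names are ours) checks these bounds numerically and confirms that every value between them is realized by some joint distribution:

```python
import random

def frechet_bounds(p, q):
    """Classical Fréchet bounds on Pr(A ∧ B) for marginals Pr(A) = p, Pr(B) = q."""
    return max(0.0, p + q - 1.0), min(p, q)

random.seed(0)
for _ in range(1000):
    # Random joint distribution over the four atoms AB, A¬B, ¬AB, ¬A¬B.
    masses = [random.random() for _ in range(4)]
    total = sum(masses)
    ab, a_nb, na_b, na_nb = (m / total for m in masses)
    p, q = ab + a_nb, ab + na_b          # marginals Pr(A), Pr(B)
    lo, hi = frechet_bounds(p, q)
    assert lo - 1e-12 <= ab <= hi + 1e-12

# Tightness: for t in [lo, hi], the atom masses t, p - t, q - t, 1 - p - q + t
# are all nonnegative, so each t is the value Pr(A ∧ B) of some model.
p, q = 0.7, 0.6
lo, hi = frechet_bounds(p, q)
for i in range(11):
    t = lo + (hi - lo) * i / 10
    atoms = [t, p - t, q - t, 1.0 - p - q + t]
    assert all(a >= -1e-9 for a in atoms) and abs(sum(atoms) - 1.0) < 1e-9
```

With p = 0.7 and q = 0.6, the bounds are [0.3, 0.6]; this is the same interval arithmetic that the fusion step applies when it combines the child bounds v₁ and x₁ into max(0, v₁ + x₁ − 1).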
Lemma C.5 For all real numbers u₁, u, v₁, v, x₁, x, y₁, y ∈ (0, 1] with u₁ ≤ u, v₁ ≤ v, x₁ ≤ x, y₁ ≤ y, and u₁ + x₁ > 1, it holds:

  min(u/v − u, x/y − x) / (u + x − 1) ≤ min(u₁/v₁ − u₁, x₁/y₁ − x₁) / (u₁ + x₁ − 1).

Proof. The claim can easily be verified (Lukasiewicz, 1996). □

The following lemma helps us to show the local soundness and the local completeness of the function H̃₁ in Chaining and Fusion.

Lemma C.6 a) Let u, v, x, and y be real numbers from (0, 1]. For all probabilistic interpretations Pr with Pr(L(C′)) > 0, the conditions u · Pr(B) = Pr(BC), v · Pr(C) = Pr(BC), x · Pr(C) = Pr(C L(C′)), and y · Pr(L(C′)) = Pr(C L(C′)) are equivalent to:

  [Table: the probabilities of all atomic events over {B, C, L(C′)}, relative to Pr(L(C′)), expressed in terms of u, v, x, and y.]

b) Let u, v, x, and y be real numbers from (0, 1]. For all probabilistic interpretations Pr with Pr(B) > 0, the conditions u · Pr(B) = Pr(B L(G)), v · Pr(L(G)) = Pr(B L(G)), x · Pr(B) = Pr(B L(H)), and y · Pr(L(H)) = Pr(B L(H)) are equivalent to:

  [Table: the probabilities of all atomic events over {B, L(G), L(H)}, relative to Pr(B), expressed in terms of u, v, x, and y.]

Proof.
The claims can b e v eried b y straigh tforw ard arithmetic transformations based on the prop erties of probabilistic in terpretations. 2 W e are no w ready to pro v e Theorems 4.7 and 4.8. Pro of of Theorem 4.7. The claims are pro v ed b y induction on the recursiv e denition of H  1 . The pro of for C = B 1 B 2 : : : B k with k > 1 is done for k = 2. It can easily b e generalized to k  2. F or C = B 1 , w e dene u 1 = Pr 1 ( C j B ), v 1 = Pr 1 ( B j C ), x 1 = H  1 ( C ; C " ), and y 1 = H  1 ( C ; C " ). Note that  1 > 0 en tails x 1 ; y 1 > 0 and v 1 + x 1 > 1. F or C = B 1 B 2 , w e dene G = B 1 , H = B 2 , u 1 = H  1 ( B ; G ), v 1 = H  1 ( B ; G ), x 1 = H  1 ( B ; H ), and y 1 = H  1 ( B ; H ). Note that  1 > 0 en tails u 1 ; v 1 ; x 1 ; y 1 > 0 and u 1 + x 1 > 1. a) All mo dels Pr 2 Mo ( B ; C ) with Pr ( L ( C )) = 0 satisfy the indicated condition. In the sequel, let Pr 2 Mo ( B ; C ) with Pr ( L ( C )) > 0 and th us also Pr ( B ) > 0. Basis: Let C = B . Since C = L ( C ), w e get:  1  Pr ( L ( C )) = 1  Pr ( L ( C )) = Pr ( C L ( C )) = Pr ( B L ( C )) : Induction: Let C = B 1 . F or all mo dels Pr 2 2 Mo ( C ; C " ), w e get x 1  Pr 2 ( C )  Pr 2 ( C L ( C " )) b y Theorem 4.3 a), and y 1  Pr 2 ( L ( C " ))  Pr 2 ( C L ( C " )) b y the induction h yp othesis. Th us, Pr satises the same conditions. Since L ( C " ) = L ( C ) and b y Lemmata A.1 and C.6 a):  1 = y 1  y 1 x 1 + y 1 v 1 x 1  Pr ( C L ( C " )) = Pr ( L ( C " )) = Pr ( C L ( C )) = Pr ( L ( C )) : Let C = GH . F or all mo dels Pr 1 2 Mo ( B ; G ) and Pr 2 2 Mo ( B ; H ), w e get b y Theorem 4.3 a) and b y the induction h yp othesis, resp ectiv ely: u 1  Pr 1 ( B )  Pr 1 ( B L ( G )) ; x 1  Pr 2 ( B )  Pr 2 ( B L ( H )) v 1  Pr 1 ( L ( G ))  Pr 1 ( B L ( G )) ; y 1  Pr 2 ( L ( H ))  Pr 2 ( B L ( H )) : 235 Lukasiewicz Hence, Pr satises the same conditions. 
Since L(G) L(H) = L(GH) = L(C) and by Lemmata A.1, C.5, and C.6 b), we then get:

  α₁ = 1/(1 + min(u₁/v₁ − u₁, x₁/y₁ − x₁)/(u₁ + x₁ − 1)) ≤ 1/(1 + [Pr(¬B L(G) L(H))/Pr(B)] / [Pr(B L(G) L(H))/Pr(B)]) = Pr(B L(C))/Pr(L(C)).

b) Basis: Let C = B. A model Pr ∈ Mo(B, C) such that Pr(B) > 0, Pr(L(C)) > 0, 1 · Pr(L(C)) = Pr(B L(C)), and 1 · Pr(B) = Pr(B L(C)) is given by ¬B, B ↦ 0, 1.

Induction: Let C = B₁. Let the model Pr₁ of {(C|B)[u₁, u₁], (B|C)[v₁, v₁]} with Pr₁(B) > 0 and Pr₁(C) > 0 be defined like in the proof of Theorem 3.2. By the induction hypothesis, there is Pr₂ ∈ Mo(C, C′) with Pr₂(C) > 0, Pr₂(L(C′)) > 0, y₁ · Pr₂(L(C′)) = Pr₂(C L(C′)), and x₁ · Pr₂(C) = Pr₂(C L(C′)). By Lemma 3.1, we can choose Pr₁ and Pr₂ such that Pr₁(C) = Pr₂(C) and Pr₁(BC) ≥ Pr₂(C L(C′)). By Lemmata A.1 and C.6 a), we can choose the probabilistic interpretation Pr′ over {B, C, L(C′)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, C} and {C, L(C′)}, respectively, such that:

  Pr′(¬B C L(C′)) = max(0, Pr₂(C L(C′)) − Pr₁(BC)) = 0
  Pr′(B C L(C′)) = max(0, Pr₂(C L(C′)) − Pr₁(¬B C)).

By Lemma A.2 with ℬ₁ = {B}, ℬ₂ = ℬ(C, C′) \ {C}, B₀ = C, B₁ = B, and B₂ = L(C′), there exists a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C), Pr(B) > 0, and Pr(L(C)) > 0. Moreover, by Lemma C.6 a), we get:

  α₁ = y₁ − y₁/x₁ + y₁·v₁/x₁ = Pr(B L(C′))/Pr(L(C′)) = Pr(B L(C))/Pr(L(C))
  β₁ = u₁ − u₁/v₁ + u₁·x₁/v₁ = Pr(B L(C′))/Pr(B) = Pr(B L(C))/Pr(B).

Let C = GH.
By the induction hypothesis, there are models Pr₁ ∈ Mo(B, G) and Pr₂ ∈ Mo(B, H) with Pr₁(B) > 0, Pr₂(B) > 0, Pr₁(L(G)) > 0, Pr₂(L(H)) > 0, and

  u₁ · Pr₁(B) = Pr₁(B L(G)), x₁ · Pr₂(B) = Pr₂(B L(H))
  v₁ · Pr₁(L(G)) = Pr₁(B L(G)), y₁ · Pr₂(L(H)) = Pr₂(B L(H)).

By Lemma 3.1, we can choose Pr₁ and Pr₂ with Pr₁(B) = Pr₂(B) and Pr₁(B ¬L(G)) ≤ Pr₂(B L(H)). By Lemmata A.1 and C.6 b), we can choose the probabilistic interpretation Pr′ over {B, L(G), L(H)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, L(G)} and {B, L(H)}, respectively, such that:

  Pr′(B ¬L(G) L(H)) = min(Pr₂(B L(H)), Pr₁(B ¬L(G)))
  Pr′(B L(G) L(H)) = max(0, Pr₂(B L(H)) − Pr₁(B ¬L(G))).

By Lemma A.2 with ℬ₁ = ℬ(B, G) \ {B}, ℬ₂ = ℬ(B, H) \ {B}, B₀ = B, B₁ = L(G), and B₂ = L(H), there exists a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C) and Pr(B) > 0. By Lemma C.6 b), we get Pr(L(C)) > 0 and

  α₁ = 1/(1 + min(u₁/v₁ − u₁, x₁/y₁ − x₁)/(u₁ + x₁ − 1)) = 1/(1 + [Pr(¬B L(G) L(H))/Pr(B)] / [Pr(B L(G) L(H))/Pr(B)]) = Pr(B L(C))/Pr(L(C))
  β₁ = u₁ + x₁ − 1 = Pr(B L(G) L(H))/Pr(B) = Pr(B L(C))/Pr(B).

c) Let C = GH. By Theorem 4.3 b), there exist Pr₁ ∈ Mo(B, G) and Pr₂ ∈ Mo(B, H) with Pr₁(B) > 0, Pr₂(B) > 0, u₁ · Pr₁(B) = Pr₁(B L(G)), and x₁ · Pr₂(B) = Pr₂(B L(H)). By Lemma 3.1, we can choose Pr₁ and Pr₂ such that Pr₁(B) = Pr₂(B) and Pr₁(B ¬L(G)) ≤ Pr₂(B L(H)).
By Lemmata A.1 and C.6 b), we can choose the probabilistic interpretation Pr′ over {B, L(G), L(H)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, L(G)} and {B, L(H)}, respectively, such that:

  Pr′(¬B L(G) L(H)) = max(0, Pr₂(¬B L(H)) − Pr₁(¬B ¬L(G))) = 0
  Pr′(B L(G) L(H)) = max(0, Pr₂(B L(H)) − Pr₁(B ¬L(G))).

By Lemma A.2 with ℬ₁ = ℬ(B, G) \ {B}, ℬ₂ = ℬ(B, H) \ {B}, B₀ = B, B₁ = L(G), and B₂ = L(H), there exists a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C) and Pr(B) > 0. By Lemma C.6 b), we get Pr(L(C)) > 0 and

  1 = Pr(B L(G) L(H))/Pr(L(G) L(H)) = Pr(B L(C))/Pr(L(C))
  β₁ = Pr(B L(G) L(H))/Pr(B) = Pr(B L(C))/Pr(B).

d) Let C = GH. By Theorem 3.2, there is a model Pr‴ ∈ Mo(B, C) with Pr‴(B L(C)) > 0. By Theorem 4.3 b), there is a model Pr″ ∈ Mo(B, C) with Pr″(B) > 0 and 0 · Pr″(B) = β₁ · Pr″(B) = Pr″(B L(C)). Hence, there is a model Pr′ ∈ Mo(B, C) with Pr′(B) > 0 and

  min(ε, Pr‴(B L(C))/Pr‴(B)) · Pr′(B) = Pr′(B L(C)).

Let the models Pr₁ ∈ Mo(B, G) and Pr₂ ∈ Mo(B, H) be defined by Pr₁(A₁) = Pr′(A₁) and Pr₂(A₂) = Pr′(A₂) for all atomic events A₁ and A₂ over ℬ(B, G) and ℬ(B, H), respectively. By Lemma 3.1, we can choose Pr₁ and Pr₂ such that Pr₁(B) = Pr₂(B) and Pr₁(B ¬L(G)) ≤ Pr₂(B L(H)).
By Lemmata A.1 and C.6 b), we can choose the probabilistic interpretation Pr′ over {B, L(G), L(H)} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {B, L(G)} and {B, L(H)}, respectively, such that:

  Pr′(¬B L(G) L(H)) = max(0, Pr₂(¬B L(H)) − Pr₁(¬B ¬L(G))) = 0
  Pr′(B L(G) L(H)) = min(ε, Pr‴(B L(C))/Pr‴(B)) · Pr′(B).

By Lemma A.2 with ℬ₁ = ℬ(B, G) \ {B}, ℬ₂ = ℬ(B, H) \ {B}, B₀ = B, B₁ = L(G), and B₂ = L(H), there is a probabilistic interpretation Pr over ℬ(B, C) with (12). Hence, it holds Pr ∈ Mo(B, C), Pr(B) > 0, Pr(L(C)) > 0, Pr(¬B L(C)) = 0, and ε · Pr(B) ≥ Pr(B L(C)). □

Proof of Theorem 4.8. For u₁ > 0, the claim is immediate by Theorem 4.7 a) to c). Let u₁ = 0 and E ⇒ F. It holds 1 · Pr(E) = Pr(EF) for all models Pr of KB. Moreover, by Theorem 3.2, there exists a model Pr of KB with Pr(E) > 0. Let u₁ = 0 and not E ⇒ F. By Theorem 4.3 b), there exists a model Pr of KB with Pr(E) > 0 and Pr(EF) = 0. By Theorem 4.7 d), there exists a model Pr of KB with Pr(E) > 0 and 1 · Pr(E) = Pr(EF). □

Appendix D. Proofs for Section 4.3

In this section, we give the proof of Theorem 4.9. The next lemma will help us to show the global tightness of the computed lower bound in the case (3) of Theorem 4.9 b).

Lemma D.7 Let x ∈ [0, 1] and v, x̄ ∈ [0, 1). For all probabilistic interpretations Pr with Pr(G) > 0, the conditions Pr(EG) = 0, v · Pr(G) = Pr(¬E G), x · Pr(G) = Pr(GF), and x̄ · Pr(G) = Pr(G ¬F) are equivalent to:

  [Table: the probabilities of all atomic events over {E, G, F}, relative to Pr(G), expressed in terms of v, x, and x̄.]

Proof.
The claim can b e v eried b y straigh tforw ard arithmetic transformations based on the prop erties of probabilistic in terpretations. 2 W e are no w ready to pro v e Theorem 4.9. Pro of of Theorem 4.9. a) By the denition of queries to conditional constrain t trees, all paths from a basic ev en t in E to a basic ev en t in F ha v e at least one basic ev en t in common. Hence, w e can c ho ose the basic ev en t G from all suc h basic ev en ts in common suc h that 9 ( G j E )[ z 1 ; z 2 ] is a strongly conclusion-restricted complete query to a subtree. b) F or u 1 > 0, the claim follo ws from Theorem 4.7 a) to c). F or the sp ecial case of exact conditional constrain t trees ( B ; KB ), the claim then follo ws from Theorems 4.3 and 4 : 5. Let u 1 = 0, v 1 = 1, and G ) F . It holds 1  Pr ( E ) = Pr ( EF ) for all mo dels Pr of KB . Moreo v er, b y Theorem 3.2, there exists a mo del Pr of KB with Pr ( E ) > 0. Let u 1 = 0, v 1 = 0, and G ) F . It is easy to see that b y (1) and Theorem 4.7 d), the tigh t upp er answ er is giv en b y f x 2 = 1 g . W e no w sho w that the tigh t lo w er answ er is giv en b y f x 1 = 0 g . By Theorem 4.3 b), there exists a mo del Pr 1 of KB 1 with Pr 1 ( E ) > 0, Pr 1 ( G ) > 0, and Pr 1 ( EG ) = 0. By Theorem 3.2, there exists a mo del Pr 2 of KB 2 with Pr 2 ( G ) > 0. By Lemma 3.1, w e can c ho ose Pr 1 and Pr 2 with Pr 1 ( G ) = Pr 2 ( G ) and Pr 1 ( E G )  Pr 2 ( GF ). 
By Lemmata A.1 and D.7, we can choose the probabilistic interpretation Pr′ over {E, G, F} with Pr′(A₁) = Pr₁(A₁) and Pr′(A₂) = Pr₂(A₂) for all atomic events A₁ and A₂ over {E, G} and {G, F}, respectively, such that:

  Pr′(E G F) = max(0, Pr₂(GF) − Pr₁(¬E G)) = 0
  Pr′(E ¬G F) = 0.

By Lemma A.2, there exists a probabilistic interpretation Pr over ℬ with (12) for all atomic events H₀, H₁, H₂, A₁, and A₂ over the sets of basic events {G}, {E}, {F}, ℬ₁ \ {G}, and ℬ₂ \ {G}, respectively. Hence, Pr is a model of KB with Pr(E) > 0 and Pr(EF) = 0.

For u₁ = 0 and not G ⇒ F, the claim follows from (1) and Theorem 4.7 d). □

References

Adams, E. W. (1975). The Logic of Conditionals, Vol. 86 of Synthese Library. D. Reidel, Dordrecht, Netherlands.
Amarger, S., Dubois, D., & Prade, H. (1991). Constraint propagation with imprecise conditional probabilities. In Proceedings of the 7th Conference on Uncertainty in Artificial Intelligence, pp. 26–34. Morgan Kaufmann.
Andersen, K. A., & Hooker, J. N. (1994). Bayesian logic. Decision Support Systems, 11, 191–210.
Bacchus, F. (1990). Representing and Reasoning with Probabilistic Knowledge: A Logical Approach to Probabilities. MIT Press, Cambridge, USA.
Bacchus, F., Grove, A., Halpern, J. Y., & Koller, D. (1996). From statistical knowledge bases to degrees of beliefs. Artificial Intelligence, 87, 75–143.
Carnap, R. (1950). Logical Foundations of Probability. University of Chicago Press, Chicago.
Coletti, G. (1994). Coherent numerical and ordinal probabilistic assessments. IEEE Transactions on Systems, Man, and Cybernetics, 24(12), 1747–1754.
de Finetti, B. (1974). Theory of Probability. Wiley, New York.
Dubois, D., & Prade, H. (1988). On fuzzy syllogisms.
Computational Intelligence, 4(2), 171–179.
Dubois, D., Prade, H., Godo, L., & de Mántaras, R. L. (1993). Qualitative reasoning with imprecise probabilities. Journal of Intelligent Information Systems, 2, 319–363.
Dubois, D., Prade, H., & Touscas, J.-M. (1990). Inference with imprecise numerical quantifiers. In Ras, Z. W., & Zemankova, M. (Eds.), Intelligent Systems, chap. 3, pp. 53–72. Ellis Horwood.
Fagin, R., Halpern, J. Y., & Megiddo, N. (1990). A logic for reasoning about probabilities. Information and Computation, 87, 78–128.
Frisch, A. M., & Haddawy, P. (1994). Anytime deduction for probabilistic logic. Artificial Intelligence, 69, 93–122.
Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York.
Georgakopoulos, G., Kavvadias, D., & Papadimitriou, C. H. (1988). Probabilistic satisfiability. Journal of Complexity, 4(1), 1–11.
Gilio, A., & Scozzafava, R. (1994). Conditional events in probability assessment and revision. IEEE Transactions on Systems, Man, and Cybernetics, 24(12), 1741–1746.
Halpern, J. Y. (1990). An analysis of first-order logics of probability. Artificial Intelligence, 46, 311–350.
Hansen, P., Jaumard, B., Nguetsé, G.-B. D., & de Aragão, M. P. (1995). Models and algorithms for probabilistic and Bayesian logic. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 1862–1868.
Heinsohn, J. (1994). Probabilistic description logics. In Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann.
Jaumard, B., Hansen, P., & de Aragão, M. P. (1991). Column generation methods for probabilistic logic. ORSA Journal on Computing, 3, 135–147.
Kavvadias, D., & Papadimitriou, C. H. (1990). A linear programming approach to reasoning about probabilities.
Annals of Mathematics and Artificial Intelligence, 1, 189–205.
Lukasiewicz, T. (1996). Precision of Probabilistic Deduction under Taxonomic Knowledge. Doctoral Dissertation, Universität Augsburg.
Lukasiewicz, T. (1997). Efficient global probabilistic deduction from taxonomic and probabilistic knowledge-bases over conjunctive events. In Proceedings of the 6th International Conference on Information and Knowledge Management, pp. 75–82. ACM Press.
Lukasiewicz, T. (1998a). Magic inference rules for probabilistic deduction under taxonomic knowledge. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pp. 354–361. Morgan Kaufmann.
Lukasiewicz, T. (1998b). Many-valued first-order logics with probabilistic semantics. In Proceedings of the Annual Conference of the European Association for Computer Science Logic. To appear.
Lukasiewicz, T. (1998c). Probabilistic deduction with conditional constraints over basic events. In Principles of Knowledge Representation and Reasoning: Proceedings of the 6th International Conference, pp. 380–391. Morgan Kaufmann.
Lukasiewicz, T. (1998d). Probabilistic logic programming. In Proceedings of the 13th European Conference on Artificial Intelligence, pp. 388–392. J. Wiley & Sons.
Lukasiewicz, T. (1999a). Local probabilistic deduction from taxonomic and probabilistic knowledge-bases over conjunctive events. International Journal of Approximate Reasoning. To appear.
Lukasiewicz, T. (1999b). Probabilistic and truth-functional many-valued logic programming. In Proceedings of the 29th IEEE International Symposium on Multiple-Valued Logic. To appear.
Luo, C., Yu, C., Lobo, J., Wang, G., & Pham, T. (1996). Computation of best bounds of probabilities from uncertain data. Computational Intelligence, 12(4), 541–566.
Ng, R. T., & Subrahmanian, V. S. (1993).
A semantical framework for supporting subjective and conditional probabilities in deductive databases. Journal of Automated Reasoning, 10(2), 191–235.
Ng, R. T., & Subrahmanian, V. S. (1994). Stable semantics for probabilistic deductive databases. Information and Computation, 110, 42–83.
Nilsson, N. J. (1986). Probabilistic logic. Artificial Intelligence, 28, 71–88.
Nilsson, N. J. (1993). Probabilistic logic revisited. Artificial Intelligence, 59, 39–42.
Paaß, G. (1988). Probabilistic logic. In Dubois, D., Smets, P., Mamdani, A., & Prade, H. (Eds.), Non-Standard Logics for Automated Reasoning, chap. 8, pp. 213–251. Academic Press.
Papadimitriou, C. H., & Steiglitz, K. (1982). Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Englewood Cliffs, NJ.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
Pittarelli, M. (1994). Anytime decision making with imprecise probabilities. In Proceedings of the 10th Conference on Uncertainty in Artificial Intelligence, pp. 470–477. Morgan Kaufmann.
Schrijver, A. (1986). Theory of Linear and Integer Programming. Wiley, New York.
Thöne, H. (1994). Precise Conclusion under Uncertainty and Incompleteness in Deductive Database Systems. Doctoral Dissertation, Universität Tübingen.
Thöne, H., Kießling, W., & Güntzer, U. (1995). On cautious probabilistic inference and default detachment. Annals of Operations Research, 55, 195–224.
van der Gaag, L. (1991). Computing probability intervals under independency constraints. In Uncertainty in Artificial Intelligence 6, pp. 457–466. North-Holland, Amsterdam.
Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, New York.
