Selectivity in Probabilistic Causality: Where Psychology Runs Into Quantum Physics


Authors: Ehtibar N. Dzhafarov, Janne V. Kujala

Ehtibar N. Dzhafarov* (Purdue University), Janne V. Kujala (University of Jyväskylä)

* Corresponding author: Ehtibar Dzhafarov, Purdue University, Department of Psychological Sciences, 703 Third Street, West Lafayette, IN 47907, USA. email: ehtibar@purdue.edu.

Abstract: Given a set of several inputs into a system (e.g., independent variables characterizing stimuli) and a set of several stochastically non-independent outputs (e.g., random variables describing different aspects of responses), how can one determine, for each of the outputs, which of the inputs it is influenced by? The problem has applications ranging from modeling pairwise comparisons to reconstructing mental processing architectures to conjoint testing. A necessary and sufficient condition for a given pattern of selective influences is provided by the Joint Distribution Criterion, according to which the problem of "what influences what" is equivalent to that of the existence of a joint distribution for a certain set of random variables. For inputs and outputs with finite sets of values this criterion translates into a test of consistency of a certain system of linear equations and inequalities (Linear Feasibility Test), which can be performed by means of linear programming. While new in the behavioral context, both this test and the Joint Distribution Criterion on which it is based have been previously proposed in quantum physics, in dealing with generalizations of Bell inequalities for the quantum entanglement problem. The parallels between this problem and that of selective influences in behavioral sciences are established by observing that noncommuting measurements in quantum physics are mutually exclusive and can therefore be treated as different levels of one and the same factor.

KEYWORDS: Bell-type inequalities, EPR paradigm, factorial design, Fine's inequalities, joint distribution criterion, probabilistic causality, mental architectures, random outputs, selective influences, quantum entanglement, Thurstonian scaling.

1. INTRODUCTION

This paper deals with diagrams of selective influences, like this one:

[diagram (1): an arrow diagram connecting the external factors α, β, γ, δ to the random outputs A, B, C]     (1)

The Greek letters in this diagram represent inputs, or external factors, e.g., parameters of stimuli whose values can be chosen at will or observed and recorded. The capital Roman letters stand for random outputs characterizing reactions of the system (an observer, a group of observers, stock market, a set of photons, etc.). The arrows show which factor influences which random output. The factors are treated as deterministic entities: even if α, β, γ, δ in reality vary randomly (e.g., being randomly generated by a computer program, or being concomitant parameters of observations, such as age of respondents), for the purposes of analyzing selective influences the random outputs A, B, C are always viewed as conditioned upon various combinations of specific values of α, β, γ, δ. The first question to ask is: what is the meaning of the above diagram if the random outputs A, B, C in it are not necessarily stochastically independent? (If they are, the answer is of course trivial.) And once the meaning of the diagram of selective influences is established, how can one determine that this diagram correctly characterizes the dependence of the joint distributions of the random outputs A, B, C on the external factors α, β, γ, δ? These questions are important, because the assumption of stochastic independence of the outputs more often than not is either demonstrably false or adopted for expediency alone, with no other justification.

At the same time the assumption of selectivity in causal relations between inputs and stochastic outputs is ubiquitous in theoretical modeling, often being built into the very language of the models. For instance, in Thurstone's most general model of pairwise comparisons (Thurstone, 1927) it is assumed that each of the two stimuli is mapped into "its" internal representation, while the two representations are stochastically interdependent random entities. In Dzhafarov (2003), Dzhafarov and Gluhovsky (2006), and Kujala and Dzhafarov (2008) the reader may find other motivating applications for the notion of selective influences: same-different comparisons, conjoint testing, parallel-serial networks of mental operations, response time decompositions, and all conceivable combinations of regression analysis and factor analysis. In this paper we add another motivating example, the quantum entanglement problem in quantum physics.

This paper continues and expands the analysis of selective influences presented in Dzhafarov and Kujala (2010). Familiarity with that paper can be helpful, but the main concepts, terminology, and notation are recapitulated in Section 2. Unlike in Dzhafarov and Kujala (2010), however, here we do not pursue the goal of maximal generality of formulations, focusing instead on the conceptual set-up that applies to commonly encountered experimental designs. This means a finite number of factors, each having a finite number of values. It also means that the random outcomes influenced by these factors are random variables in the narrow sense of the word: their values are vectors of real numbers or elements of countable sets, rather than more complex structures, such as functions or sets.
This is done primarily to simplify and shorten exposition, and also because the Linear Feasibility Test, a new (for behavioral sciences) application of the Joint Distribution Criterion on which we focus in this paper (Section 3), is confined to finite sets of finite-valued factors and finite-valued random variables. This also allows us to emphasize a simple but important and previously overlooked proposition, Theorem 2.3, which essentially says that, when dealing with observable random variables, the unobservable random entities of the theory can also be assumed to be random variables (in the narrow sense). In another respect, however, the present treatment is more general than that in Dzhafarov and Kujala (2010): we allow for incomplete designs, those in which some but not necessarily all combinations of the values of the factors serve as allowable treatments. This modification is critical for the possibility of representing any diagram of selective influences, such as (1), in a canonical form, with every random output being selectively influenced by one and only one factor.

As it turns out, both the Linear Feasibility Test and the Joint Distribution Criterion on which it is based have their analogues in quantum physics.[1] To appreciate the analogy, however, one has to adopt the interpretation of noncommuting quantum measurements performed on a given component of a quantum-entangled system as mutually exclusive levels of the same factor. In Sections 2.6 and 3 we discuss the parallels between the existence of a classical explanation for an entanglement situation in quantum mechanics and the adherence of a behavioral experiment to a diagram of selective influences.

The term "test" in this paper is used in the meaning of necessary (sometimes necessary and sufficient) conditions for diagrams of selective influences.
The usage is the same as when we speak of the tests for convergence in calculus or for divisibility in arithmetic. That is, the meaning of the term is non-statistical. We assume that random outputs are known on the population level. General considerations related to statistical tests based on our population-level tests are discussed in Section 3.6, but specific statistical issues are outside the scope of this paper.

2. BASIC NOTIONS

In this section we establish the terminology and notation and recapitulate basic facts related to factors, random variables, and the dependence of the latter on the former. We follow Dzhafarov and Kujala (2010), adding observations related to the factorial designs being incomplete and the random outputs being random variables in the narrow sense of the term. At the end of the section we discuss the parallels between the issue of selective influences in behavioral sciences and the quantum entanglement problem.

[1] We are grateful to Jerome Busemeyer of Indiana University who pointed out to us that the formulation of the Joint Distribution Criterion in our earlier work has the same formal structure as the identically titled criterion in Fine (1982a-b), in his analysis of quantum entanglement.

2.1. Factors, factor points, treatments

A factor α is treated as a set of factor points, each of which has the format "value (or level) x of factor α." In symbols, this can be presented as (x, 'α'), where 'α' is the unique name of the set α rather than the set itself. It is convenient to write x^α in place of (x, 'α'). Thus, if a factor with the name 'intensity' has three levels, 'low,' 'medium,' and 'high,' then this factor is taken to be the set

intensity = {low^intensity, medium^intensity, high^intensity}.
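The set-theoretic format above lends itself to a direct computational encoding. The following sketch (our illustration, not code from the paper; the factor names and the excluded combination are made up) represents a factor point (x, 'α') as a (value, name) pair, a factor as the set of its factor points, and a treatment as a set containing one point per factor, allowing for incomplete designs:

```python
from itertools import product

# Hypothetical encoding: a factor point (x, 'α') is a (value, factor_name)
# pair, and a factor is the set of its factor points.
intensity = {("low", "intensity"), ("medium", "intensity"), ("high", "intensity")}
duration = {("short", "duration"), ("long", "duration")}

# A treatment picks exactly one factor point from each factor; the set T of
# treatments need not be the full cross-product (an incomplete design).
all_treatments = [frozenset(t) for t in product(intensity, duration)]
T = [t for t in all_treatments
     if not ({("low", "intensity"), ("long", "duration")} <= t)]

print(len(all_treatments))  # 6 treatments in the complete crossing
print(len(T))               # 5 after excluding one disallowed combination
```

Encoding the factor name alongside the value mirrors the paper's point that x^α is (value, name) rather than (value, set): two factors may share values without their factor points coinciding.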
There is no circularity here, for, say, the factor point low^intensity stands for (value = low, name = 'intensity') rather than (value = low, set = intensity).

We will deal with finite sets of factors Φ = {α_1, ..., α_m}, with each factor α ∈ Φ consisting of a finite number of factor points, α = {v_1^α, ..., v_{k_α}^α}. Clearly, α ∩ β = ∅ for any distinct α, β ∈ Φ. A treatment, as usual, is defined as a set of factor points containing one factor point from each factor,

φ = {x_1^{α_1}, ..., x_m^{α_m}} ∈ α_1 × ... × α_m.

The set of treatments (used in an experiment or considered in a theory) is denoted by T ⊂ α_1 × ... × α_m and assumed to be nonempty. Note that T need not include all possible combinations of factor points. This is an important consideration in view of the "canonical rearrangement" described below. Also, incompletely crossed designs occur broadly: in an experiment, because the entire set α_1 × ... × α_m may be too large; or in a theory, because certain combinations of factor points may be physically or logically impossible (e.g., contrast and shape cannot be completely crossed if zero is one of the values for contrast).

2.2. Random variables

We assume the reader is familiar with the notion of a random entity (random variable in the general sense of the term) A associated with an observation space (A, Σ), where A is the set of possible values for A and Σ is a sigma-algebra (set of events) on A. A random variable (in the narrow sense) is a special case of a random entity, defined as follows:

(i) if A is countable and Σ is the power set of A, then A is a random variable;
(ii) if A is an interval of reals and Σ is the Lebesgue sigma-algebra on A, then A is a random variable;
(iii) if A_1, ..., A_n are random variables, then any jointly distributed vector (A_1, ..., A_n) whose observation space is the conventionally understood product of the observation spaces for A_1, ..., A_n is a random variable.

We use the relational symbol ∼ in the meaning of "is distributed as." A ∼ B is well defined irrespective of whether A and B are jointly distributed.

Let, for each treatment φ ∈ T, there be a vector of jointly distributed random variables A = (A_1, ..., A_n) with a fixed (product) observation space and a probability measure μ_φ that depends on φ.[2] Then we say that we have a vector of jointly distributed random variables that depends on treatment φ, and write

A(φ) = (A_1, ..., A_n)(φ), φ ∈ T.

A correct way of thinking of A(φ) is that it represents a set of vectors of jointly distributed random variables, each of these vectors being labeled (indexed) by a particular treatment. Any subvector of A(φ) should also be written with the argument φ, say, (A_1, A_2, A_3)(φ). If φ is explicated as φ = {x_1^{α_1}, ..., x_m^{α_m}}, we write A(φ) = A(x_1^{α_1}, ..., x_m^{α_m}).

It is important to note that for distinct treatments φ_1 and φ_2 the corresponding A(φ_1) and A(φ_2) do not possess a joint distribution: they are stochastically unrelated. This is easy to understand: since φ_1 and φ_2 are mutually exclusive conditions for observing values of A, there is no non-arbitrary way of choosing which value a = (a_1, ..., a_n) observed at φ_1 should be paired with which value a′ = (a′_1, ..., a′_n) observed at φ_2. To consider A(φ_1) and A(φ_2) stochastically independent and to pair every possible value of A(φ_1) with every possible value of A(φ_2) is as arbitrary as, say, to consider them positively correlated and to pair every quantile of A(φ_1) with the corresponding quantile of A(φ_2).

2.3. Arrow diagrams, canonically (re)arranged

Given a set of factors Φ = {α_1, ..., α_m} and a vector A(φ) = (A_1, ..., A_n)(φ) of random variables depending on treatment, an arrow diagram is a mapping

M : {1, ..., n} → 2^Φ     (2)

(2^Φ being the set of subsets of Φ). Later, in Definition 2.1, the arrows will be interpreted as indicating selective influences, but for now this is unimportant. The set Φ_i = M(i) (i = 1, ..., n) is referred to as the subset of factors corresponding to A_i. It determines, for any treatment φ ∈ T, the subtreatments φ_{Φ_i} defined as

φ_{Φ_i} = {x^α ∈ φ : α ∈ Φ_i}, i = 1, ..., n.

Subtreatments φ_{Φ_i} across all φ ∈ T can be viewed as admissible values of the subset of factors Φ_i (i = 1, ..., n). Note that φ_{Φ_i} is empty whenever Φ_i is empty. The simplest arrow diagram is bijective, with correspondences

α_1 → A_1, ..., α_n → A_n.     (3)

[2] The convenient assumption of the invariance of the observation space for A with respect to φ is innocuous: one can always redefine the observation spaces for different treatments φ to make them coincide.

We can simplify the subsequent discussion without sacrificing generality by agreeing to reduce each arrow diagram (in the context of selective influences) to a bijective form, by appropriately redefining factors and treatments. It is obvious how this should be done. Given the subsets of factors Φ_1, ..., Φ_n determined by an arrow diagram (2), each Φ_i can be viewed as a factor identified with the set of factor points

α*_i = {(φ_{Φ_i})^{α*_i} : φ ∈ T},

in accordance with the notation we have adopted for factor points: (φ_{Φ_i})^{α*_i} = (φ_{Φ_i}, 'α*_i'). If Φ_i is empty, then φ_{Φ_i} is empty too, and the factor α*_i consists of only the dummy factor point ∅^{α*_i} (where ∅ denotes the empty set). The set of treatments T for the original factors {α_1, ..., α_m} should then be redefined for the vector of new factors (α*_1, ..., α*_n) as

T* = {{(φ_{Φ_1})^{α*_1}, ..., (φ_{Φ_n})^{α*_n}} : φ ∈ T} ⊂ α*_1 × ... × α*_n.

We call this (re)definition of factor points, factors, and treatments the canonical (re)arrangement. We can say that the random variables following canonical (re)arrangement can be indexed by the corresponding factors. Thus, when convenient, we can write in (3) A_{{α_1}} in place of A_1, A_{{α_2}} in place of A_2, etc. The notation φ_{Φ_i} = φ_{{α_i}} then indicates the singleton set {x^{α_i}} ⊂ φ. As usual, we write x^{α_i} in place of {x^{α_i}}:

φ_{{α_i}} = {x_1^{α_1}, ..., x_n^{α_n}}_{{α_i}} = x_i^{α_i}.

2.4. The criterion

Definition 2.1 (Selective influences, bijective form). An arrow diagram (3) is said to be the diagram of selective influences for (A_1, ..., A_n)(φ) and (α_1, ..., α_n), and we write

(A_1, ..., A_n) ↫ (α_1, ..., α_n),

if, for some random entity R and for any treatment φ = {x_1^{α_1}, ..., x_n^{α_n}} ∈ T,

(A_1, ..., A_n)(φ) ∼ (f_1(φ_{{α_1}}, R), ..., f_n(φ_{{α_n}}, R)) = (f_1(x_1^{α_1}, R), ..., f_n(x_n^{α_n}, R)),     (4)

where f_i : α_i × R → A_i (i = 1, ..., n) are some functions, with R denoting the set of possible values of R.[3]

This definition is difficult to put to work, as it refers to the existence of a random entity (variable) R without showing how one can find it or prove that it cannot be found. The following criterion (necessary and sufficient condition) for (A_1, ..., A_n) ↫ (α_1, ..., α_n) circumvents this problem.

[3] It will be shown below, in Theorem 2.3, that the random entity R can always be chosen to be a random variable (in the narrow sense).

Criterion 2.2 (Joint Distribution Criterion, JDC). A vector of random variables A(φ) = (A_1, ..., A_n)(φ) satisfies a diagram of selective influences (3) if and only if there is a vector of jointly distributed random variables

H = (H_{x_1^{α_1}}, ..., H_{x_{k_1}^{α_1}}, ..., H_{x_1^{α_n}}, ..., H_{x_{k_n}^{α_n}}),

one random variable for each factor point of each factor (the first k_1 components for α_1, ..., the last k_n components for α_n), such that

(H_{φ_{{α_1}}}, ..., H_{φ_{{α_n}}}) ∼ A(φ)     (5)

for every treatment φ ∈ T.

See Dzhafarov and Kujala (2010) for a proof. The vector H in the formulation of the JDC is referred to as the JDC-vector for A(φ), or the hypothetical JDC-vector for A(φ) if the existence of such a vector of jointly distributed variables is in question.

The JDC prompts a simple justification for our definition of selective influences. Let, for example, (A, B, C) ↫ (α, β, γ), with α = {1^α, 2^α}, β = {1^β, 2^β, 3^β}, γ = {1^γ, 2^γ, 3^γ, 4^γ}. Consider all treatments φ in which the factor point of α is fixed, say, at 1^α. If (A, B, C) ↫ (α, β, γ), then in the vectors of random variables

(A, B, C)(1^α, 2^β, 1^γ), (A, B, C)(1^α, 2^β, 3^γ), (A, B, C)(1^α, 3^β, 1^γ)

the marginal distribution of the variable A is one and the same,

A(1^α, 2^β, 1^γ) ∼ A(1^α, 2^β, 3^γ) ∼ A(1^α, 3^β, 1^γ).

But the intuition of selective influences requires more: that we can denote this variable A(1^α) because it preserves its identity (and not just its distribution) no matter what other variables it is paired with, (B, C)(2^β, 1^γ), (B, C)(2^β, 3^γ), or (B, C)(3^β, 1^γ). Analogous statements hold for A(2^α), B(2^β), B(3^β), C(1^γ), etc.
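The content of the JDC can be made concrete with a small numerical sketch (our illustration, with made-up numbers, not an example from the paper). For two binary factors and two binary outputs, a JDC-vector is just a joint distribution over the four coordinates (H_1α, H_2α, H_1β, H_2β); the distribution of (A, B) at treatment (iα, jβ) is then the 2-marginal of (H_iα, H_jβ), and marginal selectivity holds automatically because the distribution of A at point iα is the 1-marginal of H_iα, whatever the point of β:

```python
import numpy as np

# Illustrative JDC-vector for factors α = {1α, 2α}, β = {1β, 2β} and binary
# outputs A, B. Axes of `joint`: (H_1α, H_2α, H_1β, H_2β), each in {0, 1}.
rng = np.random.default_rng(0)
joint = rng.dirichlet(np.ones(16)).reshape(2, 2, 2, 2)

def treatment_dist(i, j):
    """Distribution of (A, B) at treatment (iα, jβ): marginal of (H_iα, H_jβ)."""
    axes_to_sum = tuple(k for k in range(4) if k not in (i, 2 + j))
    return joint.sum(axis=axes_to_sum)  # shape (2, 2), ordered as (A, B)

# The distribution of A at 1α is the same at both treatments containing 1α.
d11, d12 = treatment_dist(0, 0), treatment_dist(0, 1)
print(np.allclose(d11.sum(axis=1), d12.sum(axis=1)))  # True
```

The identity-preservation point of the paper is visible here: it is the single coordinate H_1α of one jointly distributed vector that gets paired with H_1β in one treatment and with H_2β in another, rather than two merely identically distributed copies of A.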
The JDC formalizes the intuitive notion of variables "preserving their identity" when entering in various combinations with each other: there are jointly distributed random variables

H_{1^α}, H_{2^α}, H_{1^β}, H_{2^β}, H_{3^β}, H_{1^γ}, H_{2^γ}, H_{3^γ}, H_{4^γ}

whose identity is defined by this joint distribution; when H_{1^α} is combined with the random variables H_{2^β} and H_{1^γ}, it forms the triad (H_{1^α}, H_{2^β}, H_{1^γ}) whose distribution is the same as that of (A, B, C)(1^α, 2^β, 1^γ); when the same random variable H_{1^α} is combined with the random variables H_{2^β} and H_{3^γ}, the triad (H_{1^α}, H_{2^β}, H_{3^γ}) is distributed as (A, B, C)(1^α, 2^β, 3^γ); and so on. The key concept is that it is one and the same H_{1^α} which is being paired with other variables, as opposed to the different random variables A(1^α, 2^β, 1^γ), A(1^α, 2^β, 3^γ), A(1^α, 3^β, 1^γ), which are identically distributed. See Dzhafarov and Kujala (2010) for a demonstration that the identity is not generally preserved if all we know is marginal selectivity (as defined in Section 2.5).

The following is an important consequence of JDC.

Theorem 2.3. In Definition 2.1, the random entity R can always be chosen to be a random variable. Moreover, R can be chosen arbitrarily, as any continuously (atomlessly) distributed random variable, e.g., uniformly distributed between 0 and 1.

Proof. The first statement follows from the fact that R can be chosen to coincide with the JDC-vector H, so that

f_i(x^{α_i}, H) = H_{x^{α_i}}, for i = 1, ..., n and x^{α_i} ∈ α_i.

The JDC-vector H is a random variable. The second statement follows from Theorem 1 in Dzhafarov and Gluhovsky (2006), based on a general result for standard Borel spaces (e.g., in Kechris, 1995, p. 116).

2.5. Three basic properties of selective influences

For completeness, we list three other fundamental consequences of JDC (Dzhafarov & Kujala, 2010).

2.5.1. Nestedness. For any subset {i_1, ..., i_k} of {1, ..., n}, if (A_1, ..., A_n) ↫ (α_1, ..., α_n), then (A_{i_1}, ..., A_{i_k}) ↫ (α_{i_1}, ..., α_{i_k}).

2.5.2. Complete marginal selectivity. For any subset {i_1, ..., i_k} of {1, ..., n}, if (A_1, ..., A_n) ↫ (α_1, ..., α_n), then the k-marginal distribution[4] of (A_{i_1}, ..., A_{i_k})(φ) does not depend on points of the factors outside (α_{i_1}, ..., α_{i_k}). In particular, the distribution of A_i only depends on points of α_i, i = 1, ..., n. This is, of course, a trivial consequence of the nestedness property, but its importance lies in that it provides the easiest-to-check necessary condition for selective influences.

2.5.3. Invariance under factor-point-specific transformations. Let (A_1, ..., A_n) ↫ (α_1, ..., α_n) and let

H = (H_{x_1^{α_1}}, ..., H_{x_{k_1}^{α_1}}, ..., H_{x_1^{α_n}}, ..., H_{x_{k_n}^{α_n}})

be the JDC-vector for (A_1, ..., A_n)(φ). Let F be any function that applies to H componentwise and produces a corresponding vector of random variables

F(H) = (F(x_1^{α_1}, H_{x_1^{α_1}}), ..., F(x_{k_1}^{α_1}, H_{x_{k_1}^{α_1}}), ..., F(x_1^{α_n}, H_{x_1^{α_n}}), ..., F(x_{k_n}^{α_n}, H_{x_{k_n}^{α_n}})),

where we denote by F(x^α, ·) the application of F to the component labeled by x^α. Clearly, F(H) possesses a joint distribution and contains one component for each factor point.

[4] A k-marginal distribution is the distribution of a subset of k random variables (k ≥ 1) in a set of n ≥ k variables. In Townsend and Schweickert (1989) the property was formulated for 1-marginals of a pair of random variables. The adjective "complete" we use with "marginal selectivity" is to emphasize that we deal with all possible marginals rather than with just 1-marginals.
If we now define a vector of random variables B(φ) for every treatment φ ∈ T as

(B_1, ..., B_n)(φ) = (F(φ_{{α_1}}, A_1), ..., F(φ_{{α_n}}, A_n))(φ),

then it follows from JDC that (B_1, ..., B_n) ↫ (α_1, ..., α_n).[5] A function F(x^{α_i}, ·) can be referred to as a factor-point-specific transformation of the random variable A_i, because the random variable is transformed differently for different points of the factor assumed to selectively influence it. We can formulate the property in question by saying that a diagram of selective influences is invariant under all factor-point-specific transformations of the random variables. Note that this includes as a special case transformations which are not factor-point-specific, with

F(x_1^{α_i}, ·) ≡ ... ≡ F(x_{k_i}^{α_i}, ·) ≡ F(α_i, ·).

This property is important for the construction and use of tests for selective influences (Dzhafarov & Kujala, 2010; Kujala & Dzhafarov, 2008).

2.6. Quantum entanglement and selective influences

In psychology, the notion of selective influences was introduced by Sternberg (1969), in the context of studying "stages" of information processing. Sternberg acknowledged that selective influences can hold even if the durations of the stages being selectively affected are not stochastically independent, but he lacked the mathematical apparatus for dealing with this possibility. Townsend (1984) was the first to study the notion of selectiveness under stochastic interdependence systematically.
He proposed to formalize the notion of selectively influenced and stochastically interdependent random variables by the concept of "indirect nonselectiveness": the conditional distribution of the variable A_1, given any value a_2 of the variable A_2, depends on α_1 only, and, by symmetry, the conditional distribution of A_2 at any A_1 = a_1 depends on α_2 only. Under the name of "conditionally selective influence" this notion was mathematically characterized and generalized in Dzhafarov (1999). It turned out, however, that this notion could not serve as a general definition of selective influences, because it did not satisfy some intuitive desiderata for such a definition, e.g., the nestedness and marginal selectivity properties formulated in Section 2.5. Variants of Definition 2.1 of the present paper were proposed in Dzhafarov (2003) and both elaborated and generalized in Dzhafarov and Gluhovsky (2006) and Kujala and Dzhafarov (2008); JDC was explicitly formulated in Dzhafarov and Kujala (2010), although clearly implied in the earlier work.

[5] Since it is possible that F(x^α, H_{x^α}) and F(y^α, H_{y^α}), with x^α ≠ y^α, have different sets of possible values, strictly speaking, one may need to redefine the functions to ensure that the set of possible values for B(φ) is the same for different φ. This is, however, not essential (see footnote 2).

Until very recently (see footnote 1) we were blissfully unaware of the analogous developments in quantum physics. The most conspicuous parallels can be found in Fine (1982a-b), but that work in turn builds on a venerable line of research and thinking, going back first to Bell (1964), and ultimately to Einstein, Podolsky, and Rosen's (1935) paper.
The issue in question regards two "noncommuting" measurements, such as those of the momentum and of the location of a particle, or spin measurements along two different axes. For our purposes it is sufficient to state that when one of two noncommuting measurements is performed (without uncertainty about the result), the second one cannot be performed on the same system. The key insight needed to understand the analogy with the problem of selective influences is this: noncommuting measurements on the same system, being mutually exclusive, can be viewed as levels (mutually exclusive values) of one and the same external factor.

This is not entirely intuitive. Consider two particles, for each of which one can measure its momentum or its location. The analogy requires that one view the measurement on particle 1 as a factor α_1 with two mutually exclusive levels, 1^{α_1} (location measurement) and 2^{α_1} (momentum measurement); the measurement on particle 2 is then a factor α_2 with two mutually exclusive levels, 1^{α_2} and 2^{α_2}, interpreted analogously. The two measurements can be combined in treatments, (1^{α_1}, 1^{α_2}), (1^{α_1}, 2^{α_2}), etc., but not within a factor, (1^{α_1}, 2^{α_1}) or (1^{α_2}, 2^{α_2}). The result of each of the measurements is a random variable, A_1 for particle 1 and A_2 for particle 2. The possible values A_1 for A_1 are possible locations of particle 1 if α_1 is at level 1^{α_1}, but they are possible momentum values for particle 1 if α_1 is at level 2^{α_1} (which makes it awkward but still possible to maintain the convention mentioned in footnote 2). It is easier with spins (Bohm & Aharonov, 1957): for instance, for spin-1/2 particles (such as electrons), A_1 consists of two possible values of spin in one direction if α_1 is at level 1^{α_1}, and of two possible values of spin in another direction if the level is 2^{α_1}.
These two two-element sets are more natural to consider "the same."

With all this in mind, the question now can be posed in the familiar form: can we say that (A_1, A_2) ↫ (α_1, α_2), or can the measurement (factor) α_1 influence the result (random variable) A_2 and/or α_2 influence A_1? In the Einstein-Podolsky-Rosen (EPR) paradigm involving entangled particles, the two random outcomes A_1, A_2 are stochastically interdependent, and their joint distribution at every treatment is (correctly) predicted by quantum theory. The question therefore becomes: are the predicted (and observed) joint distributions of (A_1, A_2) compatible with the hypothesis (A_1, A_2) ↫ (α_1, α_2)? Einstein, Podolsky, and Rosen (1935) took (A_1, A_2) ↫ (α_1, α_2) for granted if the two particles are separated in space and measured simultaneously (in some inertial frame of reference).

Bell's (1964) celebrated theorem shows that (A_1, A_2) ↫ (α_1, α_2) is not the case for entangled spin-1/2 particles obeying the laws of quantum mechanics. The reason this result is considered to be of foundational importance ("the most profound discovery in science," repeating the oft-quoted characterization by Stapp, 1975) is that Bell essentially adopted Definition 2.1 for (A_1, A_2) ↫ (α_1, α_2) and identified the random entity R with the set of all hidden variables of a conceivable theory "explaining" the dependence of (A_1, A_2) on (α_1, α_2): knowing a value of R, one would be able to predict, through the functions f_1 and f_2 of Definition 2.1, the values of (A_1, A_2).
In addition to being called "hidden," the variables entailed in R are referred to as "context-independent" (meaning that the distribution of R and the functions f_1, f_2 do not depend on treatments) and "local" (meaning, essentially, that in the theory involving R and f_1, f_2 the measurement α_1 does not influence A_2, nor does α_2 influence A_1). Bell's (1964) theorem therefore is interpreted as stating that quantum predictions regarding two entangled spin-1/2 particles cannot be explained by any theory involving context-independent and local variables. A rejection of (A_1, A_2) ↫ (α_1, α_2) in quantum physics can be handled by dispensing with locality (Bohm's approach), but most physicists find this untenable (measurement α_1 cannot influence A_2 if they are separated by a space-like interval). The quantum probability theory can be viewed as a way of allowing for context-dependence while retaining locality. In behavioral applications both locality and context-independence can be targeted when (A_1, A_2) ↫ (α_1, α_2) is rejected, and distinguishing the two is a challenge.

Following the logic of Bell's work, Clauser, Horne, Shimony, and Holt (1969) derived a system of inequalities that are necessary conditions for (A_1, A_2) ↫ (α_1, α_2) in the EPR paradigm with two particles and two measurements (factors) with binary outcomes. These inequalities are subsumed in Fine's (1982a-b) ones (discussed in Section 3.5), which present both necessary and sufficient conditions for (A_1, A_2) ↫ (α_1, α_2), based on JDC. The latter was introduced in Fine's papers for the first time (and called by this name too), although the earlier Suppes and Zanotti's (1981) Theorem on Common Causes can also be viewed as a special form of JDC. Fine's inequalities form a special case of the Linear Feasibility Test considered in the next section.
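One member of the CHSH family of inequalities just mentioned can be checked numerically. The sketch below (our illustration, not code from the paper) evaluates |E(1,1) + E(1,2) + E(2,1) - E(2,2)| <= 2, a necessary condition for (A_1, A_2) <- (α_1, α_2) with ±1-valued outputs, against the quantum prediction for the spin-1/2 singlet state, E = -cos(angle between the two measurement axes); the particular axis settings are the standard maximally violating ones:

```python
import numpy as np

def chsh(E):
    """E[i][j]: expectation of A1*A2 at treatment (i-th axis on particle 1,
    j-th axis on particle 2). Selectivity requires the value to be <= 2."""
    return abs(E[0][0] + E[0][1] + E[1][0] - E[1][1])

# Measurement axes: 0 and 90 degrees on particle 1, +45 and -45 on particle 2.
a = [0.0, np.pi / 2]
b = [np.pi / 4, -np.pi / 4]

# Singlet-state prediction of quantum mechanics for each of the 4 treatments.
E = [[-np.cos(a[i] - b[j]) for j in range(2)] for i in range(2)]

print(round(chsh(E), 3))  # 2.828, i.e., 2*sqrt(2) > 2: selectivity is ruled out
```

The value 2√2 is the maximum attainable by quantum mechanics (Tsirelson's bound), while any joint distribution of the four hypothetical variables, and hence any context-independent local account, caps the expression at 2.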
We therefore defer further discussion of the EPR paradigm to Section 3.5, and conclude the present section with the following table of correspondences:

Selective Probabilistic Causality          | Quantum Entanglement Problem (for spins)
observed random output                     | detected spin value of a given particle
factor/input                               | spin measurement on a given particle
factor level                               | setting (axis) of the spin measurement
joint distribution criterion               | joint distribution criterion
canonical diagram of selective influences  | "classical" explanation (by context-independent local variables)

3. LINEAR FEASIBILITY TEST

In this section we assume that for each random variable A_i(φ) in (A_1, ..., A_n)(φ) the set A_i of its possible values has m_i elements, a_{i1}, ..., a_{i m_i}. This is arguably the most important special case, both because it is ubiquitous and because in all other cases the random variables can be discretized into a finite number of categories. We are interested in establishing the truth or falsity of the diagram of selective influences (3), where each factor α_j in (α_1, ..., α_n) contains k_j factor points x^j_1, ..., x^j_{k_j} (written so instead of the more formal x^{α_j}_1, ..., x^{α_j}_{k_j}). The Linear Feasibility Test (LFT) to be described is a direct application of JDC to this situation, furnishing a necessary and sufficient condition for the diagram of selective influences (A_1, ..., A_n) ↫ (α_1, ..., α_n).

3.1. The test

In the hypothetical JDC-vector

H = (H_{x^1_1}, ..., H_{x^1_{k_1}}, ..., H_{x^n_1}, ..., H_{x^n_{k_n}}),

since we assume that, for any point x^i_j of factor α_i and any treatment φ containing x^i_j, H_{x^i_j} ∼ A_i(φ), we know that the set of possible values for the random variable H_{x^i_j} is {a_{i1}, ..., a_{i m_i}}, irrespective of j. Denote

Pr[A_1 = a_{1l_1}, ..., A_n = a_{nl_n} | x^1_{j_1}, ..., x^n_{j_n}] = P(l_1, ..., l_n; j_1, ..., j_n),   (6)

where l_i ∈ {1, ..., m_i} and j_i ∈ {1, ..., k_i} for i = 1, ..., n; the first n arguments of P index values of the random variables ("r.v.s"), the last n index factor points. Denote

Pr[H_{x^1_1} = a_{1l_{11}}, ..., H_{x^1_{k_1}} = a_{1l_{1k_1}}, ..., H_{x^n_1} = a_{nl_{n1}}, ..., H_{x^n_{k_n}} = a_{nl_{nk_n}}] = Q(l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}),   (7)

where l_{ij} ∈ {1, ..., m_i} for i = 1, ..., n. This gives us m_1^{k_1} × ... × m_n^{k_n} Q-probabilities. A required joint distribution for the JDC-vector H exists if and only if these probabilities can be found subject to m_1^{k_1} × ... × m_n^{k_n} nonnegativity constraints

Q(l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) ≥ 0,   (8)

and (denoting by n_T the number of treatments in T) n_T × m_1 × ... × m_n linear equations

∑ Q(l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) = P(l_1, ..., l_n; j_1, ..., j_n),   (9)

where the summation is across all possible values of (l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) subject to l_{1j_1} = l_1, ..., l_{nj_n} = l_n. (Footnote 6: The sum of all Q's is 1 because it equals the sum of all P's, across all l_1, ..., l_n, for any given treatment j_1, ..., j_n.)

This can be more compactly formulated in matrix form. Let the observable probabilities P(l_1, ..., l_n; j_1, ..., j_n) constitute the components of an n_T × m_1 × ... × m_n-dimensional column vector P, with its cells lexicographically enumerated by (l_1, ..., l_n; j_1, ..., j_n). Let the hypothetical probabilities Q(l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) constitute the components of an m_1^{k_1} × ... × m_n^{k_n}-dimensional column vector Q, with its cells lexicographically enumerated by (l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}). Let M be a Boolean matrix with n_T × m_1 × ... × m_n rows and m_1^{k_1} × ... × m_n^{k_n} columns, lexicographically enumerated in the same way as, respectively, P and Q, such that the entry in the cell in the (l_1, ..., l_n; j_1, ..., j_n)th row and (l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n})th column is 1 if l_{1j_1} = l_1, ..., l_{nj_n} = l_n; otherwise the entry is 0. Clearly, the vector Q exists if and only if the system

MQ = P, Q ≥ 0   (10)

(with the inequality understood componentwise) has a solution. This is a typical linear programming (LP) problem. More precisely, it is an LP task in the standard form with a dummy objective function (e.g., a linear combination with zero coefficients). It is known (Karmarkar, 1984; Khachiyan, 1979) that it is always possible, in polynomial time, to either find a solution for such a system or to determine that no solution exists. Many standard software packages can handle this problem (e.g., the GNU Linear Programming Kit at http://www.gnu.org/software/glpk/).

3.2. Properties of the LP problem

The rank of the matrix M is always strictly smaller than the number of components in P. This follows from the fact that for any two allowable treatments (j_1, ..., j_n) and (j'_1, ..., j'_n) that share a subvector (j_{1'}, ..., j_{s'}) = (j'_{1'}, ..., j'_{s'}) (where we use {1', ..., s'} to designate s distinct elements chosen from {1, ..., n}), and for any fixed (v_1, ..., v_s), the sum of all rows of M corresponding to the (l_1, ..., l_n; j_1, ..., j_n)th components of P with (l_{1'}, ..., l_{s'}) = (v_1, ..., v_s) is the same Boolean vector as the sum of all rows of M corresponding to the (l_1, ..., l_n; j'_1, ..., j'_n)th components of P with the same property. The upper limit for the rank of M is given in the following theorem.

Theorem 3.1. The rank of M for a maximal set of treatments T = α_1 × ... × α_n is (k_1(m_1 − 1) + 1) ⋯ (k_n(m_n − 1) + 1).

Proof. Given any {1', ..., s'} ⊂ {1, ..., n}, (j_{1'}, ..., j_{s'}) ∈ {1, ..., k_{1'}} × ... × {1, ..., k_{s'}}, (l_{1'}, ..., l_{s'}) ∈ {1, ..., m_{1'}} × ... × {1, ..., m_{s'}}, let V(1', ..., s'; j_{1'}, ..., j_{s'}; l_{1'}, ..., l_{s'}) denote an m_1^{k_1} ⋯ m_n^{k_n}-component Boolean row vector whose components are lexicographically enumerated in the same way as Q, and such that its (l_{11}, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n})th component is 1 if and only if l_{1'j_{1'}} = l_{1'}, ..., l_{s'j_{s'}} = l_{s'}. The rows of the matrix M are the vectors V(1, ..., n; j_1, ..., j_n; l_1, ..., l_n). It is easy to check that for any fixed (1', ..., s'; j_{1'}, ..., j_{s'}), the sum of the rows of M corresponding to fixed values (l_{1'}, ..., l_{s'}) is V(1', ..., s'; j_{1'}, ..., j_{s'}; l_{1'}, ..., l_{s'}). It follows that for s = n, n − 1, ..., 1, a vector V(1', ..., s'; j_{1'}, ..., j_{s'}; l_{1'}, ..., l_{s'}) in which all l_{i'} = 1 except for i' ∈ {1'', ..., v''} ⊂ {1', ..., s'} (a subset of v < s distinct elements) is a linear combination of the vector V(1'', ..., v''; j_{1''}, ..., j_{v''}; l_{1''}, ..., l_{v''}) and all the vectors V(1', ..., s'; j_{1'}, ..., j_{s'}; l_{1'}, ..., l_{s'}) for which all l_{i'} > 1 and {j_{1''}, ..., j_{v''}; l_{1''}, ..., l_{v''}} ⊂ {j_{1'}, ..., j_{s'}; l_{1'}, ..., l_{s'}}. As a result, the rows of M are linear combinations of the rows of M* consisting of the vectors V(1', ..., s'; j_{1'}, ..., j_{s'}; l_{1'}, ..., l_{s'}) for all possible {1', ..., s'} ⊂ {1, ..., n}, (j_{1'}, ..., j_{s'}) ∈ {1, ..., k_{1'}} × ... × {1, ..., k_{s'}}, (l_{1'}, ..., l_{s'}) ∈ {2, ..., m_{1'}} × ... × {2, ..., m_{s'}}. By straightforward combinatorics, the number of such vectors is (k_1(m_1 − 1) + 1) ⋯ (k_n(m_n − 1) + 1). The rows of M* are linearly independent because the column corresponding to the (l_{11} = 1, ..., l_{1k_1} = 1, ..., l_{n1} = 1, ..., l_{nk_n} = 1)th component of Q contains a single 1, in the row of M* corresponding to s = 0 (which row contains 1's only). ∎

Note that k_i(m_i − 1) + 1 < m_i^{k_i} for all k_i ≥ 2 and m_i ≥ 2. This means that (k_1(m_1 − 1) + 1) ⋯ (k_n(m_n − 1) + 1) < m_1^{k_1} ⋯ m_n^{k_n}, and the system MQ = P is always underdetermined.

Corollary 3.2. If P satisfies marginal selectivity, then system (10) is equivalent to

M*Q = P*, Q ≥ 0,   (11)

where M* is as defined in the proof above, and P* is the "reduced hierarchical" vector with components

Pr[A_{1'} = a_{1'l_{1'}}, ..., A_{s'} = a_{s'l_{s'}} | x^{1'}_{j_{1'}}, ..., x^{s'}_{j_{s'}}] = P*_{1',...,s'}(l_{1'}, ..., l_{s'}; j_{1'}, ..., j_{s'}),   (12)

where s = 0, ..., n, {1', ..., s'} ⊂ {1, ..., n}, and l_{i'} ∈ {2, ..., m_{i'}} for each i' ∈ {1', ..., s'}. M* is of full row rank.

To comment on this corollary: it follows from the proof of Theorem 3.1 that MQ = P never has a solution if the vector P violates the equality

∑ Pr[A_1 = a_{1l_1}, ..., A_n = a_{nl_n} | x^1_{j_1}, ..., x^n_{j_n}] = ∑ Pr[A_1 = a_{1l_1}, ..., A_n = a_{nl_n} | x^1_{j'_1}, ..., x^n_{j'_n}],

where the summation is across all values of (l_1, ..., l_n) with a fixed (l_{1'}, ..., l_{s'}). Clearly, this necessary condition is just another way of stating marginal selectivity.
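The structure of M and the rank formula of Theorem 3.1 can be checked directly in the smallest case. The following sketch (ours, not the authors' code; names are ad hoc) builds M for n = 2 factors with k_1 = k_2 = 2 points and m_1 = m_2 = 2 values, under a maximal treatment set, and verifies that its rank is (k_1(m_1 − 1) + 1)(k_2(m_2 − 1) + 1) = 9, strictly smaller than the 16 components of P:

```python
# A sketch (not from the paper): build the Boolean matrix M of Section 3.1
# for n = 2, k1 = k2 = 2, m1 = m2 = 2, and check Theorem 3.1's rank formula.
from fractions import Fraction
from itertools import product

k, m = 2, 2                                             # points per factor, values per variable
treatments = list(product(range(k), repeat=2))          # (j1, j2), maximal set T
cells = [(l1, l2, j1, j2) for (j1, j2) in treatments
         for (l1, l2) in product(range(m), repeat=2)]   # row index of M
columns = list(product(range(m), repeat=2 * k))         # (l11, l12, l21, l22)

# M[row][col] = 1 iff the H-values in the column agree with the observed cell:
# l_{1 j1} = l1 and l_{2 j2} = l2.
M = [[1 if (col[j1] == l1 and col[k + j2] == l2) else 0 for col in columns]
     for (l1, l2, j1, j2) in cells]

assert len(M) == 16 and len(M[0]) == 16
assert all(sum(row) == 4 for row in M)        # m1^(k1-1) * m2^(k2-1) ones per row
assert all(sum(col) == 4 for col in zip(*M))  # one cell per treatment, n_T = 4 ones per column

def rank(rows):
    """Exact rank by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank(M) == (k * (m - 1) + 1) ** 2      # = 9, as Theorem 3.1 predicts
```

The rank deficit (9 versus 16) illustrates why the system MQ = P is always underdetermined, and why marginal selectivity accounts precisely for the redundant rows.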
Assuming that P does satisfy marginal selectivity, it can be represented by the "reduced hierarchical" vector P* whose components are marginal probabilities of all orders, with s = 0 corresponding to the probability 1.

3.3. Examples

Example 3.3. Let α = {1_α, 2_α}, β = {1_β, 2_β}, and let the set of allowable treatments T consist of all four possible combinations of the factor points. Let A and B be Bernoulli variables, a_1 = b_1 = 1, a_2 = b_2 = 2, distributed as shown:

α β   A B   Pr
1 1   1 1   .140
      1 2   .360
      2 1   .360
      2 2   .140

1 2   1 1   .198
      1 2   .302
      2 1   .302
      2 2   .198

2 1   1 1   .189
      1 2   .311
      2 1   .311
      2 2   .189

2 2   1 1   .460
      1 2   .040
      2 1   .040
      2 2   .460

Marginal selectivity here is satisfied trivially: all marginal probabilities equal 0.5, for all treatments. In the matrix form of the LFT, the column vector of the above 16 probabilities, (.140, .360, .360, ..., .040, .040, .460)^⊤, using ⊤ for transposition, is denoted by P. The LFT problem is defined by the system MQ = P, Q ≥ 0, where the 16 × 16 Boolean matrix M is shown below: each column of the matrix corresponds to a combination of values for the hypothetical H-variables (shown above the matrix), while each row corresponds to a combination of a treatment with values of the outputs A, B (shown on the left).
H_{1α}    1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
H_{2α}    1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
H_{1β}    1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
H_{2β}    1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2

α β A B
1 1 1 1   1 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0
1 1 1 2   0 0 1 1 0 0 1 1 0 0 0 0 0 0 0 0
1 1 2 1   0 0 0 0 0 0 0 0 1 1 0 0 1 1 0 0
1 1 2 2   0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 1
1 2 1 1   1 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0
1 2 1 2   0 1 0 1 0 1 0 1 0 0 0 0 0 0 0 0
1 2 2 1   0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0
1 2 2 2   0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1
2 1 1 1   1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0
2 1 1 2   0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0
2 1 2 1   0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0
2 1 2 2   0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1
2 2 1 1   1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0
2 2 1 2   0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0
2 2 2 1   0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0
2 2 2 2   0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1

The linear programming routine of Mathematica™ (using the interior point algorithm) shows that the linear equations (9) have nonnegative solutions, corresponding to the JDC-vector

H_{1α} H_{2α} H_{1β} H_{2β}   Pr
1      1      1      1        .02708610
1      1      1      2        .00239295
1      1      2      1        .16689300
1      1      2      2        .03358610
1      2      1      1        .00197965
1      2      1      2        .10854100
1      2      2      1        .00204128
1      2      2      2        .15748000
2      1      1      1        .15748000
2      1      1      2        .00204128
2      1      2      1        .10854100
2      1      2      2        .00197965
2      2      1      1        .03358610
2      2      1      2        .16689300
2      2      2      1        .00239295
2      2      2      2        .02708610

The column vector of these probabilities constitutes Q > 0. This proves that in this case we do have (A, B) ↫ (α, β).

Example 3.4. In the previous example, let us change the distributions of (A, B) to the following:

α β   A B   Pr
1 1   1 1   .450
      1 2   .050
      2 1   .050
      2 2   .450

1 2   1 1   .105
      1 2   .395
      2 1   .395
      2 2   .105

2 1   1 1   .170
      1 2   .330
      2 1   .330
      2 2   .170

2 2   1 1   .110
      1 2   .390
      2 1   .390
      2 2   .110

Once again, marginal selectivity is satisfied trivially, as all marginal probabilities are 0.5 for all treatments.
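Before turning to the solver's verdict for Example 3.4, note that the JDC-vector reported for Example 3.3 can be verified independently of any LP software: marginalizing the tabulated Q according to each treatment must reproduce the observed probabilities. A sketch of this check (ours, not the authors' code):

```python
# Direct check of Example 3.3: the tabulated JDC-vector probabilities Q,
# marginalized according to each treatment, reproduce the 16 observable
# probabilities P up to rounding.
Q = {  # keys are (H_1alpha, H_2alpha, H_1beta, H_2beta), values from the table
    (1,1,1,1): .02708610, (1,1,1,2): .00239295, (1,1,2,1): .16689300,
    (1,1,2,2): .03358610, (1,2,1,1): .00197965, (1,2,1,2): .10854100,
    (1,2,2,1): .00204128, (1,2,2,2): .15748000, (2,1,1,1): .15748000,
    (2,1,1,2): .00204128, (2,1,2,1): .10854100, (2,1,2,2): .00197965,
    (2,2,1,1): .03358610, (2,2,1,2): .16689300, (2,2,2,1): .00239295,
    (2,2,2,2): .02708610,
}

P = {  # observed Pr[A=a, B=b | treatment (x, y)] from Example 3.3
    (1,1): {(1,1): .140, (1,2): .360, (2,1): .360, (2,2): .140},
    (1,2): {(1,1): .198, (1,2): .302, (2,1): .302, (2,2): .198},
    (2,1): {(1,1): .189, (1,2): .311, (2,1): .311, (2,2): .189},
    (2,2): {(1,1): .460, (1,2): .040, (2,1): .040, (2,2): .460},
}

for (x, y), cells in P.items():
    for (a, b), p in cells.items():
        # under treatment (x, y), A is read off H_{x alpha} and B off H_{y beta}
        q = sum(v for h, v in Q.items() if h[x - 1] == a and h[2 + y - 1] == b)
        assert abs(q - p) < 1e-4, ((x, y), (a, b), q, p)
```

All 16 equations of (9) hold to within the rounding of the tabulated solution.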
The linear programming routine of Mathematica™, however, shows that the linear equations (9) have no nonnegative solutions. This excludes the existence of a JDC-vector for this situation, thereby ruling out the possibility of (A, B) ↫ (α, β).

3.4. Renaming and grouping

Since LFT is both a necessary and sufficient condition for selective influences, if it is passed for (A_1, ..., A_n)(φ), it is guaranteed to be passed following any factor-point-specific transformations of these random outputs. All such transformations in the case of discrete random variables can be described as combinations of renaming (factor-point-specific) and coarsening (grouping of some values together). In fact, the outcome of LFT simply does not depend on the values of the random variables involved; only their probabilities matter. Therefore a renaming will not change anything in the system of linear equations and inequalities (8)-(9). An example of coarsening would be redefining A and B, each having possible values 1, 2, 3, 4, into binary variables

A*(φ) = 1 if A(φ) = 1, 2;  A*(φ) = 2 if A(φ) = 3, 4,
B*(φ) = 1 if B(φ) = 1, 2, 3;  B*(φ) = 2 if B(φ) = 4.

It is clear that any such redefinition amounts to replacing some of the equations in (9) with their sums. Therefore, if the original system has a solution, so will the system after such replacements. Of course, the reverse is not generally true: the coarser system can have solutions when the original system does not.

The same is true for coarsening the system by grouping together some of the factor points within factors. Suppose we want to group together points x^1_1 and x^1_2 of factor α_1 containing more than two points. This means that the probabilities P(l_1, l_2, ..., l_n; j_1, j_2, ..., j_n) are redefined as (Footnote 7)

P′(l_1, l_2, ..., l_n; j_1, j_2, ..., j_n) =
  (1/2) P(l_1, l_2, ..., l_n; 1, j_2, ..., j_n) + (1/2) P(l_1, l_2, ..., l_n; 2, j_2, ..., j_n)   if j_1 = 1,
  P(l_1, l_2, ..., l_n; j_1 + 1, j_2, ..., j_n)   if j_1 > 1.

When we average the original equations for P(l_1, l_2, ..., l_n; 1, j_2, ..., j_n) and P(l_1, l_2, ..., l_n; 2, j_2, ..., j_n), we get

∑ [ (1/2) ∑_{l_12} Q(l_11 = l_1, l_12, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) + (1/2) ∑_{l_11} Q(l_11, l_12 = l_1, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) ] = P′(l_1, l_2, ..., l_n; 1, j_2, ..., j_n),

where l_{2j_2} = l_2, ..., l_{nj_n} = l_n, and the outer summation is across all l_{ij} except for the following values of (i, j): (1, 1), (1, 2), and (i, j_i), i = 2, ..., n. We define a new vector Q′ whose dimensionality is less than that of Q by one, putting

Q′(l_11 = l, l_13, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) = (1/2) ∑_{l_12} Q(l_11 = l, l_12, l_13, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) + (1/2) ∑_{l_11} Q(l_11, l_12 = l, l_13, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}),

where l has the same range as any of the l_{1j}. (For notational simplicity, in Q′ we do not re-enumerate (1, 3) as (1, 2), (1, 4) as (1, 3), etc., thereby leaving l_12 undefined.) For any point of factor α_1 other than x^1_1 and x^1_2, say x^1_3, we then have

∑_{l_11, l_12} Q(l_11, l_12, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) = P(l_1, l_2, ..., l_n; 3, j_2, ..., j_n),

which can be presented as

∑ ∑_l [ (1/2) ∑_{l_12} Q(l_11 = l, l_12, l_13 = l_1, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) + (1/2) ∑_{l_11} Q(l_11, l_12 = l, l_13 = l_1, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) ] = P(l_1, l_2, ..., l_n; 3, j_2, ..., j_n).

This is equivalent to

∑ Q′(l_11, l_13 = l_1, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) = P′(l_1, l_2, ..., l_n; j_1 = 2, j_2, ..., j_n),

where l_{2j_2} = l_2, ..., l_{nj_n} = l_n, and the summation is across all l_{ij} except for (i, j) = (1, 3) and (i, j) = (i, j_i), i = 2, ..., n. So we have obtained a solution for the factor-coarsened system from a solution for the original system.

(Footnote 7: More general mixtures, πP(l_1, l_2, ..., l_n; 1, j_2, ..., j_n) + (1 − π)P(l_1, l_2, ..., l_n; 2, j_2, ..., j_n) for 0 < π ≤ 1, are dealt with as easily; moreover, π = 1 formally corresponds to dropping the factor point x^1_2, considered below. Values of π other than 1/2 and 1 can be useful if the grouping is done on the sample level, to reflect differences in the sample sizes corresponding to treatments containing x^1_1 and x^1_2.)

Dropping a point, say x^1_2, is even simpler: we delete all rows with j_1 = 2, and then redefine the Q vector as

Q′(l_11, l_13, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}) = ∑_{l_12} Q(l_11, l_12, l_13, ..., l_{1k_1}, ..., l_{n1}, ..., l_{nk_n}).

If the random variables involved have more than a finite number of values and/or the factors consist of more than a finite number of factor points, or if these numbers, though finite, are too large to handle the ensuing linear programming problem, then LFT can still be used after the values of the random variables and/or factors have been appropriately grouped. LFT then becomes only a necessary condition for selective influences (with respect to the original system of factors and random variables), and its results will generally differ for different (non-nested) groupings.

Example 3.5. Consider the hypothesis (A, B) ↫ (α, β) with the factors having a finite number of factor points each, and A and B being response times.
To use LFT, one can transform the random variable A as, say,

A*(φ) = 1 if A(φ) ≤ a_{1/4}(φ); 2 if a_{1/4}(φ) < A(φ) ≤ a_{1/2}(φ); 3 if a_{1/2}(φ) < A(φ) ≤ a_{3/4}(φ); 4 if A(φ) > a_{3/4}(φ),

and transform B as

B*(φ) = 1 if B(φ) ≤ b_{1/2}(φ); 2 if B(φ) > b_{1/2}(φ),

where a_p(φ) and b_p(φ) designate the pth quantiles of, respectively, A(φ) and B(φ). The initial hypothesis is now reformulated as (A*, B*) ↫ (α, β), with the understanding that if it is rejected then the initial hypothesis will be rejected too (a necessary condition only). LFT will now be applied to distributions of the form

α β   A B   Pr
x y   1 1   p_11
      1 2   p_12
      ...   ...
      4 1   p_41
      4 2   p_42

where the marginals for A are constrained to 0.25 and the marginals for B to 0.5, for all treatments (x_α, y_β), yielding a trivial compliance with marginal selectivity. Note that the test may very well uphold (A*, B*) ↫ (α, β) even if marginal selectivity is violated for (A, B)(φ) (e.g., if the quantiles a_p(x_α, y_β) change as a function of y_β).

3.5. Quantum entanglement

Fine's (1982a-b) inequalities relate to the simplest EPR paradigm, with the number of particles n = 2, the number of spin axes per particle k_1 = k_2 = 2, and the number of possible spin values per particle m_1 = m_2 = 2 (this value being the same for all spin axes chosen for a given particle). They can be written, with reference to (6) and (12), as

−1 ≤ P(2, 2; j_1, j_2) + P(2, 2; j′_1, j_2) + P(2, 2; j′_1, j′_2) − P(2, 2; j_1, j′_2) − P*_1(2; j′_1) − P*_2(2; j_2) ≤ 0,

where j_1, j′_1 ∈ {1, 2}, j_2, j′_2 ∈ {1, 2}, j_1 ≠ j′_1, j_2 ≠ j′_2. These inequalities constitute the necessary and sufficient conditions for (A_1, A_2) ↫ (α_1, α_2), with marginal selectivity assumed implicitly.
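Since in this 2 × 2 binary case Fine's inequalities are equivalent to LFT, the two examples of Section 3.3 can be re-checked without any LP solver. In the sketch below (ours; the helper function is ad hoc), the marginals P*_1(2; ·) and P*_2(2; ·) all equal 0.5, as in both examples:

```python
# Evaluating Fine's inequalities for Examples 3.3 and 3.4, where all
# marginal probabilities equal 0.5.
from itertools import permutations

def fine_ok(P22, marg=0.5):
    """True iff -1 <= P(2,2;j1,j2) + P(2,2;j1',j2) + P(2,2;j1',j2')
    - P(2,2;j1,j2') - P*_1(2;j1') - P*_2(2;j2) <= 0 for all j1 != j1', j2 != j2'."""
    for (j1, j1p) in permutations((1, 2)):
        for (j2, j2p) in permutations((1, 2)):
            e = (P22[(j1, j2)] + P22[(j1p, j2)] + P22[(j1p, j2p)]
                 - P22[(j1, j2p)] - marg - marg)
            if not (-1 <= e <= 0):
                return False
    return True

# Pr[A=2, B=2] at the four treatments (alpha, beta):
P22_ex33 = {(1,1): .140, (1,2): .198, (2,1): .189, (2,2): .460}  # Example 3.3
P22_ex34 = {(1,1): .450, (1,2): .105, (2,1): .170, (2,2): .110}  # Example 3.4

assert fine_ok(P22_ex33)      # consistent with the solvable LP of Example 3.3
assert not fine_ok(P22_ex34)  # the infeasible LP of Example 3.4 shows up here
```

For Example 3.4 the combination j_1 = 1, j′_1 = 2, j_2 = 2, j′_2 = 1 yields .105 + .110 + .170 − .450 − 1 = −1.065 < −1, violating the lower bound, in agreement with the LP infeasibility found above.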
Although Fine's derivation of these inequalities is different, they can be derived as solutions of system (11), with P* the 9-component vector (using ⊤ for transposition)

(1, P*_1(2; 1), ..., P*_2(2; 2), P(2, 2; 1, 1), ..., P(2, 2; 2, 2))^⊤,

Q the 16-component vector (Q(1, 1, 1, 1), ..., Q(2, 2, 2, 2))^⊤, and M* the corresponding 9 × 16 Boolean matrix:

H_{1α}    1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
H_{2α}    1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
H_{1β}    1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
H_{2β}    1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2

α β A B
· · · ·   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 · 2 ·   0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
2 · 2 ·   0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
· 1 · 2   0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
· 2 · 2   0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
1 1 2 2   0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 1
1 2 2 2   0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1
2 1 2 2   0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1
2 2 2 2   0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1

In fact, using a standard facet enumeration program (e.g., the lrs program at http://cgm.cs.mcgill.ca/~avis/C/lrs.html), these inequalities (together with the equalities representing marginal selectivity) can be derived "mechanically." The essence of the computation is in the fact that a linear system (10) or (11) is feasible if and only if the point P (respectively, P*) belongs to the convex hull of the points corresponding to the columns of M (respectively, M*), which form a subset of the vertices of a unit hypercube. The facet enumeration programs derive the inequalities describing this convex hull.

Given a set of numerical (experimentally estimated or theoretical) probabilities, computing the LP problem (10) or (11) is always preferable to dealing with explicit inequalities, as their number becomes very large even for moderate-size vectors P.
While Fine's inequalities for n = 2, k_1 = k_2 = 2, m_1 = m_2 = 2 (assuming marginal selectivity) number just 8, already for n = 2, k_1 = k_2 = 2 with m_1 = m_2 = 3 (describing, e.g., an EPR experiment with two spin-1 particles, or two spin-1/2 ones and inefficient detectors), our computations yield 1080 inequalities, and for n = 3, k_1 = k_2 = k_3 = 2 and m_1 = m_2 = m_3 = 2, corresponding to the Greenberger, Horne, & Zeilinger (1989) paradigm with three spin-1/2 particles, this number is 53792.

The potential of JDC to lead to LFT and provide an ultimate criterion for the entanglement problem was not utilized in quantum physics until relatively recently, when LFT was proposed in Werner & Wolf (2001a, b) and Basoalto & Percival (2003). Prior to this, criteria (as opposed to just necessary conditions) for the possibility of a classical explanation of an EPR paradigm involving multiple particles, multiple measurement settings, and multiple outcomes per measurement were only known under strong symmetry constraints (de Barros & Suppes, 2001; Garg, 1983; Mermin, 1990; Peres, 1999).

3.6. Sample-level tests

Although this paper is not concerned with statistical questions, it may be useful to mention some of the approaches to constructing sample-level tests based on LFT. As mentioned in Section 3.5, the set of vectors P for which the system MQ = P, Q ≥ 0 has a solution forms a convex polytope. In particular, if the set T of allowable treatments contains all combinations of factor points, the polytope is the ((k_1(m_1 − 1) + 1) ⋯ (k_n(m_n − 1) + 1) − 1)-dimensional convex hull of the points corresponding to the columns of the Boolean matrix M, which form a subset of the vertices of the n_T × m_1 × ... × m_n-dimensional unit hypercube.
Recently, Davis-Stober (2009) developed a statistical theory for testing the hypothesis that a vector of probabilities P (not necessarily of the same structure as in LFT) belongs to a convex polytope ℙ against the hypothesis that it does not. Under certain regularity constraints he derived the asymptotic distribution (a convex mixture of chi-square distributions) for the log maximum likelihood ratio statistic

−2 log [ max_{P ∈ ℙ} L(P | N) / max_P L(P | N) ],

where N is the vector of observed absolute frequencies, comprised of the numbers of occurrences of (l_1, ..., l_n; j_1, ..., j_n) in the case of LFT. The likelihoods L(P | N) are computed using the standard theory of multinomial distributions. This theory has been "test-driven" on the polytopes related to the transitivity-of-preferences problem (Regenwetter, Dana, & Davis-Stober, 2010, 2011). A Bayesian approach to the same problem is presented in Myung, Karabatsos, & Iverson (2005).

Other approaches readily suggest themselves. One of them is to use the known theory of L(P | N)/max_P L(P | N) to compute a confidence region of possible probability vectors P for a given empirical vector N. The hypothesis of selective influences is then retained or rejected according to whether this confidence region contains a point P that passes LFT. Resampling techniques are another obvious approach, e.g., the permutation test in which the assignment of empirical distributions to different treatments is randomly "reshuffled," so that each distribution generally ends up assigned to a "wrong" treatment. If the proportion of permuted assignments whose deviation from the LFT polytope does not exceed that of the observed estimate of P is sufficiently small, the hypothesis of selective influences can be considered supported.
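The permutation test just described can be sketched in a few lines for the 2 × 2 binary case. The sketch below is our construction, not the authors': it uses the maximal violation of Fine's inequalities (with marginals 0.5) as a stand-in for "deviation from the LFT polytope," and exhaustively reshuffles the four distributions, summarized by Pr[A = 2, B = 2], over the four treatments (the choice of deviation measure is an assumption; any distance to the polytope would serve):

```python
# Minimal sketch of the permutation test for the 2x2 binary case, using the
# largest Fine-inequality violation as the deviation measure (an assumption).
from itertools import permutations

TREATMENTS = [(1, 1), (1, 2), (2, 1), (2, 2)]

def deviation(P22):
    """Largest amount by which a Fine inequality (marginals 0.5) is violated."""
    worst = 0.0
    for (j1, j1p) in permutations((1, 2)):
        for (j2, j2p) in permutations((1, 2)):
            e = (P22[(j1, j2)] + P22[(j1p, j2)] + P22[(j1p, j2p)]
                 - P22[(j1, j2p)] - 1.0)     # the two 0.5 marginals lumped as 1.0
            worst = max(worst, e, -1.0 - e)  # violation of either bound of [-1, 0]
    return worst

observed = [.450, .105, .170, .110]          # Pr[A=2, B=2] from Example 3.4
d_obs = deviation(dict(zip(TREATMENTS, observed)))

# proportion of reshuffled assignments deviating no more than the observed one;
# a small proportion would support selective influences
reshuffled = [deviation(dict(zip(TREATMENTS, perm)))
              for perm in permutations(observed)]
p = sum(d <= d_obs for d in reshuffled) / len(reshuffled)

assert d_obs > 0        # Example 3.4 lies outside the polytope
assert 0.0 < p <= 1.0
```

In a real application the reshuffling would be done over random resamples rather than all 24 assignments, and the deviation would be computed from the LP itself; this toy version only illustrates the logic.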
Little is known at present about the computational feasibility and statistical properties of these and similar procedures. In particular (this also applies to Davis-Stober's test), we do not know their statistical power for different locations of the true vector of probabilities outside the convex polytope described by MQ = P, Q ≥ 0. Nor do we know how the effect size, a measure of deviation of P from the polytope, should be computed optimally. All of this will have to be investigated separately.

4. CONCLUSION

Selectiveness in the influences exerted by a set of inputs upon a set of random and stochastically interdependent outputs is a critical feature of many psychological models, often built into the very language of these models. We speak of an internal representation of a given stimulus, as separate from an internal representation of another stimulus, even if these representations are considered random entities and are not independent. We speak of decompositions of response time into signal-dependent and signal-independent components, or into a perceptual stage (influenced by stimuli) and a memory-search stage (influenced by the number of memorized items), without necessarily assuming that the two components or stages are stochastically independent.

In this paper, we have described the Linear Feasibility Test, an application of the fundamental Joint Distribution Criterion for selective influences to random variables with finite numbers of values. This test can be performed by means of standard linear programming. Because any random output can be discretized, the Linear Feasibility Test is universally applicable, although one should keep in mind that if a diagram of selective influences is upheld by the test at some discretization, it may be rejected at a finer or non-nested discretization (but not at a coarser one).
Both the Joint Distribution Criterion and the Linear Feasibility Test, although new in the behavioral context, have direct analogues in quantum physics, in dealing with the problem of the existence of a classical explanation (one with non-contextual, local hidden variables) for outcomes of noncommuting measurements performed on entangled particles. The discovery of these parallels promises to enrich and facilitate our understanding of selective influences.

Acknowledgments

This research has been supported by AFOSR grant FA9550-09-1-0252 to Purdue University and by Academy of Finland grant 121855 to the University of Jyväskylä. We are indebted to Joseph Houpt and Jerome Busemeyer, whose comments helped us to significantly improve the paper.

REFERENCES

Basoalto, R. M., & Percival, I. C. (2003). BellTest and CHSH experiments with more than two settings. Journal of Physics A: Mathematical & General, 36, 7411-7423.

Bell, J. (1964). On the Einstein-Podolsky-Rosen paradox. Physics, 1, 195-200.

Bohm, D., & Aharonov, Y. (1957). Discussion of experimental proof for the paradox of Einstein, Rosen and Podolsky. Physical Review, 108, 1070-1076.

Clauser, J. F., Horne, M. A., Shimony, A., & Holt, R. A. (1969). Proposed experiment to test local hidden-variable theories. Physical Review Letters, 23, 880-884.

Davis-Stober, C. P. (2009). Analysis of multinomial models under inequality constraints: Applications to measurement theory. Journal of Mathematical Psychology, 53, 1-13.

de Barros, J. A., & Suppes, P. (2001). Results for six detectors in a three-particle GHZ experiment. In J. Bricmont, D. Dürr, M. C. Galavotti, G. Ghirardi, F. Petruccione, & N. Zanghi (Eds.), Chance in Physics: Foundations and Perspectives (pp. 213-223). Berlin: Springer.

Dzhafarov, E. N. (1999). Conditionally selective dependence of random variables on external factors. Journal of Mathematical Psychology, 43, 123-157.

Dzhafarov, E. N. (2003a). Selective influence through conditional independence. Psychometrika, 68, 7-26.

Dzhafarov, E. N., & Gluhovsky, I. (2006). Notes on selective influence, probabilistic causality, and probabilistic dimensionality. Journal of Mathematical Psychology, 50, 390-401.

Dzhafarov, E. N., & Kujala, J. V. (2010). The Joint Distribution Criterion and the Distance Tests for selective probabilistic causality. Frontiers in Quantitative Psychology and Measurement, 1:151. doi: 10.3389/fpsyg.2010.00151.

Einstein, A., Podolsky, B., & Rosen, N. (1935). Can quantum-mechanical description of physical reality be considered complete? Physical Review, 47, 777-780.

Fine, A. (1982a). Joint distributions, quantum correlations, and commuting observables. Journal of Mathematical Physics, 23, 1306-1310.

Fine, A. (1982b). Hidden variables, joint probability, and the Bell inequalities. Physical Review Letters, 48, 291-295.

Garg, A. (1983). Detector error and Einstein-Podolsky-Rosen correlations. Physical Review D, 28, 785-790.

Greenberger, D. M., Horne, M. A., & Zeilinger, A. (1989). Going beyond Bell's theorem. In M. Kafatos (Ed.), Bell's Theorem, Quantum Theory and Conceptions of the Universe (pp. 69-72). Dordrecht: Kluwer.

Karmarkar, N. (1984). A new polynomial-time algorithm for linear programming. Combinatorica, 4, 373-395.

Kechris, A. S. (1995). Classical Descriptive Set Theory. New York: Springer.

Khachiyan, L. (1979). A polynomial algorithm in linear programming. Doklady Akademii Nauk SSSR, 244, 1093-1097.

Kujala, J. V., & Dzhafarov, E. N. (2008). Testing for selectivity in the dependence of random variables on external factors. Journal of Mathematical Psychology, 52, 128-144.

Mermin, N. D. (1990). Extreme quantum entanglement in a superposition of macroscopically distinct states. Physical Review Letters, 65, 1838-1840.

Myung, J. I., Karabatsos, G., & Iverson, G. J. (2005). A Bayesian approach to testing decision making axioms. Journal of Mathematical Psychology, 49, 205-225.

Peres, A. (1999). All the Bell inequalities. Foundations of Physics, 29, 589-614.

Regenwetter, M., Dana, J., & Davis-Stober, C. P. (2010). Testing transitivity of preferences on two-alternative forced choice data. Frontiers in Quantitative Psychology and Measurement, 1:148. doi: 10.3389/fpsyg.2010.00148.

Regenwetter, M., Dana, J., & Davis-Stober, C. P. (2011). Transitivity of preferences. Psychological Review, 118, 42-56.

Stapp, H. P. (1975). Bell's theorem and world process. Nuovo Cimento B, 29, 270-276.

Sternberg, S. (1969). The discovery of processing stages: Extensions of Donders' method. In W. G. Koster (Ed.), Attention and Performance II. Acta Psychologica, 30, 276-315.

Suppes, P., & Zanotti, M. (1981). When are probabilistic explanations possible? Synthese, 48, 191-199.

Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34, 273-286.

Townsend, J. T. (1984). Uncovering mental processes with factorial experiments. Journal of Mathematical Psychology, 28, 363-400.

Townsend, J. T., & Schweickert, R. (1989). Toward the trichotomy method of reaction times: Laying the foundation of stochastic mental networks. Journal of Mathematical Psychology, 33, 309-327.

Werner, R. F., & Wolf, M. M. (2001a). All multipartite Bell correlation inequalities for two dichotomic observables per site. arXiv:quant-ph/0102024v1.

Werner, R. F., & Wolf, M. M. (2001b). Bell inequalities and entanglement. arXiv:quant-ph/0107093v2.
