A Separation of NP and coNP in Multiparty Communication Complexity
Authors: Dmitry Gavinsky, Alexander A. Sherstov
A Separation of NP and coNP in Multiparty Communication Complexity

Dmitry Gavinsky∗        Alexander A. Sherstov†

Abstract

We prove that NP ≠ coNP and coNP ⊄ MA in the number-on-forehead model of multiparty communication complexity for up to k = (1 − ε) log n players, where ε > 0 is any constant. Specifically, we construct a function F : ({0,1}^n)^k → {0,1} with co-nondeterministic complexity O(log n) and Merlin-Arthur complexity n^{Ω(1)}. The problem was open for k ≥ 3.

1 Introduction

The number-on-forehead model of multiparty communication complexity [CFL] features k communicating players whose goal is to compute a given distributed function. More precisely, one considers a Boolean function F : ({0,1}^n)^k → {−1,+1} whose arguments x_1, ..., x_k ∈ {0,1}^n are placed on the foreheads of players 1 through k, respectively. Thus, player i sees all the arguments except for x_i. The players communicate by writing bits on a shared blackboard, visible to all. Their goal is to compute F(x_1, ..., x_k) with minimum communication.

The multiparty model has found a variety of applications, including circuit complexity, pseudorandomness, and proof complexity [Y, HG, BNS, RW, BPS]. This model draws its richness from the overlap in the players' inputs, which makes it challenging to prove lower bounds. Several fundamental questions in the multiparty model remain open despite much research.

∗ NEC Laboratories America Inc., 4 Independence Way, Suite 200, Princeton, NJ 08540.
† Microsoft Research, Cambridge, MA 02142. Email: sherstov@cs.utexas.edu

1.1 Previous Work and Our Results

The k-party number-on-forehead model naturally gives rise to the complexity classes NP^cc_k, coNP^cc_k, BPP^cc_k, and MA^cc_k, corresponding to communication problems F : ({0,1}^n)^k → {−1,+1} with efficient nondeterministic, co-nondeterministic, randomized, and Merlin-Arthur protocols, respectively. An efficient protocol is one with communication cost log^{O(1)} n. Determining the exact relationships among these classes is a natural goal in complexity theory. For example, it had been open to show that nondeterministic protocols can be more powerful than randomized ones, for k ≥ 3 players. This problem was recently solved in [LS, CA] for up to k = (1 − o(1)) log₂ log₂ n players, and later strengthened in [DP] to k = (1 − ε) log₂ n players, where ε > 0 is any given constant. An explicit separation for the latter case was obtained in [DPV].

The contribution of this paper is to relate the power of nondeterministic, co-nondeterministic, and Merlin-Arthur protocols. For k = 2 players, the relations among these models are well understood [KN, K2]: it is known that coNP^cc_2 ≠ NP^cc_2 and further that coNP^cc_2 ⊄ MA^cc_2. Starting at k = 3, however, it has been open to even separate NP^cc_k and coNP^cc_k. Our main result is that coNP^cc_k ⊄ MA^cc_k for up to k = (1 − ε) log₂ n players, where ε > 0 is an arbitrary constant. The separation is by an explicitly given function. In particular, our work shows that NP^cc_k ≠ coNP^cc_k and also subsumes the separation in [DP, DPV], since NP^cc_k ⊆ MA^cc_k and BPP^cc_k ⊆ MA^cc_k.

Let the symbols N(F), N(−F), and MA(F) denote the nondeterministic, co-nondeterministic, and Merlin-Arthur complexity of F in the k-party number-on-forehead model.

Theorem 1.1 (Main Result). Let k ≤ (1 − ε) log₂ n, where ε > 0 is any given constant.
Then there is an (explicitly given) function F : ({0,1}^n)^k → {−1,+1} with N(−F) = O(log n) and MA(F) = n^{Ω(1)}. In particular, coNP^cc_k ⊄ MA^cc_k and NP^cc_k ≠ coNP^cc_k.

It is a longstanding open problem to exhibit a function with nontrivial multiparty complexity for k ≥ log₂ n players. Therefore, the separation in Theorem 1.1 is state-of-the-art with respect to the number of players.

The proof of Theorem 1.1, to be described shortly, is based on the pattern matrix method [S1, S2] and its multiparty generalization in [DPV]. In the final section of this paper, we revisit several other multiparty generalizations [C, LS, CA, BH] of the pattern matrix method. By applying our techniques in these other settings, we are able to obtain similar exponential separations by functions as simple as constant-depth circuits. However, these new separations only hold up to k = ε log n players, unlike the separation in Theorem 1.1.

1.2 Previous Techniques

Perhaps the best-known method for communication lower bounds, both in the number-on-forehead multiparty model and various two-party models, is the discrepancy method [KN]. The method consists in exhibiting a distribution P with respect to which the function F of interest has negligible discrepancy, i.e., negligible correlation with all low-cost protocols. A more powerful technique is the generalized discrepancy method [K1, R3]. This method consists in exhibiting a distribution P and a function H such that, on the one hand, the function F of interest is well correlated with H with respect to P, but on the other hand, H has negligible discrepancy with respect to P. In practice, considerable effort is required to find suitable P and H and to analyze the resulting discrepancies.
In particular, no strong bounds were available on the discrepancy or generalized discrepancy of constant-depth AC⁰ circuits. The recent pattern matrix method [S1, S2] solves this problem for AC⁰ and a large family of other matrices. More specifically, the method uses standard analytic properties of Boolean functions (such as approximate degree or threshold degree) to determine the discrepancy and generalized discrepancy of the associated communication problems.

Originally formulated in [S1, S2] for the two-party model, the pattern matrix method has been adapted to the multiparty model by several authors [C, LS, CA, DP, DPV, BH]. The first adaptation of the method to the multiparty model gave improved lower bounds for the multiparty disjointness function [LS, CA]. This line of work was combined in [DP, DPV] with probabilistic arguments to separate the classes NP^cc_k and BPP^cc_k for up to k = (1 − ε) log₂ n players, by an explicit function. A new paper [BH] gives polynomial lower bounds for constant-depth circuits, in the model with up to k = ε log n players. Further details on this body of research and other duality-based approaches [SZ] can be found in the survey article [S3].

1.3 Our Approach

To obtain our main result, we combine the work in [DP, DPV] with several new ideas. First, we derive a new criterion for high nondeterministic communication complexity, inspired by the Klauck–Razborov generalized discrepancy method [K1, R3]. Similar to Klauck–Razborov, we also look for a hard function H that is well correlated with the function F of interest, but we additionally quantify the agreement of H and F on the set F⁻¹(−1). This agreement ensures that F⁻¹(−1) does not have a small cover by cylinder intersections, thus placing F outside NP^cc_k.
To handle the more powerful Merlin-Arthur model, we combine this development with an earlier technique [K2] for proving lower bounds against two-party Merlin-Arthur protocols.

In keeping with the philosophy of the pattern matrix method, we then reformulate the agreement requirement for H and F as a suitable analytic property of the underlying Boolean function f and prove this property directly, using linear programming duality. The function f in question happens to be OR.

Finally, we apply our program to the specific function F constructed in [DPV] for the purpose of separating NP^cc_k and BPP^cc_k. Since F has small nondeterministic complexity by design, the proof of our main result is complete once we apply our machinery to −F and derive a lower bound on MA(−F).

1.4 Organization

We start in Section 2 with relevant technical preliminaries and standard background on multiparty communication complexity. In Section 3, we review the original discrepancy method, the generalized discrepancy method, and the pattern matrix method. In Section 4, we derive the new criterion for high nondeterministic and Merlin-Arthur communication complexity. The proof of Theorem 1.1 comes next, in Section 5. In the final section of the paper, we explore some implications of this work in light of other multiparty papers [C, LS, CA, BH].

2 Preliminaries

We view Boolean functions as mappings X → {−1,+1}, where X is a finite set such as X = {0,1}^n or X = {0,1}^n × {0,1}^n. We identify −1 and +1 with "true" and "false," respectively. The notation [n] stands for the set {1, 2, ..., n}. For integers N, n with N ≥ n, the symbol \binom{[N]}{n} denotes the family of all size-n subsets of {1, 2, ..., N}. For a string x ∈ {−1,+1}^N and a set S ∈ \binom{[N]}{n}, we define x|_S = (x_{i_1}, x_{i_2}, ..., x_{i_n}) ∈ {−1,+1}^n, where i_1 < i_2 < ··· < i_n are the elements of S. For x ∈ {0,1}^n, we write |x| = x_1 + ··· + x_n. Throughout this manuscript, "log" refers to the logarithm to base 2. For a function f : X → ℝ, where X is an arbitrary finite set, we write ‖f‖_∞ = max_{x∈X} |f(x)|. We will need the following observation regarding discrete probability distributions on the hypercube, cf. [S1].

Proposition 2.1. Let μ(x) be a probability distribution on {0,1}^n. Fix i_1, ..., i_n ∈ {1, 2, ..., n}. Then

    Σ_{x∈{0,1}^n} μ(x_{i_1}, ..., x_{i_n}) ≤ 2^{n − |{i_1,...,i_n}|}.

For functions f, g : X_1 × ··· × X_k → ℝ (where X_i is a finite set, i = 1, 2, ..., k), we define ⟨f, g⟩ = Σ_{(x_1,...,x_k)} f(x_1, ..., x_k) g(x_1, ..., x_k). When f and g are vectors or matrices, this is the standard definition of inner product. The Hadamard product of f and g is the tensor f ∘ g : X_1 × ··· × X_k → ℝ given by (f ∘ g)(x_1, ..., x_k) = f(x_1, ..., x_k) g(x_1, ..., x_k).

The symbol ℝ^{m×n} refers to the family of all m × n matrices with real entries. The (i, j)th entry of a matrix A is denoted by A_{ij}. In most matrices that arise in this work, the exact ordering of the columns (and rows) is irrelevant. In such cases, we describe a matrix using the notation [F(i, j)]_{i∈I, j∈J}, where I and J are some index sets.

We conclude with a review of the Fourier transform over ℤ₂ⁿ. Consider the vector space of functions {0,1}^n → ℝ, equipped with the inner product ⟨f, g⟩ = 2^{−n} Σ f(x) g(x). For S ⊆ [n], define χ_S : {0,1}^n → {−1,+1} by χ_S(x) = (−1)^{Σ_{i∈S} x_i}. Then {χ_S}_{S⊆[n]} is an orthonormal basis for the inner product space in question.
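The orthonormality claim is easy to verify numerically. The sketch below is ours (not part of the paper): it checks ⟨χ_S, χ_T⟩ = δ_{S,T} for n = 3 under the normalized inner product just defined, and computes the Fourier coefficients of a sample function (AND on 3 bits, in the paper's convention that −1 means "true").

```python
from itertools import combinations, chain

n = 3
cube = [tuple((x >> i) & 1 for i in range(n)) for x in range(2 ** n)]
subsets = list(chain.from_iterable(combinations(range(n), r) for r in range(n + 1)))

def chi(S, x):
    """Character chi_S(x) = (-1)^{sum_{i in S} x_i}."""
    return (-1) ** sum(x[i] for i in S)

def ip(f, g):
    """Normalized inner product <f,g> = 2^{-n} sum_x f(x) g(x)."""
    return sum(f(x) * g(x) for x in cube) / 2 ** n

# Orthonormality: <chi_S, chi_T> = 1 if S = T, and 0 otherwise.
for S in subsets:
    for T in subsets:
        assert ip(lambda x: chi(S, x), lambda x: chi(T, x)) == (1 if S == T else 0)

# Fourier coefficients of a sample function: f = AND on 3 bits
# (-1 means "true", so f(x) = -1 exactly when all bits are 1).
f = lambda x: -1 if all(x) else +1
coeffs = {S: ip(f, lambda x, S=S: chi(S, x)) for S in subsets}

# The expansion sum_S f^(S) chi_S recovers f exactly at every point.
for x in cube:
    assert abs(sum(coeffs[S] * chi(S, x) for S in subsets) - f(x)) < 1e-9
```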
As a result, every function f : {0,1}^n → ℝ has a unique representation of the form f = Σ_{S⊆[n]} f̂(S) χ_S, where f̂(S) = ⟨f, χ_S⟩. The reals f̂(S) are called the Fourier coefficients of f. The following fact is immediate from the definition of f̂(S):

Proposition 2.2. Fix f : {0,1}^n → ℝ. Then

    max_{S⊆[n]} |f̂(S)| ≤ 2^{−n} Σ_{x∈{0,1}^n} |f(x)|.

2.1 Communication Complexity

An excellent reference on communication complexity is the monograph by Kushilevitz and Nisan [KN]. In this overview, we will limit ourselves to key definitions and notation. The simplest model of communication in this work is the two-party randomized model. Consider a function F : X × Y → {−1,+1}, where X and Y are finite sets. Alice receives an input x ∈ X, Bob receives y ∈ Y, and their objective is to predict F(x, y) with high accuracy. To this end, Alice and Bob share a communication channel and have an unlimited supply of shared random bits. Alice and Bob's protocol is said to have error ε if on every input (x, y), the computed output differs from the correct answer F(x, y) with probability no greater than ε. The cost of a given protocol is the maximum number of bits exchanged on any input. The randomized communication complexity of F, denoted R_ε(F), is the least cost of an ε-error protocol for F. It is standard practice to use the shorthand R(F) = R_{1/3}(F). Recall that the error probability of a protocol can be decreased from 1/3 to any other positive constant at the expense of increasing the communication cost by a constant factor. We will use this fact in our proofs without further mention.

A generalization of two-party communication is the multiparty number-on-forehead model of communication. Here one considers a function F : X_1 × ··· × X_k → {−1,+1} for some finite sets X_1, ..., X_k.
There are k players. A given input (x_1, ..., x_k) ∈ X_1 × ··· × X_k is distributed among the players by placing x_i on the forehead of player i (for i = 1, ..., k). In other words, player i knows x_1, ..., x_{i−1}, x_{i+1}, ..., x_k but not x_i. The players communicate by writing bits on a shared blackboard, visible to all. They additionally have access to a shared source of random bits. Their goal is to devise a communication protocol that will allow them to accurately predict the value of F on every input. Analogous to the two-party case, the randomized communication complexity R_ε(F) is the least cost of an ε-error communication protocol for F in this model, and R(F) = R_{1/3}(F).

Another model in this paper is the number-on-forehead nondeterministic model. As before, one considers a function F : X_1 × ··· × X_k → {−1,+1} for some finite sets X_1, ..., X_k. An input from X_1 × ··· × X_k is distributed among the k players as before. At the start of the protocol, c_1 unbiased nondeterministic bits appear on the shared blackboard. Given the values of those bits, the players behave deterministically, exchanging an additional c_2 bits by writing them on the blackboard. A nondeterministic protocol for F must output the correct answer for at least one nondeterministic choice of the c_1 bits when F(x_1, ..., x_k) = −1 and for all possible choices when F(x_1, ..., x_k) = +1. The cost of a nondeterministic protocol is defined as c_1 + c_2. The nondeterministic communication complexity of F, denoted N(F), is the least cost of a nondeterministic protocol for F. The co-nondeterministic communication complexity of F is the quantity N(−F).

The number-on-forehead Merlin-Arthur model combines the power of the randomized and nondeterministic models.
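As a toy illustration of the nondeterministic model (our example, not the paper's), consider the k-party function that is "true" (−1) when some coordinate position carries a 1 on every forehead; this is the complement of multiparty disjointness. A nondeterministic protocol guesses such a position j (c_1 = ⌈log n⌉ bits) and verifies it with c_2 = 2 written bits: player 1 checks every string except the one on her own forehead, and player 2 checks the remaining one. The sketch below simulates this and exhaustively checks the acceptance condition for k = 3, n = 3.

```python
from itertools import product

def F(inputs):
    """k-party 'non-disjointness': -1 ("true") iff some position j
    is 1 in every player's string."""
    n = len(inputs[0])
    return -1 if any(all(x[j] for x in inputs) for j in range(n)) else +1

def nondet_branch(inputs, j):
    """One nondeterministic branch: the guessed index j is on the
    blackboard.  Player 1 sees every input except inputs[0]; player 2
    sees every input except inputs[1].  Each writes one bit."""
    bit1 = all(x[j] for x in inputs[1:])   # player 1 checks x_2, ..., x_k
    bit2 = inputs[0][j] == 1               # player 2 checks x_1
    return bit1 and bit2                   # accept iff both bits are 1

def nondet_accepts(inputs):
    n = len(inputs[0])
    return any(nondet_branch(inputs, j) for j in range(n))

# Acceptance condition: some branch accepts iff F = -1, and no branch
# accepts when F = +1.  Checked exhaustively for k = 3 players, n = 3.
k, n = 3, 3
for inputs in product(product((0, 1), repeat=n), repeat=k):
    assert nondet_accepts(inputs) == (F(inputs) == -1)
print("protocol correct on all", 2 ** (k * n), "inputs")
```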
Similar to the nondeterministic case, the protocol starts with a nondeterministic guess of c_1 bits, followed by c_2 bits of communication. However, the communication can be randomized, and the requirement is that the error probability be at most ε for at least one nondeterministic choice when F(x_1, ..., x_k) = −1 and for all possible nondeterministic choices when F(x_1, ..., x_k) = +1. The cost of a protocol is defined as c_1 + c_2. The Merlin-Arthur communication complexity of F, denoted MA_ε(F), is the least cost of an ε-error Merlin-Arthur protocol for F. We put MA(F) = MA_{1/3}(F). Clearly, MA(F) ≤ min{N(F), R(F)} for every F.

Analogous to computational complexity, one defines BPP^cc_k, NP^cc_k, coNP^cc_k, and MA^cc_k as the classes of functions F : ({0,1}^n)^k → {−1,+1} with complexity log^{O(1)} n in the randomized, nondeterministic, co-nondeterministic, and Merlin-Arthur models, respectively.

3 Generalized Discrepancy and Pattern Matrices

A common tool for proving communication lower bounds is the discrepancy method. Given a function F : X × Y → {−1,+1} and a distribution μ on X × Y, the discrepancy of F with respect to μ is defined as

    disc_μ(F) = max_{S⊆X, T⊆Y} | Σ_{x∈S} Σ_{y∈T} μ(x, y) F(x, y) |.

This definition generalizes to the multiparty case as follows. Consider a function F : X_1 × ··· × X_k → {−1,+1} and a distribution μ on X_1 × ··· × X_k. The discrepancy of F with respect to μ is defined as

    disc_μ(F) = max_χ | Σ_{(x_1,...,x_k) ∈ X_1×···×X_k} μ(x_1, ..., x_k) F(x_1, ..., x_k) χ(x_1, ..., x_k) |,

where the maximum ranges over functions χ : X_1 × ··· × X_k → {0,1} of the form

    χ(x_1, ..., x_k) = ∏_{i=1}^{k} φ_i(x_1, ..., x_{i−1}, x_{i+1}, ..., x_k)     (3.1)

for some φ_i : X_1 × ··· × X_{i−1} × X_{i+1} × ··· × X_k → {0,1}, i = 1, 2, ..., k. A function χ of the form (3.1) is called a rectangle for k = 2 and a cylinder intersection for k ≥ 3. Note that for k = 2, the multiparty definition of discrepancy agrees with the one given earlier for the two-party model. We put disc(F) = min_μ disc_μ(F).

Discrepancy is difficult to analyze as defined. Typically, one uses the following estimate, derived by repeated applications of the Cauchy-Schwarz inequality.

Theorem 3.1 ([BNS, CT, R1]). Fix F : X_1 × ··· × X_k → {−1,+1} and a distribution μ on X_1 × ··· × X_k. Put ψ(x_1, ..., x_k) = F(x_1, ..., x_k) μ(x_1, ..., x_k). Then

    ( disc_μ(F) / (|X_1| ··· |X_k|) )^{2^{k−1}}
        ≤ E_{x_1^0, x_1^1 ∈ X_1} ··· E_{x_{k−1}^0, x_{k−1}^1 ∈ X_{k−1}} | E_{x_k ∈ X_k} ∏_{z∈{0,1}^{k−1}} ψ(x_1^{z_1}, ..., x_{k−1}^{z_{k−1}}, x_k) |.

In the case of k = 2 parties, there are other ways to estimate the discrepancy, including the spectral norm of the matrix (e.g., see [S2]).

For a function F : X_1 × ··· × X_k → {−1,+1} and a distribution μ over X_1 × ··· × X_k, let D^μ_ε(F) denote the least cost of a deterministic protocol for F whose probability of error with respect to μ is at most ε. This quantity is known as the μ-distributional complexity of F. Since a randomized protocol can be viewed as a probability distribution over deterministic protocols, we immediately have that R_ε(F) ≥ max_μ D^μ_ε(F). We are now ready to state the discrepancy method.

Theorem 3.2 (Discrepancy method; see [KN]). For every F : X_1 × ··· × X_k → {−1,+1}, every distribution μ on X_1 × ··· × X_k, and every 0 < γ ≤ 1,

    R_{1/2 − γ/2}(F) ≥ D^μ_{1/2 − γ/2}(F) ≥ log( γ / disc_μ(F) ).

In words, a function with small discrepancy is hard to compute to any nontrivial advantage over random guessing, let alone to high accuracy.
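For intuition (this example is ours, not the paper's), discrepancy can be computed by brute force in small two-party cases. The sketch below evaluates disc_μ(F) for the inner-product function on 3-bit strings under the uniform distribution; for each set S it suffices to pick the T that keeps all positive (or all negative) column sums, so only the 2^|X| choices of S need to be enumerated. Lindsey's lemma guarantees the answer is at most 2^{−n/2}.

```python
from itertools import product

n = 3  # inner-product function IP_n on n-bit strings
X = list(product((0, 1), repeat=n))

def F(x, y):
    """IP_n(x, y) = (-1)^{<x,y> mod 2}."""
    return (-1) ** (sum(a * b for a, b in zip(x, y)) % 2)

mu = 1 / (len(X) ** 2)  # uniform distribution on X x Y
psi = {(x, y): mu * F(x, y) for x in X for y in X}

# disc_mu(F) = max over rectangles S x T of |sum_{S x T} psi|.
disc = 0.0
for mask in range(1, 2 ** len(X)):
    S = [X[i] for i in range(len(X)) if mask >> i & 1]
    col = [sum(psi[(x, y)] for x in S) for y in X]
    # Optimal T: keep the positive columns, or keep the negative ones.
    best = max(sum(v for v in col if v > 0), -sum(v for v in col if v < 0))
    disc = max(disc, best)

print(f"disc under uniform distribution for IP_{n}: {disc:.4f}")
assert 0 < disc <= 2 ** (-n / 2)  # Lindsey's lemma bound
```

By Theorem 3.2, any protocol computing this function with advantage γ over random guessing must therefore communicate at least log(γ/disc) bits.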
3.1 Generalized Discrepancy Method

The discrepancy method is particularly strong in that it gives communication lower bounds not only for bounded-error protocols but also for protocols with error vanishingly close to 1/2. This strength of the discrepancy method is at once a weakness. For example, the disjointness function disj(x, y) = ⋁_{i=1}^{n} (x_i ∧ y_i) has a randomized protocol with error 1/2 − Ω(1/n) and communication O(log n). As a result, the disjointness function has high discrepancy, and no strong lower bounds can be obtained for it via the discrepancy method. Yet it is well known that disj has communication complexity Θ(n) in the randomized model [KS, R2] and Ω(√n) in the quantum model [R3] and Merlin-Arthur model [K2].

The generalized discrepancy method is an extension of the traditional discrepancy method that avoids the difficulty just cited. This technique was first applied by Klauck [K1] and reformulated in its current form by Razborov [R3]. The development in [K1, R3] takes place in the quantum model of communication. However, the same idea works in a variety of models, as illustrated in [S2]. The version of the generalized discrepancy method for the two-party randomized model is as follows.

Theorem 3.3 ([S2, §2.4]). Fix a function F : X × Y → {−1,+1} and 0 ≤ ε < 1/2. Then for all functions H : X × Y → {−1,+1} and all probability distributions P on X × Y,

    R_ε(F) ≥ log( (⟨F, H ∘ P⟩ − 2ε) / disc_P(H) ).

The usefulness of Theorem 3.3 stems from its applicability to functions that have efficient protocols with error close to random guessing, such as 1/2 − Ω(1/n) for the disjointness function. Note that one recovers Theorem 3.2, the ordinary discrepancy method, by setting H = F in Theorem 3.3.

Proof of Theorem 3.3 (adapted from [S2], pp. 88–89). Put c = R_ε(F).
A public-coin protocol with cost c can be thought of as a probability distribution on deterministic protocols with cost at most c. In particular, there are random variables χ_1, χ_2, ..., χ_{2^c} : X × Y → {0,1}, each a rectangle, as well as random variables σ_1, σ_2, ..., σ_{2^c} ∈ {−1,+1}, such that

    ‖ F − E[ Σ σ_i χ_i ] ‖_∞ ≤ 2ε.

Therefore,

    ⟨ F − E[ Σ σ_i χ_i ], H ∘ P ⟩ ≤ 2ε.

On the other hand,

    ⟨ F − E[ Σ σ_i χ_i ], H ∘ P ⟩ ≥ ⟨F, H ∘ P⟩ − 2^c disc_P(H)

by the definition of discrepancy. The theorem follows at once from the last two inequalities.

Theorem 3.3 extends word-for-word to the multiparty model, as follows:

Theorem 3.4 ([LS, CA]). Fix a function F : X → {−1,+1} and ε ∈ [0, 1/2), where X = X_1 × ··· × X_k. Then for all functions H : X → {−1,+1} and all probability distributions P on X,

    R_ε(F) ≥ log( (⟨F, H ∘ P⟩ − 2ε) / disc_P(H) ).

Proof. Identical to the two-party case (Theorem 3.3), with the word "rectangles" replaced by "cylinder intersections."

3.2 Pattern Matrix Method

To apply the generalized discrepancy method to a given Boolean function F, one needs to identify a Boolean function H which is well correlated with F under some distribution P but has low discrepancy with respect to P. The pattern matrix method [S1, S2] is a systematic technique for finding such H and P. To simplify the exposition of our main results, we will now review this method and sketch its proof.

Recall that the ε-approximate degree of a function f : {0,1}^n → ℝ, denoted deg_ε(f), is the least degree of a real polynomial p with ‖f − p‖_∞ ≤ ε. A starting point in the pattern matrix method is the following dual formulation of the approximate degree.

Fact 3.5. Fix ε ≥ 0. Let f : {0,1}^n → ℝ be given with d = deg_ε(f) ≥ 1.
Then there is a function ψ : {0,1}^n → ℝ such that:

    ψ̂(S) = 0 for |S| < d,
    Σ_{z∈{0,1}^n} |ψ(z)| = 1,
    Σ_{z∈{0,1}^n} ψ(z) f(z) ≥ ε.

See [S2] for a proof of this fact using linear programming duality. The crux of the method is the following theorem.

Theorem 3.6 ([S1]). Fix a function h : {0,1}^n → {−1,+1} and a probability distribution μ on {0,1}^n such that (h∘μ)^(S) = 0 for |S| < d. Let N be a given integer. Define

    H = [h(x|_V)]_{x,V},    P = 2^{−N+n} \binom{N}{n}^{−1} [μ(x|_V)]_{x,V},

where the rows are indexed by x ∈ {0,1}^N and the columns by V ∈ \binom{[N]}{n}. Then

    disc_P(H) ≤ ( 4e n² / (N d) )^{d/2}.

At last, we are ready to state the pattern matrix method.

Theorem 3.7 ([S2]). Let f : {0,1}^n → {−1,+1} be a given function, d = deg_{1/3}(f). Let N be a given integer. Define F = [f(x|_V)]_{x,V}, where the rows are indexed by x ∈ {0,1}^N and the columns by V ∈ \binom{[N]}{n}. If N ≥ 16e n²/d, then

    R(F) = Ω( d log( N d / (4e n²) ) ).

Proof (adapted from [S2]). Let ε = 1/10. By Fact 3.5, there exist a function h : {0,1}^n → {−1,+1} and a probability distribution μ on {0,1}^n such that

    (h∘μ)^(S) = 0,    |S| < d,                                    (3.2)

and

    Σ_{z∈{0,1}^n} f(z) μ(z) h(z) ≥ 1/3.                           (3.3)

Letting H = [h(x|_V)]_{x,V} and P = 2^{−N+n} \binom{N}{n}^{−1} [μ(x|_V)]_{x,V}, we obtain from (3.2) and Theorem 3.6 that

    disc_P(H) ≤ ( 4e n² / (N d) )^{d/2}.                          (3.4)

At the same time, one sees from (3.3) that

    ⟨F, H ∘ P⟩ ≥ 1/3.                                             (3.5)

The theorem now follows from (3.4) and (3.5) in view of the generalized discrepancy method, Theorem 3.3.

Remark. Presented above is a weaker, combinatorial version of the pattern matrix method. The communication lower bounds in Theorems 3.6 and 3.7 were improved to optimal in [S2] using matrix-analytic techniques.
Unlike the combinatorial argument above, however, the matrix-analytic proof is not known to extend to the multiparty model and is not used in the follow-up multiparty papers [C, LS, CA, DP, DPV, BH] or in our work. An alternate technique based on Fact 3.5 is the block-composition method [SZ], developed independently of the pattern matrix method. See [S3, §5.3] for a comparative discussion.

4 A New Criterion for Nondeterministic and Merlin-Arthur Complexity

In this section, we derive a new criterion for high communication complexity in the nondeterministic and Merlin-Arthur models. This criterion, inspired by the generalized discrepancy method, will allow us to obtain our main result.

Theorem 4.1. Let F : X → {−1,+1} be given, where X = X_1 × ··· × X_k. Fix a function H : X → {−1,+1} and a probability distribution P on X. Put

    α = P(F⁻¹(−1) ∩ H⁻¹(−1)),
    β = P(F⁻¹(−1) ∩ H⁻¹(+1)),
    Q = log( α / (β + disc_P(H)) ).

Then

    N(F) ≥ Q                                                      (4.1)

and

    MA(F) ≥ min{ Ω(√Q), Ω( Q / log(2/α) ) }.                      (4.2)

Proof. Put c = N(F). Then there is a cover of F⁻¹(−1) by 2^c cylinder intersections, each contained in F⁻¹(−1). Fix one such cover, χ_1, χ_2, ..., χ_{2^c} : X → {0,1}. By the definition of discrepancy,

    ⟨ Σ χ_i, −H ∘ P ⟩ ≤ 2^c disc_P(H).

On the other hand, Σ χ_i ranges between 1 and 2^c on F⁻¹(−1) and vanishes on F⁻¹(+1). Therefore,

    ⟨ Σ χ_i, −H ∘ P ⟩ ≥ α − 2^c β.

These two inequalities force (4.1).

We now turn to the Merlin-Arthur model. Let c = MA(F) and δ = α 2^{−c−1}. The first step is to improve the error probability of the Merlin-Arthur protocol by repetition from 1/3 to δ. Specifically, following Klauck [K2] we observe that there exist randomized protocols F_1, ..., F_{2^c} : X → {0,1}, each a random variable of the coin tosses and each having communication cost c′ = O(c log(1/δ)), such that the sum Σ E[F_i] ranges in [1 − δ, 2^c] on F⁻¹(−1) and in [0, δ 2^c] on F⁻¹(+1). As a result,

    ⟨ Σ E[F_i], −H ∘ P ⟩ ≥ α(1 − δ) − β 2^c − (1 − α − β) δ 2^c.   (4.3)

At the same time,

    ⟨ Σ E[F_i], −H ∘ P ⟩ ≤ Σ_{i=1}^{2^c} 2^{c′} disc_P(H) = 2^{c+c′} disc_P(H).   (4.4)

The bounds in (4.3) and (4.4) force (4.2).

Since the sign tensors H and −H have the same discrepancy under any given distribution, we have the following alternate form of Theorem 4.1.

Corollary 4.2. Let F : X → {−1,+1} be given, where X = X_1 × ··· × X_k. Fix a function H : X → {−1,+1} and a probability distribution P on X. Put

    α = P(F⁻¹(+1) ∩ H⁻¹(+1)),
    β = P(F⁻¹(+1) ∩ H⁻¹(−1)),
    Q = log( α / (β + disc_P(H)) ).

Then N(−F) ≥ Q and

    MA(−F) ≥ min{ Ω(√Q), Ω( Q / log(2/α) ) }.

At first glance, it is unclear how the nondeterministic bound of Theorem 4.1 and its counterpart Corollary 4.2 relate to the generalized discrepancy method. We now pause to make this relationship quite explicit. Recall that nondeterminism is a kind of randomized computation, viz., a nondeterministic protocol with cost c for a function F is a kind of cost-c randomized protocol with error probability at most ε = 1/2 − 2^{−c} on F⁻¹(−1) and error probability ε = 0 elsewhere. This is the setting of Theorem 4.1. The generalized discrepancy method, on the other hand, has a single error parameter ε for all inputs. To best convey this distinction between the two methods, we formulate a more general criterion yet, which allows for different errors on each input.

Theorem 4.3. Let F : X → {−1,+1} be given, where X = X_1 × ··· × X_k.
Let c be the least cost of a public-coin protocol for F with error probability E(x) on input x ∈ X, for some E : X → [0, 1/2]. Then for all functions H : X → {−1,+1} and all probability distributions P on X,

    2^c ≥ ( ⟨F, H ∘ P⟩ − 2⟨P, E⟩ ) / disc_P(H).

Proof. A public-coin protocol with cost c is a probability distribution on deterministic protocols with cost at most c. Then by hypothesis, there are random variables χ_1, χ_2, ..., χ_{2^c} : X → {0,1}, each a cylinder intersection, and random variables σ_1, σ_2, ..., σ_{2^c} ∈ {−1,+1}, such that

    | F(x) − E[ Σ σ_i χ_i(x) ] | ≤ 2E(x)    for x ∈ X.

Therefore,

    ⟨ F − E[ Σ σ_i χ_i ], H ∘ P ⟩ ≤ 2⟨P, E⟩.

On the other hand,

    ⟨ F − E[ Σ σ_i χ_i ], H ∘ P ⟩ ≥ ⟨F, H ∘ P⟩ − 2^c disc_P(H)

by the definition of discrepancy. The theorem follows at once from the last two inequalities.

5 Main Result

We now prove the claimed separations of nondeterministic, co-nondeterministic, and Merlin-Arthur communication complexity. It will be easier to first obtain these separations by a probabilistic argument and only then sketch an explicit construction. We start by deriving a suitable analytic property of the OR function.

Theorem 5.1. There is a function ψ : {0,1}^m → ℝ such that:

    Σ_{z∈{0,1}^m} |ψ(z)| = 1,                                     (5.1)
    ψ̂(S) = 0 for |S| ≤ Θ(√m),                                    (5.2)
    ψ(0) ≥ 1/6.                                                   (5.3)

Proof. Let f : {0,1}^m → {−1,+1} be given by f(z) = 1 ⇔ z = 0. It is well known [NS, P] that deg_{1/3}(f) ≥ Ω(√m). By Fact 3.5, there is a function ψ : {0,1}^m → ℝ that obeys (5.1), (5.2), and additionally satisfies

    Σ_{z∈{0,1}^m} ψ(z) f(z) ≥ 1/3.

Finally,

    2ψ(0) = Σ_{z∈{0,1}^m} ψ(z){f(z) + 1} = Σ_{z∈{0,1}^m} ψ(z) f(z) ≥ 1/3,

where the second equality follows from ψ̂(∅) = 0.
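To make Fact 3.5 and Theorem 5.1 concrete, the sketch below verifies an explicit dual witness ψ for the small case m = 4, d = 2. The witness and its constants are ours, obtained by solving the underlying linear program by hand; the paper only asserts existence for general m. This ψ is supported on the levels |z| ∈ {0, 1, 4}, its Fourier coefficients vanish for |S| < 2, its correlation with f is 3/4 ≥ 1/3, and consequently ψ(0) = 3/8 ≥ 1/6, exactly as derived in the proof.

```python
from itertools import product

m = 4
cube = list(product((0, 1), repeat=m))

# f in the paper's convention: f(z) = +1 iff z = 0 (-1 means "true")
f = {z: +1 if sum(z) == 0 else -1 for z in cube}

# Hand-solved dual witness, constant on each Hamming level |z|:
def psi(z):
    return {0: 3 / 8, 1: -1 / 8, 4: 1 / 8}.get(sum(z), 0.0)

# (5.1): total mass 1
assert abs(sum(abs(psi(z)) for z in cube) - 1) < 1e-12

# (5.2): Fourier coefficients vanish for |S| < 2 (d = 2 here)
for S in [()] + [(i,) for i in range(m)]:
    coeff = sum(psi(z) * (-1) ** sum(z[i] for i in S) for z in cube) / 2 ** m
    assert abs(coeff) < 1e-12

# Correlation with f, as in Fact 3.5: well above 1/3
corr = sum(psi(z) * f[z] for z in cube)
assert corr == 0.75

# (5.3): psi(0) = corr / 2 > 1/6, exactly as in the proof of Theorem 5.1
assert psi((0,) * m) == corr / 2 > 1 / 6
print("dual witness for m = 4 verified; correlation =", corr)
```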
For the remainder of this section, it will be convenient to establish some additional notation following David and Pitassi [DP]. Fix integers n, m with n ≥ m. Let ψ : {0,1}^m → ℝ be a given function with Σ_{z∈{0,1}^m} |ψ(z)| = 1. Let d denote the least order of a nonzero Fourier coefficient of ψ. Fix a Boolean function h : {0,1}^m → {−1,+1} and the distribution μ on {0,1}^m such that ψ(z) ≡ h(z)μ(z). For a mapping α : ({0,1}^n)^k → \binom{[n]}{m}, define a (k+1)-party communication problem H_α : ({0,1}^n)^{k+1} → {−1,+1} by

    H_α(x, y_1, ..., y_k) = h(x|_{α(y_1,...,y_k)}).

Define a distribution P_α on ({0,1}^n)^{k+1} by

    P_α(x, y_1, ..., y_k) = 2^{−(k+1)n+m} μ(x|_{α(y_1,...,y_k)}).

The following theorem combines the pattern matrix method with a probabilistic argument.

Theorem 5.2 ([DP]). Assume that n ≥ 16e m² 2^k. Then for a uniformly random choice of α : ({0,1}^n)^k → \binom{[n]}{m},

    E_α[ disc_{P_α}(H_α)^{2^k} ] ≤ 2^{−n/2} + 2^{−d 2^k + 1}.

For completeness, we include a detailed proof of this result.

Proof (reproduced from the survey article [S3], pp. 88–89). By Theorem 3.1,

    disc_{P_α}(H_α)^{2^k} ≤ 2^{m 2^k} E_Y |Γ(Y)|,                 (5.4)

where we put Y = (y_1^0, y_1^1, ..., y_k^0, y_k^1) ∈ ({0,1}^n)^{2k} and

    Γ(Y) = E_x [ ∏_{z∈{0,1}^k} ψ( x|_{α(y_1^{z_1}, y_2^{z_2}, ..., y_k^{z_k})} ) ].

For a fixed choice of α and Y, we will use the shorthand S_z = α(y_1^{z_1}, ..., y_k^{z_k}). To analyze Γ(Y), one proves two key claims analogous to those in the two-party Theorem 3.6 (see [S1, S3] for more detail).

Claim 5.3. Assume that |⋃_{z∈{0,1}^k} S_z| > m 2^k − d 2^{k−1}. Then Γ(Y) = 0.

Proof. If |⋃ S_z| > m 2^k − d 2^{k−1}, then some S_z must feature more than m − d elements that do not occur in ⋃_{u≠z} S_u.
But this forces $\Gamma(Y) = 0$, since the Fourier transform of $\psi$ is supported on characters of order $d$ and higher.

Claim 5.4. For every $Y$, $|\Gamma(Y)| \leq 2^{-|\bigcup S_z|}$.

Proof. Immediate from Proposition 2.1.

In view of (5.4) and Claims 5.3 and 5.4, we have
$$\mathbf{E}_\alpha\!\left[\operatorname{disc}_{P_\alpha}(H_\alpha)^{2^k}\right] \;\leq\; \sum_{i = d 2^{k-1}}^{m 2^k - m} 2^i\, \mathbf{P}_{Y,\alpha}\!\left[\Big|\bigcup S_z\Big| = m 2^k - i\right].$$
It remains to bound the probabilities in the last expression. With probability at least $1 - k 2^{-n}$ over the choice of $Y$, we have $y_i^0 \neq y_i^1$ for each $i = 1, 2, \dots, k$. Conditioning on this event, the fact that $\alpha$ is chosen uniformly at random means that the $2^k$ sets $S_z$ are distributed independently and uniformly over $[n]^m$. A calculation now reveals that
$$\mathbf{P}_{Y,\alpha}\!\left[\Big|\bigcup S_z\Big| = m 2^k - i\right] \;\leq\; k 2^{-n} + \binom{m 2^k}{i}\left(\frac{m 2^k}{n}\right)^{i} \;\leq\; k 2^{-n} + 8^{-i}.$$
Substituting this bound into the previous sum, the first terms contribute at most $k\, 2^{m 2^k + 1 - n} \leq 2^{-n/2}$ (using $n \geq 16e\, m^2 2^k$), and the second contribute at most $\sum_{i \geq d 2^{k-1}} 4^{-i} \leq 2^{-d 2^k + 1}$, which completes the proof.

We are ready to prove our main result. It may be helpful to contrast the proof to follow with the proof of the pattern matrix method (Theorem 3.7).

Theorem 5.5. Let $k \leq (1-\epsilon)\log n$, where $\epsilon > 0$ is any given constant. Then there exists a function $F_\alpha \colon (\{0,1\}^n)^{k+1} \to \{-1,+1\}$ such that:
$$N(F_\alpha) = O(\log n) \tag{5.5}$$
and
$$MA(-F_\alpha) = n^{\Omega(1)}. \tag{5.6}$$
In particular, $\mathrm{coNP}^{cc}_k \not\subseteq \mathrm{MA}^{cc}_k$ and $\mathrm{NP}^{cc}_k \neq \mathrm{coNP}^{cc}_k$.

Proof. Let $m = \lfloor n^\delta \rfloor$ for a sufficiently small constant $\delta = \delta(\epsilon) > 0$. As usual, define $\mathrm{OR}_m \colon \{0,1\}^m \to \{-1,+1\}$ by $\mathrm{OR}_m(z) = 1 \Leftrightarrow z = 0$. Let $\psi \colon \{0,1\}^m \to \mathbb{R}$ be as guaranteed by Theorem 5.1. For a mapping $\alpha \colon (\{0,1\}^n)^k \to [n]^m$, let $H_\alpha$ and $P_\alpha$ be defined in terms of $\psi$ as described earlier in this section. Then Theorem 5.2 shows the existence of $\alpha$ such that
$$\operatorname{disc}_{P_\alpha}(H_\alpha) \;\leq\; 2^{-\Omega(\sqrt{m})}. \tag{5.7}$$
Define $F_\alpha \colon (\{0,1\}^n)^{k+1} \to \{-1,+1\}$ by $F_\alpha(x, y_1, \dots, y_k) = \mathrm{OR}_m\big(x|_{\alpha(y_1,\dots,y_k)}\big)$. It is immediate from the properties of $\psi$ that
$$P_\alpha\big(F_\alpha^{-1}(+1) \cap H_\alpha^{-1}(+1)\big) \;\geq\; \frac{1}{6}, \tag{5.8}$$
$$P_\alpha\big(F_\alpha^{-1}(+1) \cap H_\alpha^{-1}(-1)\big) \;=\; 0. \tag{5.9}$$
The sought lower bound in (5.6) now follows from (5.7)–(5.9) and Corollary 4.2.

On the other hand, as observed in [DP], the function $F_\alpha$ has an efficient nondeterministic protocol. Namely, player 1 (who knows $y_1, \dots, y_k$) nondeterministically selects an element $i \in \alpha(y_1, \dots, y_k)$ and writes $i$ on the shared blackboard. Player 2 (who knows $x$) then announces $x_i$ as the output of the protocol. This yields the desired upper bound in (5.5).

As promised, we will now sketch an explicit construction of the function whose existence has just been proven. For this, it suffices to invoke previous work by David, Pitassi, and Viola [DPV], who derandomized the choice of $\alpha$ in Theorem 5.2. More precisely, instead of working with a family $\{H_\alpha\}$ of functions, each given by $H_\alpha(x, y_1, \dots, y_k) = h\big(x|_{\alpha(y_1,\dots,y_k)}\big)$, the authors of [DPV] posited a single function
$$H(\alpha, x, y_1, \dots, y_k) = h\big(x|_{\alpha(y_1,\dots,y_k)}\big),$$
where the new argument $\alpha$ is known to all players and ranges over a small, explicitly given subset $A$ of all mappings $(\{0,1\}^n)^k \to [n]^m$. By choosing $A$ to be pseudorandom, the authors of [DPV] forced the same qualitative conclusion in Theorem 5.2. This development carries over unchanged to our setting, and we obtain our main result.

Theorem 1.1 (Restated from p. 1). Let $k \leq (1-\epsilon)\log_2 n$, where $\epsilon > 0$ is any given constant. Then there is an (explicitly given) function $F \colon (\{0,1\}^n)^k \to \{-1,+1\}$ with
$$N(-F) = O(\log n)$$
and
$$MA(F) = n^{\Omega(1)}.$$
In particular, $\mathrm{coNP}^{cc}_k \not\subseteq \mathrm{MA}^{cc}_k$ and $\mathrm{NP}^{cc}_k \neq \mathrm{coNP}^{cc}_k$.

Proof. Identical to Theorem 5.5, with the described derandomization of $\alpha$.

6  On Disjointness and Constant-Depth Circuits

In this final section, we revisit recent multiparty analyses of the disjointness function and other constant-depth circuits [C, LS, CA, BH].
We will see that the program of the previous sections applies essentially unchanged to these other functions. We start with some notation. Fix a function $\phi \colon \{0,1\}^m \to \mathbb{R}$ and an integer $N$ with $m \mid N$. Define the $(k, N, m, \phi)$-pattern tensor as the $k$-argument function
$$A \colon \{0,1\}^{m(N/m)^{k-1}} \times [N/m]^m \times \cdots \times [N/m]^m \to \mathbb{R}$$
given by
$$A(x, V_1, \dots, V_{k-1}) = \phi\big(x|_{V_1,\dots,V_{k-1}}\big),$$
where
$$x|_{V_1,\dots,V_{k-1}} = \big(x_{1, V_1[1], \dots, V_{k-1}[1]},\; \dots,\; x_{m, V_1[m], \dots, V_{k-1}[m]}\big) \in \{0,1\}^m$$
and $V_j[i]$ denotes the $i$th element of the $m$-dimensional vector $V_j$. (Note that we index the string $x$ by viewing it as a $k$-dimensional array of $m \times (N/m) \times \cdots \times (N/m) = m(N/m)^{k-1}$ bits.) This definition extends pattern matrices [S1, S2] to higher dimensions. The two-party Theorem 3.6 has been adapted as follows to $k \geq 3$ players.

Theorem 6.1 ([C, LS, CA]). Fix a function $h \colon \{0,1\}^m \to \{-1,+1\}$ and a probability distribution $\mu$ on $\{0,1\}^m$ such that
$$\widehat{h \circ \mu}(S) = 0, \qquad |S| < d.$$
Let $N$ be a given integer, $m \mid N$. Let $H$ be the $(k, N, m, h)$-pattern tensor. Let $P$ be the $(k, N, m, 2^{-m(N/m)^{k-1}+m}(N/m)^{-m(k-1)}\mu)$-pattern tensor. If $N \geq 4e\, m^2 (k-1) 2^{2^{k-1}}/d$, then
$$\operatorname{disc}_P(H) \;\leq\; 2^{-d/2^{k-1}}.$$
A proof of this exact formulation is available in the survey article [S3], pp. 85–86. We are now prepared to apply our techniques to the disjointness function.

Theorem 6.2. Let $N$ be a given integer, $m \mid N$. Let $F$ be the $(k, N, m, \mathrm{OR}_m)$-pattern tensor. If $N \geq 4e\, m^2 (k-1) 2^{2^{k-1}}/d$, then
$$N(-F) \;\geq\; \Omega\!\left(\frac{\sqrt{m}}{2^k}\right), \qquad MA(-F) \;\geq\; \Omega\!\left(\frac{\sqrt[4]{m}}{2^{k/2}}\right).$$

Proof. Let $\psi \colon \{0,1\}^m \to \mathbb{R}$ be as guaranteed by Theorem 5.1. Fix a function $h \colon \{0,1\}^m \to \{-1,+1\}$ and a distribution $\mu$ on $\{0,1\}^m$ such that $\psi(z) \equiv h(z)\mu(z)$. Let $H$ be the $(k, N, m, h)$-pattern tensor.
Let $P$ be the $(k, N, m, 2^{-m(N/m)^{k-1}+m}(N/m)^{-m(k-1)}\mu)$-pattern tensor, which is a probability distribution. Then by Theorem 6.1,
$$\operatorname{disc}_P(H) \;\leq\; 2^{-\Omega(\sqrt{m}/2^k)}. \tag{6.1}$$
On the other hand, it is clear from the properties of $\psi$ that
$$P\big(F^{-1}(+1) \cap H^{-1}(+1)\big) \;\geq\; \frac{1}{6}, \tag{6.2}$$
$$P\big(F^{-1}(+1) \cap H^{-1}(-1)\big) \;=\; 0. \tag{6.3}$$
In view of (6.1)–(6.3) and Corollary 4.2, the proof is complete.

The function $F$ in Theorem 6.2 is a subfunction of the multiparty disjointness function $\mathrm{DISJ} \colon (\{0,1\}^n)^k \to \{-1,+1\}$, where $n = m(N/m)^{k-1}$ and
$$\mathrm{DISJ}(x_1, \dots, x_k) = \bigvee_{j=1}^{n} \bigwedge_{i=1}^{k} x_{ij}.$$
Recall that disjointness has trivial nondeterministic complexity, $O(\log n)$. In particular, Theorem 6.2 shows that the disjointness function separates $\mathrm{NP}^{cc}_k$ from $\mathrm{coNP}^{cc}_k$ and witnesses that $\mathrm{coNP}^{cc}_k \not\subseteq \mathrm{MA}^{cc}_k$ for up to $k = \Theta(\log\log n)$ players. Our technique similarly applies to the follow-up work on disjointness by Beame and Huynh-Ngoc [BH], whence we obtain the stronger consequence that the disjointness function separates $\mathrm{NP}^{cc}_k$ from $\mathrm{coNP}^{cc}_k$ and witnesses that $\mathrm{coNP}^{cc}_k \not\subseteq \mathrm{MA}^{cc}_k$ for up to $k = \Theta(\log^{1/3} n)$ players.

We conclude this section with a remark on constant-depth circuits. Let $\epsilon$ be a sufficiently small absolute constant, $0 < \epsilon < 1$. For each $k = 2, 3, \dots, \epsilon \log n$, the authors of [BH] construct a constant-depth circuit $F \colon (\{0,1\}^n)^k \to \{-1,+1\}$ with $N(F) = \log^{O(1)} n$ and $R(F) = n^{\Omega(1)}$. A glance at the proof in [BH] reveals, once again, that the program of our paper is readily applicable to $F$, with the consequence that $MA(-F) = n^{\Omega(1)}$. In particular, our work shows that $\mathrm{NP}^{cc}_k \neq \mathrm{coNP}^{cc}_k$ and $\mathrm{coNP}^{cc}_k \not\subseteq \mathrm{MA}^{cc}_k$ for up to $k = \epsilon \log n$ players, as witnessed by a constant-depth circuit.

References

[BH] P. Beame and D.-T. Huynh-Ngoc. Multiparty communication complexity and threshold circuit size of AC^0. In Electronic Colloquium on Computational Complexity (ECCC), September 2008. Report TR08-082.

[BNS] L. Babai, N. Nisan, and M. Szegedy. Multiparty protocols, pseudorandom generators for logspace, and time-space trade-offs. J. Comput. Syst. Sci., 45(2):204–232, 1992.

[BPS] P. Beame, T. Pitassi, and N. Segerlind. Lower bounds for Lovász–Schrijver systems and beyond follow from multiparty communication complexity. SIAM J. Comput., 37(3):845–869, 2007.

[C] A. Chattopadhyay. Discrepancy and the power of bottom fan-in in depth-three circuits. In Proc. of the 48th Symposium on Foundations of Computer Science (FOCS), pages 449–458, 2007.

[CA] A. Chattopadhyay and A. Ada. Multiparty communication complexity of disjointness. In Electronic Colloquium on Computational Complexity (ECCC), January 2008. Report TR08-002.

[CFL] A. K. Chandra, M. L. Furst, and R. J. Lipton. Multi-party protocols. In Proc. of the 15th Symposium on Theory of Computing (STOC), pages 94–99, 1983.

[CT] F. R. K. Chung and P. Tetali. Communication complexity and quasirandomness. SIAM J. Discrete Math., 6(1):110–123, 1993.

[DP] M. David and T. Pitassi. Separating NOF communication complexity classes RP and NP. In Electronic Colloquium on Computational Complexity (ECCC), February 2008. Report TR08-014.

[DPV] M. David, T. Pitassi, and E. Viola. Improved separations between nondeterministic and randomized multiparty communication. In Proc. of the 12th Intl. Workshop on Randomization and Computation (RANDOM), pages 371–384, 2008.

[HG] J. Håstad and M. Goldmann. On the power of small-depth threshold circuits. Computational Complexity, 1:113–129, 1991.

[K1] H. Klauck. Lower bounds for quantum communication complexity. In Proc. of the 42nd Symposium on Foundations of Computer Science (FOCS), pages 288–297, 2001.

[K2] H. Klauck. Rectangle size bounds and threshold covers in communication complexity. In Proc. of the 18th Conf. on Computational Complexity (CCC), pages 118–134, 2003.

[KN] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, New York, 1997.

[KS] B. Kalyanasundaram and G. Schnitger. The probabilistic communication complexity of set intersection. SIAM J. Discrete Math., 5(4):545–557, 1992.

[LS] T. Lee and A. Shraibman. Disjointness is hard in the multi-party number-on-the-forehead model. In Proc. of the 23rd Conf. on Computational Complexity (CCC), pages 81–91, 2008.

[NS] N. Nisan and M. Szegedy. On the degree of Boolean functions as real polynomials. Computational Complexity, 4:301–313, 1994.

[P] R. Paturi. On the degree of polynomials that approximate symmetric Boolean functions. In Proc. of the 24th Symposium on Theory of Computing (STOC), pages 468–474, 1992.

[R1] R. Raz. The BNS-Chung criterion for multi-party communication complexity. Computational Complexity, 9(2):113–122, 2000.

[R2] A. A. Razborov. On the distributional complexity of disjointness. Theor. Comput. Sci., 106(2):385–390, 1992.

[R3] A. A. Razborov. Quantum communication complexity of symmetric predicates. Izvestiya: Mathematics, 67(1):145–159, 2003.

[RW] A. A. Razborov and A. Wigderson. $n^{\Omega(\log n)}$ lower bounds on the size of depth-3 threshold circuits with AND gates at the bottom. Inf. Process. Lett., 45(6):303–307, 1993.

[S1] A. A. Sherstov. Separating AC^0 from depth-2 majority circuits. SIAM J. Comput., 38(6):2113–2129, 2009. Preliminary version in 39th STOC, 2007.

[S2] A. A. Sherstov. The pattern matrix method for lower bounds on quantum communication. In Proc. of the 40th Symposium on Theory of Computing (STOC), pages 85–94, 2008.

[S3] A. A. Sherstov. Communication lower bounds using dual polynomials. Bulletin of the EATCS, 95:59–93, 2008.

[SZ] Y. Shi and Y. Zhu. Quantum communication complexity of block-composed functions. Quantum Information & Computation, 9(5–6):444–460, 2009.

[Y] A. C.-C. Yao. On ACC and threshold circuits. In Proc. of the 31st Symposium on Foundations of Computer Science (FOCS), pages 619–627, 1990.