On the Double Coset Membership Problem for Permutation Groups

We show that the Double Coset Membership problem for permutation groups possesses perfect zero-knowledge proofs.

Authors: Oleg Verbitsky

This paper was published in Algebraic structures and their applications. Proceedings of the third international algebraic conference held in the framework of the Ukrainian mathematical congress (Kiev, 2001), pages 351–363. Institute of Mathematics, Ukrainian Academy of Sciences (2002). Reviewed in Mathematical Reviews MR2210506 (2006m:68054) and Zentralblatt für Mathematik Zbl 1099.20501.

Oleg Verbitsky
Department of Algebra
Faculty of Mechanics & Mathematics
Kyiv National University
Volodymyrska 60
01033 Kyiv, Ukraine

Abstract. We show that the Double Coset Membership problem for permutation groups possesses perfect zero-knowledge proofs.

1 Introduction

1.1 Definition of the problem

Let S_m be a symmetric group of order m. We suppose that an element of S_m, a permutation of an m-element set, is encoded by a binary string of length n = ⌈log_2 m!⌉, m(log_2 m − O(1)) ≤ n ≤ m log_2 m. Whenever we refer to a permutation group G, we mean that G is a subgroup of S_m for some m. Throughout the paper we assume that permutation groups are given by a list of their generators. In this paper we address the following algorithmic problem, considered first by Luks [21].

DCM (Double Coset Membership)
Given: two permutations σ and τ and two permutation groups G and H, all of the same order.
Recognize if: σ ∈ GτH.

1.2 Current complexity status

For the background on computational complexity theory the reader is referred to [10]. DCM is in the class NP by the Babai–Szemerédi Reachability Theorem [5]. This theorem says that, given any set S of generators of a finite group G and any g ∈ G, there exists a sequence of elements u_1, …, u_l of G such that the following conditions are met.
1. Each u_i either belongs to S or is obtained by the inversion or the group operation from one or two previous elements of the sequence.
2. u_l = g.
3. l ≤ (1 + log_2 |G|)².

As σ ∈ GτH iff τ⁻¹σ ∈ (τ⁻¹Gτ)H, DCM admits the following reformulation.

DCM (An equivalent formulation)
Given: a permutation s and two permutation groups G and H, all of the same order.
Recognize if: s ∈ GH.

Consider two related problems, the first one easier and the second one harder than DCM.

Membership in a Permutation Group
Given: a permutation s and a permutation group G of the same order.
Recognize if: s ∈ G.

Membership in a 3-fold Group Product
Given: a permutation s and three permutation groups G, H, and K, all of the same order.
Recognize if: s ∈ GHK.

It is known that the former problem is solvable in polynomial time [25, 9] and that the latter problem is NP-complete [22]. There is evidence that the complexity of DCM is strictly in between. On the one hand, the problem of recognizing whether two given graphs are isomorphic is polynomial-time reducible to DCM [21]; see also Proposition 3.2 below. DCM is therefore not expected to be solvable in polynomial time as long as the Graph Isomorphism problem is not solved in polynomial time (the currently best algorithm, due to Luks and Zemlyachenko, runs in time exp(O(√(n log n))) for graphs on n vertices, see [3]). On the other hand, DCM belongs to the complexity class coAM (see Subsection 2.1 for the definition). By [8], if NP is a subclass of coAM, then the polynomial-time hierarchy of complexity classes collapses to its second level, i.e., Σ_2^P = Π_2^P (see [10]). As the latter consequence is widely considered unlikely, it is unlikely that DCM is NP-complete. Like the membership in coAM, some other complexity-theoretic results known for Graph Isomorphism also generalize to DCM.
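For intuition, the reformulated question s ∈ GH can be decided by brute force on tiny instances (exponential in general, so purely illustrative). The sketch below is hypothetical code, not from the paper: it closes generator lists of two small subgroups of S_4 under composition, tests double coset membership by enumeration, and checks the equivalence σ ∈ GτH iff τ⁻¹σ ∈ (τ⁻¹Gτ)H used above.

```python
from itertools import product

def compose(p, q):            # (p∘q)(x) = p(q(x)); permutations as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def generate(gens):
    """Close a generator list under composition (fine for tiny groups only)."""
    group = {tuple(range(len(gens[0])))} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(group, repeat=2)} - group
        if not new:
            return group
        group |= new

# Two small subgroups of S_4, chosen arbitrarily for illustration.
G = generate([(1, 0, 2, 3)])                  # <(0 1)>, order 2
H = generate([(0, 1, 3, 2), (0, 2, 1, 3)])    # <(2 3), (1 2)>, order 6

def in_double_coset(sigma, tau, G, H):
    """Brute-force test of sigma ∈ G·tau·H."""
    return any(compose(compose(g, tau), h) == sigma
               for g in G for h in H)

tau = (1, 2, 0, 3)
sigma = (2, 1, 0, 3)
# Equivalent formulation: sigma ∈ G·tau·H  iff  tau⁻¹·sigma ∈ (tau⁻¹·G·tau)·H.
Gconj = {compose(compose(inverse(tau), g), tau) for g in G}
lhs = in_double_coset(sigma, tau, G, H)
rhs = in_double_coset(compose(inverse(tau), sigma),
                      tuple(range(4)), Gconj, H)
assert lhs == rhs
```

The enumeration of G × H is of course exponential in general; the polynomial-time results quoted above rely on the strong generating set machinery of Sims cited in Proposition 3.1, not on enumeration.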
Both problems have program checkers [7], and both are low for the complexity class PP [20]. It is worth noting that several other group-theoretic problems are polynomial-time equivalent with DCM. We mention a few examples from the list of such problems compiled in [21, 19]: Given permutation groups G, H and permutations σ, τ,
(a) find generators for G ∩ H;
(b) recognize if Gσ and Hτ intersect;
(c) if σ ∈ G, find the centralizer of σ in G;
(d) if σ, τ ∈ G, recognize if the centralizer of τ in S_m intersects Gσ.
In [7] it is shown that DCM is equivalent with the problem of finding, given s ∈ GH, a factorization s = gh with g ∈ G and h ∈ H.

1.3 Our result

A natural question to ask about an NP problem whose polynomial-time solvability and NP-completeness are unknown is whether it possesses a perfect or a statistical zero-knowledge interactive proof system. Informally speaking, a zero-knowledge proof system for a recognition problem of a language L is a protocol for two parties, the prover and the verifier, that allows the prover to convince the verifier that a given input belongs to L, with high confidence but without communicating to the verifier any information (the rigorous definitions are in Subsection 2.1). The concept of a zero-knowledge proof has notable applications in designing cryptographic protocols and in estimating the computational complexity of a language recognition problem. Namely, by [1] the class PZK of languages having perfect zero-knowledge proof systems is a subclass of coAM. Thus, the existence of a perfect zero-knowledge proof of the membership in L not only has a cryptographic meaning but also implies that L is in coAM and hence cannot be NP-complete unless the polynomial-time hierarchy collapses.
For the Graph Isomorphism problem, its membership in coAM was proven directly in [24] and its membership in PZK was proven in [14]. For DCM, the proof of its membership in coAM given in [4] is direct. In the present paper we prove that DCM is also in PZK. We therefore extend the list of problems in PZK that currently includes Graph Isomorphism [14], Quadratic Residuosity [16], a problem equivalent to Discrete Logarithm [13], and approximate versions of the Shortest Vector and Closest Vector problems for integer lattices [11].

2 Background on zero-knowledge proofs

2.1 Definitions

We denote the length of a binary word w by |w|. We consider languages over the binary alphabet which are subsets of {0, 1}*. The complement of L is the language L̄ = {0, 1}* \ L. Note that the DCM problem can be represented as a recognition problem for the language L = {(s, G, H) : s ∈ GH}, where (s, G, H) is a suitable binary encoding of the triplet consisting of a permutation s and the lists of generators for permutation groups G and H.

We use the standard computational model of a deterministic Turing machine, abbreviated further on as TM. We assume that a TM has three tapes, namely, the input tape, the output tape, and the work tape where all computations are performed. A probabilistic TM, abbreviated further on as PTM, in addition has a fourth tape containing a potentially infinite random binary string. Assuming that a PTM halts on input w and random string r, we denote its running time by t(w, r). A PTM is polynomial-time if t(w, r) is bounded by a polynomial in |w| for all w and r. Assuming that a PTM halts on w for almost all r, the function t(w, r) for a fixed w can be considered as a random variable on the probability space {0, 1}^N of all random strings.
A PTM is expected polynomial-time on L ⊆ {0, 1}* if for all w ∈ L the expectation of t(w, r) is bounded by a polynomial in |w|.

An interactive proof system ⟨V, P⟩, further on abbreviated as IPS, consists of two PTMs, a polynomial-time V called the verifier and a computationally unlimited P called the prover. The input tape is common for the verifier and the prover. The verifier and the prover also share a communication tape which allows message exchange between them. The system works as follows. First both the machines V and P are given an input w and each of them is given an individual random string, r_V for V and r_P for P. Then P and V alternately write messages to one another on the communication tape. V computes its i-th message a_i to P based on the input w, the random string r_V, and all previous messages from P to V. P computes its i-th message b_i to V based on the input w, the random string r_P, and all previous messages from V to P. After a number of message exchanges V terminates interaction and computes an output based on w, r_V, and all b_i. The output is denoted by ⟨V, P⟩(w). Note that, for a fixed w, ⟨V, P⟩(w) is a random variable depending on both random strings r_V and r_P.

Let ε(n) be a function of a natural argument taking on positive real values. We call ε(n) negligible if ε(n) < n^{−c} for every c and all n starting from some n_0(c). For example, an exponentially small function ε(n) = d^{−n}, where d > 1, is negligible. We say that ⟨V, P⟩ is an IPS for a language L with error ε(n) if the following two conditions are fulfilled.

Completeness. If w ∈ L, then ⟨V, P⟩(w) = 1 with probability at least 1 − ε(|w|).
Soundness. If w ∉ L, then, for an arbitrary interacting PTM P*, ⟨V, P*⟩(w) = 1 with probability at most ε(|w|).
We will call any prover P* interacting with V on input w ∉ L cheating. If in the completeness condition we have ⟨V, P⟩(w) = 1 with probability 1, we say that ⟨V, P⟩ has one-sided error ε(n). We say that ⟨V, P⟩ is an IPS for a language L if ⟨V, P⟩ is an IPS for L with negligible error. An IPS is public-coin if the concatenation a_1 … a_k of the verifier's messages is a prefix of his random string r_V. A round is the sending of one message from the verifier to the prover or from the prover to the verifier. The class AM consists of those languages having IPSs with error 1/3 and with the number of rounds bounded by a constant for all inputs. A language L belongs to the class coAM iff its complement L̄ belongs to AM.

Given an IPS ⟨V, P⟩ and an input w, let view_{V,P}(w) = (r′_V, a_1, b_1, …, a_k, b_k), where r′_V is the part of r_V scanned by V during work on w and a_1, b_1, …, a_k, b_k are all messages from V to P and from P to V (a_1 may be empty if the first message is sent by P). Note that the verifier's messages a_1, …, a_k could be excluded because they are efficiently computable from the other components. For a fixed w, view_{V,P}(w) is a random variable depending on r_V and r_P.

An IPS ⟨V, P⟩ is perfect zero-knowledge on L if for every interacting polynomial-time PTM V* there is a PTM M_{V*}, called a simulator, that on every input w ∈ L runs in expected polynomial time and produces output M_{V*}(w) which, if considered as a random variable depending on a random string of M_{V*}, is distributed identically with view_{V*,P}(w). The latter condition means that P[M_{V*}(w) = z] = P[view_{V*,P}(w) = z] for all z. If only the weaker condition holds that

∑_z |P[M_{V*}(w) = z] − P[view_{V*,P}(w) = z]|

is negligible, we call ⟨V, P⟩ statistical zero-knowledge.
These notions formalize the claim that the verifier gets no information during interaction with the prover: everything that the verifier gets he can get without the prover by running the simulator. According to the definition, the verifier learns nothing even if he deviates from the original program and follows an arbitrary probabilistic polynomial-time program V*. We will call the verifier V honest and all other verifiers V* cheating. If the existence of a simulator is claimed only for the honest verifier, we call such a proof system honest-verifier perfect (or statistical) zero-knowledge. The class of languages L having IPSs that are perfect (resp. statistical) zero-knowledge on L is denoted by PZK (resp. SZK). Recall that the error here is supposed negligible.

The k(n)-fold sequential composition of an IPS ⟨V, P⟩ is the IPS ⟨V′, P′⟩ in which V′ and P′ on input w execute the programs of V and P sequentially k(|w|) times, each time with an independent choice of random strings r_V and r_P. At the end of interaction V′ outputs 1 iff ⟨V, P⟩(w) = 1 in all k(|w|) executions. The initial system ⟨V, P⟩ is called atomic. In the k(n)-fold parallel composition ⟨V″, P″⟩ of ⟨V, P⟩, the program of ⟨V, P⟩ is executed k(|w|) times in parallel, that is, in each round all k(|w|) versions of a message are sent from one machine to another at once as a single long message. In every parallel execution V″ and P″ use independent copies of r_V and r_P. At the end of interaction V″ outputs 1 iff ⟨V, P⟩(w) = 1 in all k(|w|) executions.

2.2 Known results on zero-knowledge proofs

We first notice a simple property of sequential composition of IPSs.
Proposition 2.1 If ⟨V, P⟩ is an IPS for a language L with one-sided constant error ε, then the k(n)-fold sequential composition of ⟨V, P⟩ is an IPS for L with one-sided error ε^{k(n)}.

Parallel composition obviously preserves the number of rounds, the public-coin property, and the property of the error to be one-sided. It is not hard to prove that k-fold parallel composition reduces the one-sided error ε to ε^k. It is also not hard to prove that parallel composition preserves perfect and statistical zero-knowledge for the honest verifier. These observations are summarized in the next proposition.

Proposition 2.2 Assume that ⟨V, P⟩ is an honest-verifier perfect zero-knowledge public-coin IPS for a language L that on all inputs works in a constant number c of rounds with one-sided constant error ε. Then the k(n)-fold parallel composition of ⟨V, P⟩ is an honest-verifier perfect zero-knowledge IPS for L that works in c rounds with error ε^{k(n)}.

We also refer to the following deep results in the theory of zero-knowledge proofs.

Proposition 2.3 (Aiello–Håstad [1]) SZK ⊆ coAM.

Proposition 2.4 (Okamoto [23])
1. Every honest-verifier statistical zero-knowledge IPS for a language L can be transformed into an honest-verifier statistical zero-knowledge public-coin IPS for L.
2. If L has an honest-verifier statistical zero-knowledge public-coin IPS, then L̄ has an honest-verifier statistical zero-knowledge constant-round IPS.

Note that item 2 of Proposition 2.4 strengthens Proposition 2.3 because by [17] every IPS can be made public-coin at the cost of increasing the number of rounds by 2.

Proposition 2.5 (Goldreich–Sahai–Vadhan [15]) Every honest-verifier statistical zero-knowledge public-coin IPS for a language L can be transformed into a general statistical zero-knowledge public-coin IPS for L. If the error of the initial IPS is one-sided, so is the error of the resulting IPS.
Note that, to achieve a negligible error, the transformation of Proposition 2.5 makes the number of rounds grow with the input size, even if the initial IPS is constant-round. A transformation preserving a constant number of rounds is known only under an unproven assumption about the hardness of the Discrete Logarithm problem (the formal statement of the assumption can be found in [6]).

Proposition 2.6 (Bellare–Micali–Ostrovsky [6]) Suppose that a language L has an honest-verifier statistical zero-knowledge IPS that on every input w works in c(|w|) rounds with error at most 1/3. Then, under the assumption on the hardness of Discrete Logarithm, L has a general statistical zero-knowledge IPS that on input w works in O(c(|w|)) rounds with exponentially small error.

3 Background on permutation groups

Given a finite set X, by a random element of X we mean a random variable uniformly distributed over X.

Proposition 3.1 (Sims [25, 9])
1. There is a polynomial-time algorithm for recognizing Membership in a Permutation Group.
2. There is a probabilistic polynomial-time algorithm that, given a list of generators for a permutation group G, outputs a random element of G.

The DCM problem is at least as hard as testing isomorphism of two given graphs.

Proposition 3.2 (Luks [21], Hoffmann [18]) The Graph Isomorphism problem is polynomial-time reducible to DCM.

We include a proof for the sake of completeness.

Proof. Consider two graphs of order n with adjacency matrices A = (a_ij) and B = (b_ij). Let S_1 = {(i, j) : a_ij = 1} and S_2 = {(i, j) : b_ij = 1}. Let G be the group of permutations of the square {1, …, n}² generated by simultaneous transpositions of the i-th and j-th rows and the i-th and j-th columns for all 1 ≤ i < j ≤ n. The graphs are isomorphic iff G contains a permutation σ such that σ(S_1) = S_2.
Let H be the group of permutations τ such that τ(S_1) = S_1 and let s be an arbitrary permutation such that s(S_1) = S_2. As is easily seen, a permutation σ as above exists iff s ∈ GH. ✷

Note that the described reduction allows one to transform any zero-knowledge proof system for DCM into a zero-knowledge proof system for Graph Isomorphism.

4 Zero-knowledge proofs for DCM

Theorem 4.1 The DCM problem has an honest-verifier perfect zero-knowledge three-round public-coin IPS with one-sided error 1/2.

Proof. On input (s, G, H) such that s ∈ GH the IPS ⟨V, P⟩ proceeds as follows.

1st round. P generates random elements g ∈ G and h ∈ H, computes t = gsh, and sends t to V. V checks if t is a permutation of the given order and, if not (this is possible in the case of a cheating prover), halts and outputs 1.

2nd round. V chooses a random bit b ∈ {0, 1} and sends it to P.

3rd round. Case b = 0. P sends V the permutations g and h. V checks if g ∈ G, h ∈ H, and t = gsh. Case b ≠ 0 (this includes the possibility of a message b ∉ {0, 1} produced by a cheating verifier). P decomposes s into the product s = g_0 h_0 with g_0 ∈ G and h_0 ∈ H, computes g_1 = g g_0 and h_1 = h_0 h, and sends g_1 and h_1 to V. V checks if g_1 ∈ G, h_1 ∈ H, and t = g_1 h_1.

V halts and outputs 1 if the conditions are checked successfully and 0 otherwise.

This IPS is obviously public-coin. We need to check that it is indeed an IPS for DCM with one-sided error 1/2 and, moreover, that it is an honest-verifier perfect zero-knowledge IPS.

Completeness. If s ∈ GH, then it is clear that V outputs 1 with probability 1.

Soundness. Assume that s ∉ GH and consider an arbitrary cheating prover P*. Observe that if both t = gsh with g ∈ G, h ∈ H and t = g_1 h_1 with g_1 ∈ G, h_1 ∈ H, then s ∈ GH.
It follows that, for at least one value of b, V outputs 0, and therefore V outputs 1 with probability at most 1/2.

Zero-knowledge. Assume that s ∈ GH. During interaction with P, V sees view_{V,P}(s, G, H) = (b, t, b, g′, h′), where g′ and h′ are received by V in the 3rd round. If b = 0, then t = g′sh′ with g′ = g and h′ = h. If b = 1, then t = g′h′ with g′ = g g_0 and h′ = h_0 h. In both cases g′ and h′ are random elements of G and H respectively. The random variable view_{V,P}(s, G, H) can therefore be generated by the following simulator: generate a random bit b and random elements g′ ∈ G and h′ ∈ H; if b = 0, set t = g′sh′; if b = 1, set t = g′h′. ✷

Corollary 4.1 The DCM problem has an honest-verifier perfect zero-knowledge three-round public-coin IPS with one-sided error 2^{−n}.

Proof. By Proposition 2.2 the n-fold parallel composition of the IPS from Theorem 4.1 reduces the error to 2^{−n} and preserves the properties of the atomic system. ✷

Let Double Coset Non-Membership, abbreviated as DCNM, be the problem opposite to DCM, that is, given a permutation s and two permutation groups G and H, to recognize if s ∉ GH. The DCNM problem is clearly polynomial-time equivalent with recognition of the set-theoretic complement of DCM, where the latter is encoded as a language over the binary alphabet.

Corollary 4.2 DCNM has an honest-verifier statistical zero-knowledge constant-round IPS.

Proof. The corollary follows from Corollary 4.1 by Proposition 2.4. We also give an alternative direct proof of this claim, describing an honest-verifier perfect zero-knowledge two-round IPS ⟨V, P⟩ for DCNM with one-sided error 1/2. This system, for the case of permutation groups, generalizes the IPS suggested in [2] for the problem of testing membership in a finite group given by a list of generators and an oracle access to the group operation.
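The atomic three-round system of Theorem 4.1, on which the corollaries above are built, can be exercised end to end on a tiny instance. The following is a hypothetical sketch, not from the paper: permutations are tuples, the groups G and H are small subgroups of S_4 enumerated explicitly, and the honest prover knows a factorization s = g_0 h_0. It plays the protocol repeatedly and checks completeness.

```python
import random
from itertools import permutations

def compose(p, q):                      # (p∘q)(x) = p(q(x))
    return tuple(p[q[i]] for i in range(len(p)))

# Assumed tiny instance (not from the paper):
# G = <(0 1)>, H = all permutations of {0,1,2,3} fixing the point 0.
G = {(0, 1, 2, 3), (1, 0, 2, 3)}
H = {p for p in permutations(range(4)) if p[0] == 0}

# A YES-instance: s = g0·h0 with the factorization known to the prover.
g0, h0 = (1, 0, 2, 3), (0, 2, 3, 1)
s = compose(g0, h0)

def atomic_round(rng):
    """One execution of the Theorem 4.1 protocol on the YES-instance s."""
    # 1st round: P commits to t = g·s·h for random g ∈ G, h ∈ H.
    g, h = rng.choice(sorted(G)), rng.choice(sorted(H))
    t = compose(compose(g, s), h)
    # 2nd round: V sends a random challenge bit b.
    b = rng.randrange(2)
    # 3rd round: P answers; V performs its checks.
    if b == 0:
        return g in G and h in H and t == compose(compose(g, s), h)
    g1, h1 = compose(g, g0), compose(h0, h)   # g1 = g·g0, h1 = h0·h
    return g1 in G and h1 in H and t == compose(g1, h1)

rng = random.Random(0)
# Completeness: the honest prover convinces V in every repetition.
assert all(atomic_round(rng) for _ in range(100))
```

Note that in the b ≠ 0 branch the answer (g_1, h_1) = (g g_0, h_0 h) satisfies g_1 h_1 = g s h = t automatically, which is exactly why completeness holds with probability 1.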
On input (s, G, H) such that s ∉ GH the system works as follows.

1st round. V chooses a random bit b to be the first bit of its random string r_V and, based on the subsequent bits of r_V, generates random elements g ∈ G and h ∈ H. If b = 0, V computes t = gh; if b = 1, V computes t = gsh. Then V sends t to P.

2nd round. P recognizes if t ∈ GH. If so, P sets a = 0; if not, P sets a = 1. Then P sends a to V.

V checks if a = b and halts. If the equality is true, V outputs 1; otherwise V outputs 0.

Completeness. Assume that s ∉ GH. In the first round, t ∈ GH if b = 0 and t ∉ GH if b = 1. Therefore V outputs 1 with probability 1.

Soundness. Assume that s ∈ GH. Then t ∈ GH regardless of the value of b. Moreover, t is the product of random elements of G and H and, as a random variable, is independent of the random variable b. It follows that in the second round a message from the cheating prover P* to V, which is a function of s, G, H, r_{P*}, and t, is equal to b with probability at most 1/2. Hence ⟨V, P*⟩(s, G, H) = 1 with probability at most 1/2.

Zero-knowledge. Assume that s ∉ GH. During interaction with P, V sees view_{V,P}(s, G, H) = (r′_V, t, a), where a equals the first bit b of r′_V. The simulator therefore just generates a random string r_V, extracts the first bit b from it, sets a = b, computes g and h based on the remaining bits of r_V, computes t based on b, g, and h, and sets r′_V to be the prefix of r_V that was actually used for these purposes. ✷

Corollary 4.3 DCM is in SZK. Moreover, DCM has a statistical zero-knowledge public-coin IPS with one-sided error.

Proof. Apply the transformation from Proposition 2.5 to the IPS from Corollary 4.1. Note that another proof of the membership of DCM in SZK can be given by applying Propositions 2.4 and 2.5 to the IPS in the alternative proof of Corollary 4.2. ✷
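The two-round DCNM system in the alternative proof of Corollary 4.2 can likewise be simulated on a small NO-instance s ∉ GH. In the hypothetical sketch below, the computationally unlimited prover is realized by brute-force enumeration of the product set GH, and the assertion checks completeness: the honest verifier always accepts.

```python
import random
from itertools import permutations

def compose(p, q):                      # (p∘q)(x) = p(q(x))
    return tuple(p[q[i]] for i in range(len(p)))

# Assumed tiny instance: G = <(0 1)>, H = stabilizer of the point 0 in S_4.
G = {(0, 1, 2, 3), (1, 0, 2, 3)}
H = {p for p in permutations(range(4)) if p[0] == 0}
GH = {compose(g, h) for g in G for h in H}    # the product set G·H

# Pick some s outside GH, so (s, G, H) is a DCNM YES-instance.
s = next(p for p in permutations(range(4)) if p not in GH)

def dcnm_round(rng):
    """One execution of the two-round DCNM protocol."""
    # 1st round: V flips b and sends t = g·h (b = 0) or t = g·s·h (b = 1).
    b = rng.randrange(2)
    g, h = rng.choice(sorted(G)), rng.choice(sorted(H))
    t = compose(g, h) if b == 0 else compose(compose(g, s), h)
    # 2nd round: the unbounded prover answers a = 0 iff t ∈ GH.
    a = 0 if t in GH else 1
    return a == b                         # V accepts iff a = b

rng = random.Random(0)
# Completeness: since s ∉ GH, the prover always recovers b.
assert all(dcnm_round(rng) for _ in range(100))
```

The soundness argument is visible in the code as well: if s were in GH, then t would lie in GH for both values of b, and no prover strategy could correlate its answer with b.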
Corollary 4.4 (Babai–Moran [4]) DCM is in coAM. Therefore DCM is not NP-complete unless the polynomial-time hierarchy collapses at the second level.

Proof. This is an immediate consequence of Corollary 4.2, or a consequence of Corollary 4.3 based on Proposition 2.3. ✷

Corollary 4.5 Under the assumption on the hardness of Discrete Logarithm, DCM has a constant-round statistical zero-knowledge IPS with exponentially small error.

Proof. The corollary follows from Theorem 4.1 by Proposition 2.6. ✷

Theorem 4.2 The n-fold sequential composition of the IPS in Theorem 4.1 is a perfect zero-knowledge public-coin IPS for DCM with exponentially small error. Hence DCM is in PZK.

Proof. Denote the composed IPS by ⟨V, P⟩. As the atomic system is public-coin, so is ⟨V, P⟩. By Proposition 2.1, ⟨V, P⟩ is an IPS for DCM with one-sided error 2^{−n}. We have to prove that ⟨V, P⟩ is perfect zero-knowledge.

For each verifier V* interacting with P we describe a probabilistic expected polynomial-time simulator M_{V*}. The simulator M_{V*} uses the program of V* as a subroutine. Assume that the running time of V* is bounded by a polynomial q(n) in the input size. On input w, M_{V*} will run the program of V* on input w with random string r, where r is the prefix of M_{V*}'s random string of length q(|w|). In all other cases M_{V*} will use the remaining part of its random string.

The work of M_{V*} on input w = (s, G, H) consists of |w| stages, where a stage corresponds to an iteration of the atomic system.

Stage i. M_{V*} chooses random elements g_i ∈ G and h_i ∈ H and a random bit a ∈ {0, 1}. If a = 0, M_{V*} computes t_i = g_i s h_i; if a = 1, it computes t_i = g_i h_i. Then M_{V*} computes b_i = V*(w, r, t_1, g_1, h_1, …, t_{i−1}, g_{i−1}, h_{i−1}, t_i), the message that V*(w, r) sends P in the i-th sequential iteration of the atomic system after receiving P's message t_i and under the condition that in the preceding iterations P's messages were t_1, g_1, h_1, …, t_{i−1}, g_{i−1}, h_{i−1}. If b_i and a are simultaneously equal to or different from 0, then M_{V*} puts v_i = (t_i, b_i, g_i, h_i) and proceeds to the (i + 1)-th stage. If exactly one of b_i and a is equal to 0, then M_{V*} restarts the same i-th stage with a new independent choice of a, g_i, h_i.

After all stages are completed, M_{V*} halts and outputs (r′, v_1, …, v_{|w|}), where r′ is the prefix of r actually used by V* during interaction on input w with the prover sending the messages t_1, g_1, h_1, …, t_{|w|}, g_{|w|}, h_{|w|}. Notice that it might happen that in unsuccessful attempts to pass some stage V* used a prefix of r longer than r′.

We first check that M_{V*} terminates in expected polynomial time whenever s ∈ GH. Since V* is polynomial-time, one attempt to pass Stage i, i ≤ |w|, takes time bounded by a polynomial in |w|. Recall that M_{V*} is programmed so that a and r are independent. Furthermore, a and t_i are independent. Indeed, if a = 1, then t_i = g_i h_i is the product of random elements of G and H. If a = 0, then t_i = (g_i g_0)(h_0 h_i) is such a product as well. Here g_0 ∈ G and h_0 ∈ H are the elements of an arbitrary decomposition s = g_0 h_0. It follows that a and b_i are independent and therefore an execution of the stage is successful with probability 1/2. We conclude that on average each stage consists of 2 executions. Thus, on average M_{V*} makes 2|w| polynomial-time executions and this takes expected polynomial time.

We finally need to check that, whenever s ∈ GH, the output M_{V*}(w) is distributed identically with view_{V*,P}(w).
Notice that both random variables depend on V*'s random string r. It therefore suffices to show that the distributions are identical when conditioned on an arbitrary fixed r. For 0 ≤ i ≤ |w|, let D_M^i(w, r) denote the probability distribution of (r′, v_1, …, v_i) conditioned on r, and let D_{V*,P}^i(w, r) denote the distribution of the part of view_{V*,P}(w) formed up to the i-th sequential iteration. With this notation, we have to prove that D_M^{|w|}(w, r) = D_{V*,P}^{|w|}(w, r).

Using induction on i, we prove that D_M^i(w, r) = D_{V*,P}^i(w, r) for every 0 ≤ i ≤ |w|. The base case of i = 0 is trivial. Let i ≥ 1 and assume that

P[D_M^{i−1}(w, r) = u_{i−1}] = P[D_{V*,P}^{i−1}(w, r) = u_{i−1}]   (1)

for every value u_{i−1}. Given u_{i−1}, assume now that both D_M^{i−1}(w, r) = u_{i−1} and D_{V*,P}^{i−1}(w, r) = u_{i−1}, and under these conditions consider how the i-th components v_i = (t_i, b_i, g_i, h_i) are distributed in u_i = u_{i−1} v_i according to D_M^i(w, r) and D_{V*,P}^i(w, r). We will show that

P[D_M^i(w, r) = u_{i−1} v_i | D_M^{i−1}(w, r) = u_{i−1}] = P[D_{V*,P}^i(w, r) = u_{i−1} v_i | D_{V*,P}^{i−1}(w, r) = u_{i−1}]   (2)

for every value v_i. Together with (1) this will imply the identity of D_M^i(w, r) and D_{V*,P}^i(w, r). To prove (2), we will show that according to both conditional distributions v_i is uniformly distributed on the set

S = { (t, b, g, h) : t ∈ GH, b = V*(w, r, u_{i−1}, t), g ∈ G, h ∈ H, t = gsh if b = 0 and t = gh if b ≠ 0 }.

Given t and s, define the sets R(t) = {(g, h) : g ∈ G, h ∈ H, gh = t} and R_s(t) = {(g, h) : g ∈ G, h ∈ H, gsh = t}. The first claim of the following lemma appeared in [18].

Lemma 4.1 Let k = |G ∩ H|. Assume that s = g_0 h_0 with g_0 ∈ G and h_0 ∈ H.
Then the following statements are true.

1. Every t ∈ GH has k representations t = gh with g ∈ G and h ∈ H, i.e., |R(t)| = k. If t = g_1 h_1, then all other representations are

t = (g_1 f)(f^{−1} h_1),   (3)

where f ranges over the group G ∩ H.

2. For every t, the mapping α(g, h) = (g g_0, h_0 h) is one-to-one from R_s(t) to R(t).

3. Every t ∈ GH has k representations t = gsh with g ∈ G and h ∈ H, i.e., |R_s(t)| = k.

4. If φ : G × H → GH is defined by φ(g, h) = gh, then |φ^{−1}(t)| = k for every t ∈ GH.

5. If ψ : G × H → GH is defined by ψ(g, h) = gsh, then |ψ^{−1}(t)| = k for every t ∈ GH.

6. If t = gh is the product of uniformly distributed random elements g ∈ G and h ∈ H, then t is uniformly distributed on GH.

7. If a uniformly distributed random pair (g, h) ∈ G × H is conditioned on gh = t for an arbitrary fixed t ∈ GH, then (g, h) is uniformly distributed on R(t).

8. If t = gsh where g ∈ G and h ∈ H are uniformly distributed random elements, then t is uniformly distributed on GH.

9. If a uniformly distributed random pair (g, h) ∈ G × H is conditioned on gsh = t for an arbitrary fixed t ∈ GH, then (g, h) is uniformly distributed on R_s(t).

Proof. We first prove Item 1. Let e denote the identity permutation. Clearly, we have at least k representations of the form (3). On the other hand, every representation t = gh is of this form. Indeed, we have (g^{−1} g_1)(h_1 h^{−1}) = e and hence both g^{−1} g_1 and h_1 h^{−1} are simultaneously in G and in H.

To prove Item 2, observe that α is indeed from R_s(t) to R(t). The map α′(g, h) = (g g_0^{−1}, h_0^{−1} h) is easily seen to be from R(t) to R_s(t) and inverse to α. Items 1 and 2 imply Item 3, Item 3 implies Item 5, and Item 5 implies Item 8. Item 1 implies Item 4, and Item 4 implies Item 6. Items 7 and 9 are true by the definition of R(t) and R_s(t). ✷
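The counting claims of Lemma 4.1 are easy to confirm exhaustively on a small example. The following is a hypothetical brute-force check (two point stabilizers inside S_4 chosen so that k = |G ∩ H| = 2): it verifies |R(t)| = |R_s(t)| = k for every t ∈ GH, and that the map α of Item 2 carries R_s(t) bijectively onto R(t).

```python
from itertools import permutations

def compose(p, q):                      # (p∘q)(x) = p(q(x))
    return tuple(p[q[i]] for i in range(len(p)))

# Assumed example: G fixes the point 3, H fixes the point 0 (inside S_4).
G = {p for p in permutations(range(4)) if p[3] == 3}
H = {p for p in permutations(range(4)) if p[0] == 0}
k = len(G & H)                          # |G ∩ H| = 2 here

GH = {compose(g, h) for g in G for h in H}
g0, h0 = (2, 0, 1, 3), (0, 3, 1, 2)     # some g0 ∈ G, h0 ∈ H
s = compose(g0, h0)                     # so s = g0·h0 ∈ GH

for t in GH:
    R = [(g, h) for g in G for h in H if compose(g, h) == t]
    Rs = [(g, h) for g in G for h in H
          if compose(compose(g, s), h) == t]
    assert len(R) == k and len(Rs) == k          # Items 1 and 3
    # Item 2: alpha(g, h) = (g·g0, h0·h) maps R_s(t) onto R(t).
    image = {(compose(g, g0), compose(h0, h)) for g, h in Rs}
    assert image == set(R)
```

The check also confirms the consequence used in Items 4–9: since all fibers of φ and ψ have the same size k, products of uniform elements are uniform on GH.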
The distribution D_{V*,P}^i(w, r) conditioned on D_{V*,P}^{i−1}(w, r) = u_{i−1} is samplable as follows. Choose random elements g ∈ G and h ∈ H. Compute t_i = gsh and b_i = V*(w, r, u_{i−1}, t_i). If b_i = 0, set g_i = g and h_i = h; otherwise set g_i = g g_0 and h_i = h_0 h. Clearly, this distribution of (t_i, b_i, g_i, h_i) is over S. By Item 8 of Lemma 4.1, t_i is uniformly distributed on GH. If b_i = 0, then by Item 9 of Lemma 4.1, for every fixed t_i, the pair (g_i, h_i) is uniformly distributed on R_s(t_i). If b_i ≠ 0, then by Item 2 of Lemma 4.1, for every fixed t_i, the pair (g_i, h_i) is uniformly distributed on R(t_i). It follows that D_{V*,P}^i(w, r) conditioned on D_{V*,P}^{i−1}(w, r) = u_{i−1} is uniform on S.

Consider now the sampling procedure for the distribution D_M^i(w, r) conditioned on D_M^{i−1}(w, r) = u_{i−1}, as in the description of the simulator M_{V*}. Under the condition that a = 0, by Items 8 and 9 of Lemma 4.1, t_i is distributed uniformly over GH and, for every fixed value of t_i, the pair (g_i, h_i) is uniformly distributed over R_s(t_i). Under the condition that a = 1, by Items 6 and 7 of Lemma 4.1, t_i is distributed uniformly over GH and, for every fixed value of t_i, the pair (g_i, h_i) is uniformly distributed over R(t_i). This leads to an equivalent sampling procedure: choose a random t_i ∈ GH, compute b_i = V*(w, r, u_{i−1}, t_i); if b_i = 0, choose a random pair (g_i, h_i) in R_s(t_i), otherwise in R(t_i). It follows that D_M^i(w, r) conditioned on D_M^{i−1}(w, r) = u_{i−1} is uniform on S. ✷

Remark 4.1 The simulator in the proof of Theorem 4.2 is black-box, that is, for each V* it follows the same program that uses the strategy of V* as a subroutine.
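The per-stage rejection sampling performed by M_{V*} can be illustrated concretely. In the hypothetical sketch below, a fixed deterministic function verifier_bit stands in for V*'s next-message strategy (any such function would do); a stage is retried until the guessed bit a agrees with the verifier's bit b_i, and, as argued in the proof of Theorem 4.2, about 2 attempts per stage are needed on average.

```python
import random
from itertools import permutations

def compose(p, q):                      # (p∘q)(x) = p(q(x))
    return tuple(p[q[i]] for i in range(len(p)))

# Assumed tiny YES-instance with a known factorization s = g0·h0.
G = {(0, 1, 2, 3), (1, 0, 2, 3)}
H = {p for p in permutations(range(4)) if p[0] == 0}
g0, h0 = (1, 0, 2, 3), (0, 2, 3, 1)
s = compose(g0, h0)

def verifier_bit(transcript, t):
    """Stand-in for V*'s next-message function (deterministic in its inputs)."""
    return (hash((tuple(transcript), t)) >> 3) & 1

def simulate_stage(transcript, rng):
    """One stage of M_V*: retry until the guessed bit matches V*'s bit."""
    attempts = 0
    while True:
        attempts += 1
        a = rng.randrange(2)
        g, h = rng.choice(sorted(G)), rng.choice(sorted(H))
        t = compose(compose(g, s), h) if a == 0 else compose(g, h)
        b = verifier_bit(transcript, t)
        if b == a:                      # successful attempt: keep (t, b, g, h)
            return (t, b, g, h), attempts

rng = random.Random(0)
total = 0
for _ in range(2000):
    _, n = simulate_stage([], rng)      # single-stage illustration
    total += n
# The proof argues each stage needs 2 attempts on average; check roughly.
assert 1.7 < total / 2000 < 2.3
```

The key point mirrored in the code is that t is uniform on GH for both values of a (Lemma 4.1, Items 6 and 8), so the verifier's bit b is independent of a and each attempt succeeds with probability exactly 1/2.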
It should be noted that by [12] the parallel composition of the IPS in Theorem 4.1 is not zero-knowledge with a black-box simulator unless DCM is decidable in probabilistic polynomial time.

5 Future work

A natural question is whether our results can be extended to matrix groups over finite fields. One reason why this case is more complicated is that, unlike permutation groups, no efficient membership test for matrix groups is known. We intend to tackle this question in a subsequent paper.

References

[1] B. Aiello and J. Håstad. Perfect zero-knowledge languages can be recognized in two rounds. In Proc. of the 28th IEEE Ann. Symp. on Foundations of Computer Science (FOCS), pages 439–448, 1987.

[2] L. Babai. Local expansion of vertex-transitive graphs and random generation in finite groups. In Proc. of the 23rd ACM Ann. Symp. on Theory of Computing (STOC), pages 164–174, 1991.

[3] L. Babai and E. M. Luks. Canonical labeling of graphs. In Proc. of the 15th ACM Ann. Symp. on Theory of Computing (STOC), pages 171–183, 1983.

[4] L. Babai and S. Moran. Arthur-Merlin games: a randomized proof system, and a hierarchy of complexity classes. Journal of Computer and System Sciences, 36:254–276, 1988.

[5] L. Babai and E. Szemerédi. On the complexity of matrix group problems. In Proc. of the 25th IEEE Ann. Symp. on Foundations of Computer Science (FOCS), pages 229–240, 1984.

[6] M. Bellare, S. Micali, and R. Ostrovsky. The (true) complexity of statistical zero knowledge. In Proc. of the 22nd ACM Ann. Symp. on Theory of Computing (STOC), pages 494–502, 1990.

[7] M. Blum and S. Kannan. Designing programs that check their work. J. Assoc. Comput. Mach., 42(1):269–291, 1995.

[8] R. B. Boppana, J. Håstad, and S. Zachos. Does co-NP have short interactive proofs? Information Processing Letters, 25:127–132, 1987.

[9] M. L. Furst, J. Hopcroft, and E. M. Luks. Polynomial-time algorithms for permutation groups. In Proc. of the 21st IEEE Ann. Symp. on Foundations of Computer Science (FOCS), pages 36–41, 1980.

[10] M. R. Garey and D. S. Johnson. Computers and Intractability. A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979 (a Russian translation available).

[11] O. Goldreich and S. Goldwasser. On the limits of the non-approximability of lattice problems. In Proc. of the 30th ACM Ann. Symp. on Theory of Computing (STOC), pages 1–9, 1998.

[12] O. Goldreich and H. Krawczyk. On the composition of zero-knowledge proof systems. SIAM Journal on Computing, 25(1):169–192, 1996.

[13] O. Goldreich and E. Kushilevitz. A perfect zero-knowledge proof for a decision problem equivalent to Discrete Logarithm. Journal of Cryptology, 6:97–116, 1993.

[14] O. Goldreich, S. Micali, and A. Wigderson. Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems. J. Assoc. Comput. Mach., 38(3):691–729, 1991.

[15] O. Goldreich, A. Sahai, and S. Vadhan. Honest-verifier statistical zero-knowledge equals general statistical zero-knowledge. In Proc. of the 30th ACM Ann. Symp. on Theory of Computing (STOC), pages 399–408, 1998.

[16] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186–208, 1989.

[17] S. Goldwasser and M. Sipser. Private coins versus public coins in interactive proof systems. In Proc. of the 18th ACM Ann. Symp. on Theory of Computing (STOC), pages 59–68, 1986.

[18] C. Hoffmann. Group-Theoretic Algorithms and Graph Isomorphism, volume 136 of Lecture Notes in Computer Science. Springer-Verlag, 1982.

[19] C. Hoffmann. Subcomplete generalizations of Graph Isomorphism. Journal of Computer and System Sciences, 25:332–359, 1982.

[20] J. Köbler, U. Schöning, and J. Torán. Graph Isomorphism is low for PP. In Symposium on Theoretical Aspects of Computer Science, volume 577 of Lecture Notes in Computer Science, pages 401–411. Springer-Verlag, 1992.

[21] E. M. Luks. Isomorphism of graphs of bounded valence can be tested in polynomial time. Journal of Computer and System Sciences, 25:42–65, 1982.

[22] E. M. Luks. A result cited in: L. Babai. Automorphism groups, isomorphism, reconstruction. Handbook of Combinatorics, Ch. 27, pages 1447–1540. Elsevier Publ., 1995.

[23] T. Okamoto. On relationships between statistical zero-knowledge proofs. In Proc. of the 28th ACM Ann. Symp. on Theory of Computing (STOC), pages 649–658, 1996.

[24] U. Schöning. Graph isomorphism is in the low hierarchy. In Proceedings of STACS'87, volume 247 of Lecture Notes in Computer Science, pages 114–124. Springer-Verlag, New York/Berlin, 1987.

[25] C. C. Sims. Some group theoretic algorithms. Volume 697 of Lecture Notes in Computer Science, pages 108–124. Springer-Verlag, Berlin, 1978.
