Game interpretation of Kolmogorov complexity

Andrej A. Muchnik*, Ilya Mezhirov, Alexander Shen, Nikolay Vereshchagin

Abstract

The Kolmogorov complexity function K can be relativized using any oracle A, and most properties of K remain true for the relativized versions K^A. In Section 1 we provide an explanation for this observation by giving a game-theoretic interpretation and showing that all "natural" properties are either true for all K^A or false for all K^A if we restrict ourselves to sufficiently powerful oracles A. This result is a simple consequence of Martin's determinacy theorem, but its proof is instructive: it shows how one can prove statements about Kolmogorov complexity by constructing a special game and a winning strategy in this game.

1 Game interpretation

Consider all functions defined on the set of binary strings and having non-negative integer values, i.e., the set F = N^{{0,1}*}. Let α be a property of such a function (i.e., a subset of F). We say that α is O(1)-stable if f_1 ∈ α ⇔ f_2 ∈ α for any two functions f_1, f_2 ∈ F such that f_1(x) = f_2(x) + O(1), i.e., the difference |f_1(x) − f_2(x)| is bounded.

Let A be an oracle (a set of strings). By K^A(x) we denote the Kolmogorov complexity of a string x relativized to the oracle A, i.e., the length of the shortest description of x if the decompressor is allowed to use A as an oracle. (See [3] or [10] for more details; we may use either plain complexity (usually denoted by C or KS) or prefix complexity (usually denoted by K or KP), though the game interpretation would be slightly different; see below.) For a given A the function K^A is defined up to an O(1) additive term; therefore an O(1)-stable property α is well defined for K^A (it does not depend on the specific version of K^A). So α(K^A) becomes a property of the oracle A. It may be true for some oracles and false for others.
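To make the definition concrete, here is a small illustration (our own toy example, not from the paper): a property that is O(1)-stable versus one that is not, checked on functions restricted to a finite set of strings. The functions, the domain, and the bound 100 on the constant are all arbitrary choices for the sketch.

```python
# Toy illustration of O(1)-stability on a finite domain.

def shifted(f, c):
    """Return f shifted by the constant c (an O(1) perturbation)."""
    return lambda x: f(x) + c

# alpha1: "f(x) <= |x| + c for some constant c" is O(1)-stable,
# since adding a bounded term to f only changes the constant c.
def alpha1(f, domain):
    return any(all(f(x) <= len(x) + c for x in domain) for c in range(100))

# alpha2: "f(x) is even for all x" is NOT O(1)-stable:
# shifting f by 1 flips the truth value.
def alpha2(f, domain):
    return all(f(x) % 2 == 0 for x in domain)

domain = ["", "0", "1", "00", "01", "10", "11"]
f = lambda x: 2 * len(x)  # a sample function in F

print(alpha1(f, domain), alpha1(shifted(f, 5), domain))  # stable: same value
print(alpha2(f, domain), alpha2(shifted(f, 1), domain))  # unstable: flips
```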
For example, if w_n is the n-bit prefix of Chaitin's random real Ω, the (O(1)-stable) property "K^A(w_n) > 0.5n + O(1)" is true for the trivial oracle A = 0 and false for A = 0′. The following result (a special case of a classical result of D. Martin [4]; see the discussion below) shows, however, that for "usual" α the property α(K^A) is either true for all sufficiently large A or false for all sufficiently large A.

* The game interpretation considered in this paper was suggested by Andrej Muchnik (24.02.1958–18.03.2007) in his talks at the Kolmogorov seminar (Moscow Lomonosov University). The examples were added and the text was prepared by I. Mezhirov (mezhirov@gmail.com, University of Kaiserslautern), A. Shen (alexander.shen@lif.univ-mrs.fr, LIF Marseille, CNRS & University Aix–Marseille, on leave from IITP RAS, Moscow) and N. Vereshchagin (nikolay.vereshchagin@gmail.com, Moscow State Lomonosov University), who are responsible for all errors and omissions. Preparation of this paper was supported in part by the ANR Sycomore and NAFIT ANR-08-EMER-008-0[1,2] grants and the RFBR 09-01-00709 grant.

Proposition. Let α be a Borel property. Then there exists an oracle A_0 such that either α(K^A) is true for all A ≥_T A_0 or α(K^A) is false for all A ≥_T A_0.

Here ≥_T stands for Turing reducibility. The statement is true for different versions of complexity (plain complexity, prefix complexity, decision complexity, a priori complexity, monotone complexity, etc.). We provide the proof for plain complexity C and then describe the changes needed for other versions.

Proof. Consider the following infinite game with full information. Two players, called (as usual) Alice and Bob, enumerate the graphs of two functions A and B respectively; arguments and values of A and B are binary strings.
The players' moves alternate; at each move a player may add finitely many pairs to the graph of her/his function but cannot delete the pairs that are already there (so the values of A and B that are already defined remain unchanged).

The winner is declared as follows. Let K_A and K_B be the complexity functions that correspond to decompressors A and B, i.e., K_A(x) = min{ l(p) | A(p) = x }, where l(p) stands for the length of p; the function K_B is defined in a similar way. Let us agree that Alice wins if the function K(x) = min(K_A(x), K_B(x)) satisfies α. If not, Bob wins. (A technical correction: the functions K_A and K_B may have infinite values; we assume that α is somehow extended to such functions, e.g., is false for all functions with infinite values.)

Lemma. If Alice has a computable winning strategy in this game, then α(C) is true for the (plain) complexity function C; if Bob has a computable winning strategy, then α(C) is false.

Proof of the Lemma is straightforward. Assume that Alice has a computable winning strategy. Let her use this strategy against the enumeration of the graph of the optimal decompressor function (so K_B(x) = C(x) for all x). Note that in fact Bob ignores the moves of Alice and enumerates the graph of B at his own pace. Since both players use computable strategies, the game is computable. Therefore K_A ≤ K_B + O(1) due to the optimality of B, and min(K_A(x), K_B(x)) = K_B(x) + O(1) = C(x) + O(1). Since Alice wins and α is O(1)-stable, the function C has property α. The same argument (with the roles of Alice and Bob exchanged) can be used if Bob has a winning strategy. □

The statement and the proof of the lemma can be relativized: if Alice/Bob has a winning strategy that is A-computable for some oracle A, then α(C^A) is true/false.
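The complexity functions in the lemma can be sketched directly. The following toy computation (ours; the two finite decompressor graphs are made up for illustration) shows K_A(x) = min{ l(p) | A(p) = x } and the pointwise minimum min(K_A, K_B) used in the winning condition, including the infinite values mentioned in the technical correction.

```python
import math

# Complexity function induced by a finite (partial) decompressor graph,
# and the combination min(K_A, K_B) from the game's winning condition.

def K(graph, x):
    """K_graph(x) = min{ len(p) : graph maps p to x }; infinite if no such p."""
    lengths = [len(p) for p, v in graph.items() if v == x]
    return min(lengths) if lengths else math.inf

# Two finite decompressors, enumerated as sets of pairs p -> x.
A = {"0": "abba", "10": "ab"}
B = {"": "ab", "111": "abba"}

for x in ["ab", "abba", "zzz"]:
    print(x, K(A, x), K(B, x), min(K(A, x), K(B, x)))
```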
Now recall Martin's theorem on the determinacy of Borel games: the winning condition of the game described is a Borel set (since α has this property), so either Alice or Bob has a winning strategy in the game. So if the oracle A is powerful enough (is above the strategy in the hierarchy of T-degrees), the property α(K^A) is true (if Alice has a winning A-computable strategy) or false (if Bob has a winning A-computable strategy). The theorem is proven. □

2 Discussion

Let us make several remarks.

• As we have said, this proposition is a consequence of an old general result proved by Martin. The Lemma on p. 688 of [4], together with Borel determinacy [5, 6], guarantees that for every Borel Turing-invariant property Φ of infinite binary sequences either Φ is true for all sequences in some upper cone (in the semilattice of degrees), or Φ is false for all sequences in some upper cone. It remains to note that the property ϕ(A) = α(K^A) is a Turing-invariant Borel property. The proof in [4] uses a different (and simpler) game: two players alternate adding bits to a sequence, and the referee checks whether the resulting infinite sequence satisfies Φ. The advantage of our game is that it is more tailored to the definition of Kolmogorov complexity and therefore can be used as a prototype of the games needed to prove some specific statements about Kolmogorov complexity.

• Note that not all theorems of algorithmic information theory are O(1)-stable. For example, most of the results about the algorithmic properties of the complexity function are not stable. (The non-computability of the complexity function, or its upper semicomputability, is not a stable property, while the non-existence of a nontrivial computable lower bound is stable. Also, the Turing-completeness of C is a non-stable assertion, though the stronger claim "any function that is O(1)-close to C can be used as an oracle to decide the halting problem" is stable.)
The other assumption (being a Borel property) seems less restrictive: it is hard to imagine a theorem about Kolmogorov complexity where the property in question would not be a Borel one by construction.

• One may ask whether the statement of our theorem can be used as a practical tool to prove properties of Kolmogorov complexity. The answer is yes and no at the same time. Indeed, it is convenient to use some kind of game while proving results about Kolmogorov complexity, and usually the argument goes in the same way: we let the winning strategy play against the "default" strategy of the opponent, and the fact that the winning strategy wins implies the statement in question. However, it is convenient to consider more special games. For example, proving the inequality C(x, y) ≥ C(x) + C(y|x) − O(log n) (for strings x and y of length at most n), we would consider a game where Alice wins if K_B(x, y) < k + l implies that either K_A(x) < k + O(log n) or K_A(y|x) < l + O(log n) for every n, k, l and for all strings x, y of length at most n.

This example motivates the following version of the main theorem. Let α be a property of two functions in F, i.e., a subset of F × F. Assume that α is monotone in the following sense: if α(f, g) is true, f′(x) ≤ f(x) + O(1), and g′(x) ≥ g(x) − O(1), then α(f′, g′) is true, too. Consider the version of the game where Alice wins if α(K_A, K_B) is true. If Alice has a computable winning strategy, then α(C, C) is true; if Bob has a computable winning strategy, then α(C, C) is false. (The proof remains essentially the same.)

We provide several examples where the game interpretation is used to prove statements about Kolmogorov complexity in Section 3; other examples can be found in [9] and in the survey [12].
• Going in the other direction, one would like to extend this result to arbitrary results of computability theory, not necessarily related to Kolmogorov complexity. One of the results (Martin's theorem) was already mentioned. An even more general (in some sense) extension is discussed in [8].

• It is easy to modify the proof to cover different versions of Kolmogorov complexity. For example, for prefix complexity we may consider prefix-stable decompressors, where F(p) = x implies F(p′) = x for every p′ that has prefix p; similar modifications work for monotone and decision complexity. For a priori complexity the players should specify lower approximations to a semimeasure.

• One may change the rules of the game and let Alice and Bob directly provide the upper bounds K_A and K_B instead of enumerating graphs for A and B. Initially K_A(x) = K_B(x) = +∞ for every x; at each step a player may decrease finitely many values of the corresponding function. The restriction (which goes back to Levin [2]) is that for every n there are at most 2^n strings x such that K_A(x) < n (and the same restriction for K_B). This approach works for prefix and decision complexities (but not for the monotone one).

3 Examples

Conditional complexity and total programs

Let x and y be two strings. The conditional complexity C(x|y) of x when y is known can be defined as the length of the shortest program that transforms y into x (assuming the programming language is optimal). What if we require this program to be total (i.e., defined everywhere)? It turns out that this requirement can change the situation drastically: there exist two strings x and y of length n such that C(x|y) = O(log n), but any total program that transforms y to x has complexity (and length) n − O(log n). (Note that a total program that maps everything to x has complexity at most n + O(1), so the bound is quite tight.)
To prove this statement, we use the following game. Fix some n. We enumerate the graph of some function f: B^n → B^n (at each move we add some pairs to that graph). The opponent enumerates a list of at most 2^n − 1 total functions g_1, g_2, ... (at each move the opponent may add some functions to this list). We win the game if there exist strings x, y ∈ B^n such that f(y) = x but g_i(y) ≠ x for all i.

Why we can win this game: First we choose some x and y and declare that f(y) = x. After every (non-trivial) move of the opponent we choose some y where f is still undefined and declare f(y) = x, where x is different from the currently known g_1(y), g_2(y), .... The number of the opponent's moves is less than 2^n; therefore an unused y still exists (we use only one point for every move of the opponent), and a value x different from all g_i(y) exists.

Why the statement is true: Let us use our strategy against the following opponent strategy: enumerate all total functions B^n → B^n that have complexity less than n. (Each function is considered here as the list of its values.) This strategy is computable (given n), and therefore the game is computable. Therefore, for the winning pair (x, y) we have C(x|y) = O(log n), since n is enough to describe the process and therefore to compute the function f. On the other hand, any total function that maps y to x has complexity n − O(1); otherwise the list of its values would appear in the enumeration.

So if we define the total conditional complexity of x given y as the length of the shortest program for a total function that maps y to x, we get a (non-computable) upper bound for C(x|y) that sometimes differs significantly from it: the total conditional complexity can be about n while C(x|y) is O(log n) (for strings x and y of length n). The conditional complexity defined in this way was also considered by Bruno Bauwens [1] (who used a different notation).
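The winning strategy above can be simulated directly for a small n. This is our own sketch, not the paper's construction: the opponent here adds random total functions rather than enumerating programs of complexity below n, but the counting argument (fewer than 2^n opponent moves, so a fresh y and an unused value always exist) is exactly the one in the proof.

```python
import itertools
import random

# Simulate our strategy in the total-functions game for n = 3.
n = 3
strings = ["".join(b) for b in itertools.product("01", repeat=n)]  # B^n
f = {}          # our partial function, enumerated pair by pair
opponent = []   # opponent's growing list of total functions g_i

random.seed(0)
f[strings[0]] = strings[0]   # initial move: declare f(y) = x for some x, y
for _ in range(2 ** n - 1):  # the opponent makes fewer than 2^n moves
    opponent.append({y: random.choice(strings) for y in strings})
    # our reply: pick a fresh y and a value differing from all known g_i(y)
    y = next(s for s in strings if s not in f)
    used = {g[y] for g in opponent}
    f[y] = next(s for s in strings if s not in used)

# Winning condition: some pair f(y) = x is missed by every g_i.
assert any(all(g[y] != x for g in opponent) for y, x in f.items())
print("we win")
```

The pair added after the opponent's last move is guaranteed to be missed by all g_i, so the final assertion holds no matter how the opponent plays.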
Extracting randomness requires Ω(log n) additional bits

Let us consider a question that can be viewed as a Kolmogorov-complexity version of randomness extraction (though the similarity is superficial). Assume that a string x is "weakly random" in the following sense: its complexity is high (at least n) but still can be much smaller than its length, which is polynomial in n. We want to "extract" randomness out of x, i.e., to get a string y such that y is random (= incompressible: its length is close to its complexity) using few additional bits, i.e., C(y|x) should be small. When is this possible?

The natural approach: take the shortest program for x as y. Then y is indeed incompressible (C(y) = l(y) + O(1); here l(y) stands for the length of y). And the complexity of y when x is known is O(log n): knowing x and the length of a shortest program for x, we can find (at least some) shortest program for x. Taking the first n bits of this shortest program, we get a string of length n, of complexity n + O(log n), and of O(log n) conditional complexity relative to x.

What if we put a stronger requirement and require C(y|x) to be O(1) or o(log n)? It turns out that "randomness extraction" in this stronger sense is not always possible: there exists a string x of length n² that has complexity at least n such that every string y of length n that has conditional complexity C(y|x) less than 0.5 log n has unconditional complexity O(log n) (i.e., is highly compressible). (The same result is true for all strings y of length less than n, so we cannot extract even n/2 "good random bits" using o(log n) advice bits.)

To prove this statement, consider the following game. There are two sets L = B^n (the "left part") and R = B^{n²} (the "right part"). The opponent at each move may choose two elements l ∈ L and r ∈ R and add an edge between them (declaring l to be a "neighbor" of r).
The restriction is that every element of R should have at most d = ⌈√n⌉ neighbors. We may mark some elements of L as "simple". We win if there are at least 2^n elements of R that have the following property: all their neighbors are marked.

Why the statement is true if we can win the game (using a computable strategy): Let the opponent declare x ∈ L to be a neighbor of y ∈ R if C(x|y) < 0.5 log n. Then every y has at most d neighbors. The process is computable, so the game can be effectively simulated. Therefore, all x marked as "simple" indeed have complexity O(log n), since each such x can be described by n and its ordinal number in the enumeration of simple elements (the latter requires 0.5 log n bits). Among the 2^n elements of R that have the winning property there is one that has complexity at least n, and this is exactly what we claimed.

How to win the game: We do nothing while there are 2^n (or more) elements of R that have no neighbors in L (since this implies the required property). After 2^{n²} − 2^n elements get neighbors in L, we mark the neighbor that is used most often. It is a neighbor of at least (2^{n²} − 2^n)/2^n ≥ 2^{n²−n−1} > 2^{n²−2n} elements of R, and we restrict our attention to these "selected" elements, ignoring all other elements of R. Then we do nothing while at least 2^n of the selected elements have no second neighbor. After that we mark the most used second neighbor and have at least (2^{n²−2n} − 2^n)/2^n > 2^{n²−4n} elements that have two marked neighbors. In this way we either wait indefinitely at some step (and in this case we have at least 2^n elements that have only marked neighbors) or finally get 2^{n²−2dn} > 2^n elements that have d marked neighbors and therefore cannot have non-marked ones, so we win.
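The counting behind the strategy loses at most a factor of 2^{2n} per marking round, and after d = ⌈√n⌉ rounds more than 2^n elements must survive. A quick numeric sanity check (our own, and only for n ≥ 8, where these asymptotic inequalities kick in):

```python
import math

# Sanity-check the counting in the marking strategy: one round keeps at
# least a 2^{-2n} fraction of R, and after d = ceil(sqrt(n)) rounds the
# number of survivors 2^{n^2 - 2dn} still exceeds 2^n.
for n in range(8, 40):
    d = math.ceil(math.sqrt(n))
    # one round: (2^{n^2} - 2^n) / 2^n  >  2^{n^2 - 2n}
    assert (2 ** (n * n) - 2 ** n) // 2 ** n > 2 ** (n * n - 2 * n)
    # after d rounds: exponent n^2 - 2dn still exceeds n
    assert n * n - 2 * d * n > n
print("counting checks pass for n = 8..39")
```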
Note that we could change the game, allowing the opponent to declare 2^n elements of R as simple and requiring in the winning condition that there be a non-simple element of R that has no non-simple neighbors. This would make the game closer to the original statement about Kolmogorov complexity but a bit more complicated. This example is adapted from [11].

The complexity of a bijection

For any two strings x and y one may look for a shortest program for a bijective function that maps x to y. Evidently, it is not shorter than a shortest program for a total function that maps x to y; therefore we get a lower bound C(y|x) − O(1) for the length (and complexity) of such a program. Since a bijection can be effectively reversed, the bound can be made symmetric, and we conclude that the length of a program for a bijection that maps x to y is at least max(C(x|y), C(y|x)) − O(1).

What about upper bounds? Imagine there exists a simple total function that maps x to y and another simple total function that maps y to x. Can we guarantee that there exists a simple bijective total function that maps x to y? To simplify the discussion, let us assume that x and y are of length n, the bijection should be length-preserving, and n is known (used as a condition in all the complexities).

This question corresponds to a game. Our opponent produces some total functions f_1, f_2, ...: B^n → B^n and g_1, g_2, ...: B^n → B^n, claiming that one of the f_i maps x to y and one of the g_j maps y to x. Knowing these functions (but not x, y), we have to produce bijections h_1, h_2, ...: B^n → B^n and guarantee that one of them maps x to y. (More precisely, the opponent wins if there exist x, y, i and j such that f_i(x) = y and g_j(y) = x but h_k(x) ≠ y for all k.) The question now is: how many bijections do we need to beat an opponent that can produce at most m functions of each type?
At first it seems that m bijections are enough. Indeed, let us consider a bipartite graph where x and y are connected by an edge if f_i(x) = y and g_j(y) = x for some i and j. This graph has degree at most m on both sides (e.g., x can be connected only to f_1(x), ..., f_m(x)). Every bipartite graph where each vertex has degree at most m and both parts are of the same size can be covered by m bijection graphs (we add edges to make all degrees exactly m and then use Hall's criterion for matchings). This argument, if correct, would imply the upper bound max(C(x|y), C(y|x)) + O(log n) for the minimal complexity of a program that computes a bijection mapping x to y. (Here O(log n) is added to take into account that we need to know n for all our constructions.) Indeed, let the opponent enumerate all the total functions B^n → B^n that have complexity at most u = max(C(x|y), C(y|x)). It is a computable process that involves at most 2^u functions. Beating this strategy of the opponent, we computably generate at most 2^u bijections (as we have assumed), and each bijection can be encoded by its ordinal number (at most u bits) and n (this requires O(log n) bits). The winning condition guarantees that one of these bijections maps x to y.

However, this argument (and the result itself) is wrong. The problem is that the opponent does not tell us all its mappings at once but gives them one by one, and we have to react immediately (otherwise we lose if the opponent does not do anything else). So we need to repeat this procedure after each move of the opponent, which gives Θ(m²) bijections if the opponent makes m moves. And this bound can be achieved by a much simpler strategy: for every f_i and g_j consider a bijection h_ij that extends the partial matching x ↔ y ⇔ [f_i(x) = y and g_j(y) = x]. This strategy gives the upper bound C(x|y) + C(y|x) + O(log n).
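The h_ij construction can be sketched concretely (our own toy example; the functions f1 and g1 below are made up for a domain with n = 2). The partial matching {x ↔ f_i(x) : g_j(f_i(x)) = x} is automatically injective, so it always extends to a full bijection on B^n.

```python
import itertools

# Extend the partial matching x <-> y, where f_i(x) = y and g_j(y) = x,
# to a full bijection h_ij on the domain.
def extend_to_bijection(f_i, g_j, domain):
    match = {x: f_i[x] for x in domain if g_j[f_i[x]] == x}
    # The matching is injective: g_j sends each matched y back to its x.
    free_x = [x for x in domain if x not in match]
    free_y = [y for y in domain if y not in set(match.values())]
    match.update(zip(free_x, free_y))  # complete it arbitrarily
    return match

n = 2
domain = ["".join(b) for b in itertools.product("01", repeat=n)]
f1 = {"00": "11", "01": "01", "10": "00", "11": "10"}
g1 = {"11": "00", "01": "10", "00": "10", "10": "11"}
h = extend_to_bijection(f1, g1, domain)

assert sorted(h.values()) == sorted(domain)  # h is a bijection
for x in domain:                             # h agrees with the matching
    if g1[f1[x]] == x:
        assert h[x] == f1[x]
print(h)
```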
The main point of this example is that game arguments work in both directions: the absence of a winning strategy for us (and the existence of a winning strategy for the opponent) implies that the upper bound we wanted to prove is not true at all. For example, a winning strategy in our game (for us) exists only if the number of our bijections is Ω(m²), where m is the maximal number of the opponent's moves. This can be shown as follows. Let us assume that all the opponent's functions are constant functions (i.e., map all the elements of B^n to one element). In other terms, the opponent just selects vertices on both sides of the graph, and our goal is to provide bijections between each pair of selected vertices. It is easy to see that we would need Ω(m²) bijections: indeed, if the opponent at each move selects a vertex that is not yet connected to the vertices selected earlier (which is always possible if the number of vertices is large compared to m²), then we need Ω(m) new bijections to provide these new connections.

Translating this observation into Kolmogorov complexity language, we get the following statement: for every k and n such that n > 2k there exist two strings x and y of length n such that C(x), C(y) ≤ k + O(log n) but any bijection that maps x to y has complexity 2k − O(1). To show this, use the trivial strategy on our side (we list all programs of length less than 2k that turn out to define a bijection B^n → B^n; this property is enumerable) and let the opponent use the winning strategy described above (choosing elements not connected to already chosen elements by known bijections; the inequality n > 2k guarantees that Ω(2^k) steps are possible, since (2^k)² = 2^{2k} < 2^n). All chosen elements have complexity at most k + O(log n), and by the winning condition some of them are not connected by a bijection of complexity less than 2k.
Contrasting prefix and plain complexity

Here we give a game-based proof of J. Miller's result [7]. (The original proof in [7] uses a different scheme and involves the Kleene fixed-point theorem.) Let Q be a co-enumerable set of strings (i.e., a set whose complement is enumerable) that for every n contains at least one string of length n. Then for every c there exist n and x ∈ Q of length n such that K(x) < n + K(n) − c. Here K stands for prefix complexity; the contrast with plain complexity arises because for plain complexity the set of incompressible strings (strings that have the maximal possible complexity) is co-enumerable. (Note also that the maximal value of K(x) for strings of length n is n + K(n) + O(1).)

To prove this statement, let us consider the following game, specified by a natural number C and a finite family of disjoint finite sets S_1, ..., S_N. During the game each element s ∈ S = ∪_{j=1}^N S_j is labeled by two non-negative rational numbers A(s) and B(s), called "Alice's weight" and "Bob's weight". Initially all weights are zeros. Alice and Bob make alternate moves. On each move a player may increase her/his weights of several elements s ∈ S. Both players must obey the following total weight restrictions: ∑_{s∈S} A(s) ≤ 1 and ∑_{s∈S} B(s) ≤ 1. In addition, Bob must be "fair": for every j, Bob's weights of all s ∈ S_j must be equal. That means that basically Bob assigns weights to j ∈ {1, ..., N}, and Bob's weight B(j) of j is then evenly distributed among all s ∈ S_j, so that B(s) = B(j)/#S_j for all s ∈ S_j. Alice need not be fair. This extra requirement is somehow compensated by allowing Bob to "disable" certain s ∈ S. Once an s is disabled, it cannot be "enabled" any more. Alice cannot disable or enable anything. For every j, Bob is not allowed to disable all s ∈ S_j: every set S_j should contain at least one element that is enabled (= not disabled).
The game is infinite. Alice wins if at the end of the game (or, better to say, in the limit) there exists an enabled s ∈ S such that A(s)/B(s) ≥ C.

Now we have (as usual) to explain two things: why Alice has a (computable) winning strategy in the game (with some assumptions on the parameters of the game) and why this implies Miller's theorem.

Lemma. Alice has a computable winning strategy if N ≥ 2^{8C} and #S_j ≥ 8C for all j ≤ N.

Let us show first why this statement implies the theorem. Let C = 2^c and N = 2^{8C} = 2^{2^{c+3}}. Let us take the sets of all strings of length log 8C + 1, ..., log 8C + N as S_1, ..., S_N. Then S_j consists of 2^j · 8C elements; the conditions of the lemma are satisfied, and hence Alice has a computable winning strategy.

Consider the following strategy for Bob in this game: he enumerates the complement of Q and disables all its elements; in parallel, he approximates the prefix complexity from above, and once he finds out that K(n) does not exceed some l, he increases the weights of all 2^n strings of length n up to 2^{−l−n}. Thus at the end of the game B(s) = 2^{−K(n)−n} for all s ∈ S that have length n (i.e., for s ∈ S_j, where j = n − log 8C).

Alice's limit weight function s ↦ A(s) is lower semi-computable given c, as both Alice's and Bob's strategies are computable given c. Therefore (since prefix complexity is equal to the minus logarithm of a priori probability) K(s|c) ≤ −log A(s) + O(1) for all s ∈ S. As Alice wins, there exists a string s ∈ Q of some length n ≤ N + log 8C such that A(s)/B(s) ≥ C, i.e., −log A(s) ≤ −log B(s) − c = K(n) + n − c. This implies that K(s|c) ≤ K(n) + n − c + O(1), and K(s) ≤ K(n) + n − c + 2 log c + O(1). This is a bit weaker than the statement we need: we wanted K(s) < K(n) + n − c. To fix this, apply this argument to c′ = c + 3 log c in place of c.
For all large enough c we then have K(s) < K(n) + n − c. It remains to prove the Lemma by showing a winning strategy for Alice.

Proof of the Lemma. The strategy is rather straightforward. The main idea is that, playing with one S_i, Alice can force Bob to spend twice more weight than she does. Then she switches to the next S_i, and so on, until Bob's weight is exhausted while she still has solid reserves. To achieve her goal on one set of M elements, Alice assigns sequentially the weights 1/2^M, 1/2^{M−1}, ..., 1/2^1, and after each move waits until Bob increases his weight or disables the corresponding element. Since he cannot disable all elements and is forced to use the same weights for all elements, while Alice puts more than half of her weight on the last element, Alice has a factor of M/2 as a handicap, and we may assume that M beats the factor of C that Bob has in his favor.

Now the formal details. Assume first that #S_j = M = 4C for all j and N = 2^M. (We will show later how to adjust the proof to the case where #S_j ≥ 8C and N ≥ 2^{8C}.) Alice picks an element x_1 ∈ S_1 and assigns the weight 1/2^M to x_1. Bob (to avoid losing the entire game) has either to assign a weight of more than 1/(C·2^M) to all elements of S_1, or to disable x_1. In the second case Alice picks another element x_2 ∈ S_1 and assigns a (twice bigger) weight of 2/2^M to it. Again Bob has a dilemma: either to increase the weight of all elements of S_1 up to 2/(C·2^M), or to disable x_2. In the second case Alice picks x_3, assigns a weight of 4/2^M to it, and so on. (If this process continues long enough, the last weight would be 2^{M−1}/2^M = 1/2.) As Bob cannot disable all the elements of S_1, at some step i the first case occurs, and Bob assigns a weight greater than 2^{i−1}/(C·2^M) to all the elements of S_1. Then Alice stops playing with S_1.
Note that Alice's total weight on S_1 (let us call it β) is the sum of a geometric sequence: β = 1/2^M + 2/2^M + ... + 2^{i−1}/2^M < 2^i/2^M ≤ 1. Thus Alice obeys the rules. Note also that Bob's total weight on S_1 is more than M·2^{i−1}/(C·2^M) = 2^{i+1}/2^M, which exceeds Alice's total weight spent on S_1 at least two times. This implies, in particular, that Bob cannot beat Alice's weight for the last element if the game comes to this stage (and Alice wins the game in this case).

Then Alice proceeds to the second set S_2 and repeats the procedure. However, this time she uses the weights α/2^M, 2α/2^M, ..., where α = 1 − β is the weight still available to Alice. Again she forces Bob to use twice more weight than she does. Then Alice repeats the procedure for the third set S_3 with the remaining weight, etc.

Let β_j be the total weight Alice has spent on the sets S_1, ..., S_j, and α_j = 1 − β_j the weight remaining after the first j iterations. By construction, Bob's total weight spent on the sets S_1, ..., S_j is greater than 2β_j, so we have 2β_j < 1 and hence α_j > 1/2. Consequently, Alice's total weight on each S_j is more than 1/2^{M+1}. Hence after at most N = 2^M iterations Alice wins.

If the sizes of the S_j are large but different, we need to make some modification. (We cannot use the same approach starting with 1/2^M, where M is the size of the set: if Bob beats the first element with factor C, he spends twice more weight than Alice but still a small amount, so we do not have enough sets for a contradiction.) However, the modification is easy. If the number of elements of S_j is a multiple of 4C (which is the case we use), we can split the elements of S_j into 4C groups of equal size and treat all members of each group G as one element.
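The single-set analysis above is easy to simulate (our own toy sketch, with made-up parameters C = 4, M = 16; the Bob modeled here surrenders at an arbitrary step rather than playing optimally). Whatever step Bob picks to stop disabling and pay, his total weight on S_1 exceeds twice Alice's, exactly as the geometric-sum estimate predicts.

```python
from fractions import Fraction

# Alice's doubling strategy on one set S_1 with M = 4C elements: Bob
# disables for a while, then gives up and matches Alice on all of S_1.
C, M = 4, 16  # M = 4C
for surrender_step in range(M):
    # Alice's weights up to the step where Bob pays: 1/2^M, 2/2^M, ...
    alice = [Fraction(2 ** i, 2 ** M) for i in range(surrender_step + 1)]
    # Bob's per-element weight must exceed A(x_i)/C to keep the ratio < C
    bob_per_element = alice[-1] / C + Fraction(1, 2 ** (2 * M))
    alice_total, bob_total = sum(alice), M * bob_per_element
    assert alice_total < 1               # Alice stays within her budget
    assert bob_total > 2 * alice_total   # Bob spends more than twice as much
print("Bob overpays by a factor > 2 at every possible surrender step")
```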
This means that if the above algorithm asks to assign to an "element" (group) G a weight w, Alice distributes the weight w uniformly among the members of G and waits until either Bob disables all elements of the group or assigns the corresponding weight to all elements of S_j. If #S_j is not a multiple of 4C, the groups cannot all be of equal size (the worst case is when some groups have one element while others have two), so to compensate for this we need to use 8C instead of 4C. Note that an excess in the number of groups (when it is bigger than the required 8C) does not matter at all; we just ignore some groups. □

Note that this proof also provides some bound for n (the length of the string); this bound is (almost) the same as the one given in Theorem 6.1 in [7]. Note also that instead of classifying strings according to their length, we could split them (effectively) into arbitrary finite sets G_n whose cardinalities monotonically increase and are unbounded. Then for every string x ∈ G_n we have K(x) ≤ log #G_n + K(n) + O(1), and for every co-enumerable set Q that intersects every G_n there exist n and x ∈ G_n ∩ Q such that K(x) ≤ log #G_n + K(n) − c (for the same reasons).

References

[1] Bruno Bauwens, personal communication.

[2] Leonid Levin, Various measures of complexity for finite objects, Soviet Math. Dokl., 17(2), p. 522–526 (1976). See http://www.cs.bu.edu/fac/lnd/dvi/vm-e.pdf for a corrected translation.

[3] Ming Li, Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed., Springer, 2008.

[4] Donald A. Martin, The axiom of determinateness and reduction principles in the analytical hierarchy, Bull. Amer. Math. Soc., 74:687–689 (1968).

[5] Donald A. Martin, Borel Determinacy, The Annals of Mathematics, 2nd Ser., 102(2):363–371 (Sept. 1975).

[6] Donald A. Martin, A purely inductive proof of Borel determinacy, Recursion theory, Proceedings of the AMS–ASL summer institute held in Ithaca, New York, 1982, p. 303–308.

[7] Joseph S. Miller, Contrasting plain and prefix-free complexities. Preprint available at http://www.math.wisc.edu/~jmiller/downloads.html

[8] Andrej Muchnik, On the basic structures of the descriptive theory of algorithms, Soviet Math. Dokl., 32, p. 671–674 (1985).

[9] Andrej Muchnik, Alexander Shen, Mikhail Ustinov, Nikolai K. Vereshchagin, Michael V. Vyugin, Non-reducible descriptions for conditional Kolmogorov complexity, Theoretical Computer Science, 384(1), p. 77–86 (2007).

[10] Alexander Shen, Algorithmic Information Theory and Kolmogorov Complexity, lecture notes of a course taught at Uppsala University. Available as a Technical Report at http://www.it.uu.se/research/publications/reports/2000-034/

[11] Nikolay Vereshchagin, Mikhail Vyugin, Independent minimum length programs to translate between given strings, Theoretical Computer Science, 271(1–2), p. 131–143 (2002).

[12] Nikolay Vereshchagin, Kolmogorov complexity and Games, Bulletin of the European Association for Theoretical Computer Science, 94, Feb. 2008, p. 51–83.
