Concave Programming Upper Bounds on the Capacity of 2-D Constraints*

Ido Tal    Ron M. Roth

Computer Science Department, Technion, Haifa 32000, Israel.
Email: idotal@ieee.org, ronny@cs.technion.ac.il

Abstract—The capacity of 1-D constraints is given by the entropy of a corresponding stationary maxentropic Markov chain. Namely, the entropy is maximized over a set of probability distributions, which is defined by some linear requirements. In this paper, certain aspects of this characterization are extended to 2-D constraints. The result is a method for calculating an upper bound on the capacity of 2-D constraints. The key steps are: the maxentropic stationary probability distribution on square configurations is considered; a set of linear equalities and inequalities is derived from this stationarity; the result is a concave program, which can be easily solved numerically. Our method improves upon previous upper bounds for the capacity of the 2-D "no isolated bits" constraint, as well as certain 2-D RLL constraints.

I. INTRODUCTION

Let Σ be a finite alphabet. A one-dimensional (1-D) constraint is a set S of words over Σ. For the set S to be called a 1-D constraint, there must exist an edge-labeled graph G with the following property: a word w = w_1 w_2 ⋯ w_n is in S iff there exists a path in G for which the successive edge labels are w_1, w_2, …, w_n (see [1]).

A two-dimensional (2-D) constraint over Σ is a generalization of a 1-D constraint; it is a set S of rectangular configurations over Σ and is defined through a pair of vertex-labeled graphs (G^row, G^col), where G^row = (V, E^row, L) and G^col = (V, E^col, L). Namely, both graphs share the same vertex set and the same vertex-labeling function L : V → Σ.
The constraint S = S(G^row, G^col) consists of all finite rectangular configurations (w_{i,j}) over Σ with the following property: let A be the rectangular index set of (w_{i,j})_{(i,j)∈A}. There exists a configuration (u_{i,j})_{(i,j)∈A} over the vertex set V such that (a) for each (i,j) ∈ A we have w_{i,j} = L(u_{i,j}); (b) each row in (u_{i,j}) is a path in G^row; (c) each column in (u_{i,j}) is a path in G^col. Examples of 2-D constraints include the square constraint [2], 2-D runlength-limited (RLL) constraints [3], 2-D symmetric runlength-limited (SRLL) constraints [4], and the "no isolated bits" constraint [5].

Let S be a given 2-D constraint over a finite alphabet Σ. Denote by Σ^{M×N} the set of M × N configurations over Σ, and let S_{M,N} = S ∩ Σ^{M×N} and S_M = S ∩ Σ^{M×M}. The capacity of S is equal to

    cap(S) = lim_{M→∞} (1/M²) · log₂ |S_M| .    (1)

In this paper, we show a method for calculating an upper bound on cap(S). Two other methods of calculating an upper bound on the capacity of a 2-D constraint are the following. The first method is the so-called "stripe method," in which we fix a positive integer N and bound cap(S) by

    cap(S) ≤ lim_{M→∞} (1/(M·N)) · log₂ |S_{M,N}| .    (2)

Namely, we consider only stripes of width N, and essentially get a 1-D constraint (since we may regard each of the possible row values as a letter in an auxiliary alphabet). The RHS of (2) is easily calculated for modest values of N: let G be the edge-labeled graph corresponding to that 1-D constraint, and let A_G be the adjacency matrix of G. Denote by λ(A_G) the Perron eigenvalue of A_G. By [1, §3.2], the RHS of (2) is equal to (1/N) · log₂ λ(A_G).

* This work was supported by grant No. 2002197 from the United-States–Israel Binational Science Foundation (BSF), Jerusalem, Israel.
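The stripe computation just described can be sketched for the 2-D (1,∞)-RLL ("hard square") constraint, in which no two 1s may be adjacent horizontally or vertically. The code below is our own illustration, not part of the paper's method: build the transfer matrix over the valid rows of width N, take its Perron eigenvalue, and normalize.

```python
import numpy as np

# Our own illustration (not from the paper) of the stripe method for the
# 2-D (1, infinity)-RLL "hard square" constraint: no two adjacent 1s,
# horizontally or vertically.
def stripe_bound(N):
    # Valid rows of width N: no two horizontally adjacent 1s.
    rows = [r for r in range(1 << N) if r & (r >> 1) == 0]
    # Adjacency matrix of the auxiliary 1-D constraint: row r may be
    # placed directly above row q iff there are no vertically adjacent 1s.
    A = np.array([[1.0 if r & q == 0 else 0.0 for q in rows] for r in rows])
    lam = max(abs(np.linalg.eigvals(A)))  # Perron eigenvalue of A_G
    return np.log2(lam) / N               # RHS of (2)

# Wider stripes give tighter upper bounds on cap(S) ~ 0.5879.
print(stripe_bound(2), stripe_bound(8))
```

For N = 2 the Perron eigenvalue is 1 + √2, a classical value for width-2 hard-square strips, so the bound can be checked in closed form.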
The second method for upper-bounding cap(S) is the generalization presented by Forchhammer and Justesen [6] to the method of Calkin and Wilf [7].

The capacity of a given 1-D constraint is known to be equal to the value of an optimization program, where the optimization is on the entropy of a certain stationary Markov chain, and is carried out over the conditional probabilities of that chain (see [1, §3.2.3]). We try to extend certain aspects of this characterization of capacity to 2-D constraints. What results is a (generally non-tight) upper bound on cap(S).

The structure of this paper is as follows. In Section II, we set up some notation. Then, in Section III, we show the existence of a certain stationary random variable taking values on S_M and having entropy approaching the capacity of S as M → ∞. We then consider a relatively small sub-configuration of that random variable, and denote it by X^(M). The section concludes with an upper bound on the capacity of S, which is a function of the probability distribution of X^(M). In Section IV, we derive a set of linear equations which hold on the probability distribution of X^(M). In Section V, we argue as follows: the bound derived in Section III is a function of the probability distribution of X^(M), which we do not know how to calculate; however, by Section IV we know that this probability distribution is subject to a set of linear requirements. Thus, we formalize an optimization problem, where the unknown probability distribution is replaced by a set of variables, subject to the above-mentioned linear requirements. The maximum of this optimization problem is an upper bound on the capacity of S. We then show that this optimization problem is easily solved, since it is an instance of concave programming. In Section VI, we show our computational results.
Finally, in Section VII we present an asymptotic analysis of our method. We note at this point that although this paper deals with 2-D constraints, our method can be easily generalized to higher dimensions as well.

II. NOTATION

This section is devoted to setting up some notation.

A. Index sets and configurations

Denote the set of integers by Z. A (2-D) index set U ⊆ Z² is a set of integer pairs. A 2-D configuration over Σ with an index set U is a function w : U → Σ. We denote such a configuration as w = (w_{i,j})_{(i,j)∈U}, where for all (i,j) ∈ U, we have that w_{i,j} ∈ Σ. In this paper, index sets will always be denoted by upper-case Greek letters or upper-case Roman letters in the sans-serif font.

Since many of our configurations will be M × N, we have set aside special notation for their index sets; let

    B_{M,N} = {(i,j) : 0 ≤ i < M, 0 ≤ j < N} .

Also, denote B_M = B_{M,M} = {(i,j) : 0 ≤ i,j < M}.

For integers α, β we denote the shifting of U by (α, β) as

    σ_{α,β}(U) = {(i+α, j+β) : (i,j) ∈ U} .

Moreover, by abuse of notation, let σ_{α,β}(w) be the shifted configuration (with index set σ_{α,β}(U)):

    σ_{α,β}(w)_{i+α,j+β} = w_{i,j} .

For a configuration w with index set U, and an index set V ⊆ U, denote the restriction of w to V by w[V] = (w[V]_{i,j})_{(i,j)∈V}; namely, w[V]_{i,j} = w_{i,j}, where (i,j) ∈ V. We denote the restriction of S to U by S[U]:

    S[U] = {w : there exists w′ ∈ S such that w′[U] = w} .    (3)

B. Strict total order

A strict total order ≺ is a relation on Z² × Z², satisfying the following conditions for all (i₁,j₁), (i₂,j₂), (i₃,j₃) ∈ Z².

• If (i₁,j₁) ≠ (i₂,j₂), then either (i₁,j₁) ≺ (i₂,j₂) or (i₂,j₂) ≺ (i₁,j₁), but not both.
• If (i₁,j₁) = (i₂,j₂), then neither (i₁,j₁) ≺ (i₂,j₂) nor (i₂,j₂) ≺ (i₁,j₁).
• If (i₁,j₁) ≺ (i₂,j₂) and (i₂,j₂) ≺ (i₃,j₃), then (i₁,j₁) ≺ (i₃,j₃).

For (i,j) ∈ Z², define T^(≺)_{i,j} as the set of all indexes preceding (i,j). Namely,

    T^(≺)_{i,j} = {(i′,j′) ∈ Z² : (i′,j′) ≺ (i,j)} .

C. Entropy

Let X and Y be two random variables. Denote p_x = Prob(X = x) and p_{y|x} = Prob(X = x, Y = y)/Prob(X = x). The entropy of X is denoted by H(X) and is equal to

    H(X) = − Σ_x p_x log p_x ,

where the sum is over all x for which Prob(X = x) is positive. Similarly, we define the conditional entropy H(Y|X) as

    H(Y|X) = − Σ_x p_x Σ_y p_{y|x} log p_{y|x} ,

where we sum over all x for which p_x is positive and all y for which p_{y|x} is positive.

III. A PRELIMINARY UPPER BOUND ON cap(S)

Let M be a positive integer and let W be a random variable taking values on S_M. We say that W is stationary if for all U ⊆ B_M, all α, β ∈ Z such that σ_{α,β}(U) ⊆ B_M, and all w′ ∈ S[U], we have that

    Prob(W[U] = w′) = Prob(W[σ_{α,β}(U)] = σ_{α,β}(w′)) .

The following is a corollary of [8, Theorem 1.4]. The proof is given in the Appendix.

Theorem 1: There exists a series of random variables (W^(M))_{M=1}^∞ with the following properties: (i) each W^(M) takes values on S_M; (ii) the probability distribution of W^(M) is stationary; (iii) the normalized entropy of W^(M) approaches cap(S):

    cap(S) = lim_{M→∞} (1/M²) · H(W^(M)) .    (4)

We now proceed towards deriving Lemma 2 below, which gives an upper bound on cap(S) and makes use of the stationarity property. We note in advance that this bound is not actually meant to be calculated. Thus, its utility will be made clear in the following sections. In order to enhance the exposition, we accompany the derivation with two running examples.
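The entropy definitions of Section II-C can be checked numerically; the helper functions below are our own (base-2 logarithms, sums restricted to positive probabilities).

```python
import math

# Helpers of our own for the entropy definitions of Section II-C.
def entropy(p):
    # p maps outcome x -> Prob(X = x); H(X) = -sum_x p_x log2 p_x
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def cond_entropy(p_xy):
    # p_xy maps (x, y) -> Prob(X = x, Y = y);
    # H(Y|X) = -sum_x p_x sum_y p_{y|x} log2 p_{y|x}
    p_x = {}
    for (x, _), q in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + q
    return -sum(q * math.log2(q / p_x[x])
                for (x, _), q in p_xy.items() if q > 0)

assert abs(entropy({0: 0.5, 1: 0.5}) - 1.0) < 1e-12      # a fair bit
assert cond_entropy({(0, 0): 0.5, (1, 1): 0.5}) < 1e-12  # Y determined by X
```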
Running Example I: Define the lexicographic order ≺_lex as follows: (i₁,j₁) ≺_lex (i₂,j₂) iff
• i₁ < i₂, or
• (i₁ = i₂ and j₁ < j₂).

Running Example II: Define the "interleaved raster scan" order ≺_irs as follows: (i₁,j₁) ≺_irs (i₂,j₂) iff
• i₁ ≡ 0 (mod 2) and i₂ ≡ 1 (mod 2), or
• i₁ ≡ i₂ (mod 2) and i₁ < i₂, or
• i₁ = i₂ and j₁ < j₂.

(See Figure 1 for both examples.)

Fig. 1. An entry labeled i in the left (right) configuration precedes an entry labeled j according to ≺_lex (≺_irs) iff i < j.

For the rest of this section, fix positive integers r and s, and define the index set Λ = B_{r,s}. We will refer to Λ as "the patch." The bound we derive in Lemma 2 will be a function of the following:
• the strict total order ≺,
• the integers r and s, which determine the order r × s of the patch Λ,
• an integer c, which will denote the number of "colors" we encounter,
• a coloring function f : Z² → {1, 2, …, c}, mapping each point in Z² to one of c colors,
• c indexes, (a_γ, b_γ)_{γ=1}^c, such that for all 1 ≤ γ ≤ c, (a_γ, b_γ) ∈ Λ (namely, each color γ has a designated point in the patch, which may or may not be of color γ).

The function f must satisfy two requirements, which we now elaborate on. Our first requirement is: for all 1 ≤ γ ≤ c,

    lim_{M→∞} |{(i,j) ∈ B_M : f(i,j) = γ}| / M² = 1/c .    (5)

Namely, as the orders of W^(M) tend to infinity, each color is equally¹ likely. Our second requirement is as follows: there exist index sets Ψ₁, Ψ₂, …, Ψ_c ⊆ Λ such that for all indexes (i,j) ∈ Z²,

    σ_{i′,j′}(Ψ_γ) = T^(≺)_{i,j} ∩ σ_{i′,j′}(Λ) ,    (6)

where γ = f(i,j), i′ = i − a_γ, and j′ = j − b_γ.
Namely, let (i,j) be such that f(i,j) = γ, and shift Λ such that (a_γ, b_γ) is shifted to (i,j). Now, consider the set of all indexes in the shifted Λ which precede (i,j): this set must be equal to the correspondingly shifted Ψ_γ.

Running Example I: Take r = 4 and s = 7 as the patch orders. Let the number of colors be c = 1. Thus, we must define f = f_lex as follows: for all (i,j) ∈ Z², f_lex(i,j) = 1. Take the point corresponding to the single color as (a₁ = 3, b₁ = 5). See also Figure 2(a).

Running Example II: As in the previous example, take r = 4 and s = 7 as the patch orders. Let the number of colors be c = 2. Define f = f_irs as follows:

    f_irs(i,j) = 1 if i ≡ 0 (mod 2), and 2 if i ≡ 1 (mod 2).

Take (a₁ = 3, b₁ = 5) and (a₂ = 2, b₂ = 4). See also Figure 2(b).

¹ In fact, it is possible to generalize (5), and require only that the limit exists for all γ. We have not found this generalization useful.

Fig. 2. The left (right) column corresponds to Running Example I (II). The configurations are of order r × s and represent the index set Λ. The • symbol is in position (a_γ, b_γ). The shaded part is Ψ_γ.

Lemma 2: Let (W^(M))_{M=1}^∞ be as in Theorem 1 and define X^(M) = W^(M)[Λ]. Let ≺, r, s, c, f, (Ψ_γ)_{γ=1}^c, and (a_γ, b_γ)_{γ=1}^c be given. For 1 ≤ γ ≤ c, define

    Υ_γ = {(a_γ, b_γ)} ∪ Ψ_γ .

Let Y_γ = X^(M)[Υ_γ] and Z_γ = X^(M)[Ψ_γ] (note that Y_γ and Z_γ are functions of M). Then,

    cap(S) ≤ lim sup_{M→∞} (1/c) Σ_{γ=1}^c H(Y_γ | Z_γ) .

Proof: Let X, W, and T_{i,j} be shorthand for X^(M), W^(M), and T^(≺)_{i,j}, respectively. First note that Y_γ = W[Υ_γ] and Z_γ = W[Ψ_γ]. We show that

    lim_{M→∞} (1/M²) H(W) ≤ lim sup_{M→∞} (1/c) Σ_{γ=1}^c H(Y_γ | Z_γ) .

Once this is proved, the claim follows from (4).
By the chain rule [9, Theorem 2.5.1], we have

    H(W) = Σ_{(i,j)∈B_M} H(W_{i,j} | W[T_{i,j} ∩ B_M]) .

We now recall (6) and define the index set ∂̄ to be the largest subset of B_M for which the following condition holds: for all (i,j) ∈ ∂̄, we have that

    σ_{i′,j′}(Ψ_γ) ⊆ B_M ,    (7)

where hereafter in the proof, γ = f(i,j), i′ = i − a_γ, and j′ = j − b_γ. Define ∂ = B_M \ ∂̄. Note that since r and s are constant, and Ψ₁, Ψ₂, …, Ψ_c ⊆ Λ, we have |∂|/M² = O(1/M). Thus, on the one hand,

    (1/M²) Σ_{(i,j)∈∂} H(W_{i,j} | W[T_{i,j} ∩ B_M]) ≤ log₂ |Σ| · O(1/M) .

On the other hand, from (6) and (7) we have that for all (i,j) ∈ ∂̄,

    σ_{i′,j′}(Ψ_γ) ⊆ T_{i,j} ∩ B_M .

Hence, since conditioning reduces entropy [9, Theorem 2.6.5],

    (1/M²) Σ_{(i,j)∈∂̄} H(W_{i,j} | W[T_{i,j} ∩ B_M])
        ≤ (1/M²) Σ_{(i,j)∈∂̄} H(W_{i,j} | W[σ_{i′,j′}(Ψ_γ)])
        = (1/M²) Σ_{(i,j)∈∂̄} H(W[{(i,j)} ∪ σ_{i′,j′}(Ψ_γ)] | W[σ_{i′,j′}(Ψ_γ)])
        = (1/M²) Σ_{(i,j)∈∂̄} H(Y_γ | Z_γ) ,

where the last step follows from the stationarity of W^(M). Recalling (5), the proof follows.

The following is a simple corollary of Lemma 2.

Corollary 3: Let (W^(M))_{M=1}^∞ be as in Theorem 1 and define X^(M) = W^(M)[Λ]. Fix positive integers r and s. Let ℓ be a positive integer, and let (ρ^⟨k⟩)_{k=1}^ℓ be non-negative reals such that Σ_{k=1}^ℓ ρ^⟨k⟩ = 1. For every 1 ≤ k ≤ ℓ, let ≺^⟨k⟩, c^⟨k⟩, f^⟨k⟩, (Ψ^⟨k⟩_γ)_{γ=1}^{c^⟨k⟩}, and (a^⟨k⟩_γ, b^⟨k⟩_γ)_{γ=1}^{c^⟨k⟩} be given. Also, for 1 ≤ γ ≤ c^⟨k⟩, let Υ^⟨k⟩_γ = {(a^⟨k⟩_γ, b^⟨k⟩_γ)} ∪ Ψ^⟨k⟩_γ. Define Y^⟨k⟩_γ = X^(M)[Υ^⟨k⟩_γ] and Z^⟨k⟩_γ = X^(M)[Ψ^⟨k⟩_γ] (note that Y^⟨k⟩_γ and Z^⟨k⟩_γ are functions of M). Then,

    cap(S) ≤ lim sup_{M→∞} Σ_{k=1}^ℓ (ρ^⟨k⟩ / c^⟨k⟩) Σ_{γ=1}^{c^⟨k⟩} H(Y^⟨k⟩_γ | Z^⟨k⟩_γ) .

Corollary 3 is the most general way we have found to state our results.
This generality will indeed help us later on. However, almost none of the intuition is lost if the reader has in mind the much simpler case of ℓ = 1, ρ^⟨1⟩ = 1, c^⟨1⟩ = 1, ≺^⟨1⟩ = ≺_lex,

    (a^⟨1⟩₁, b^⟨1⟩₁) = (r−1, t) , and Ψ^⟨1⟩₁ = Λ ∩ T^(≺_lex)_{(a^⟨1⟩₁, b^⟨1⟩₁)} ,    (8)

where 0 ≤ t < s. This simpler case was demonstrated in Running Example I.

IV. LINEAR REQUIREMENTS

Recall that X^(M) = W^(M)[Λ] is an r × s sub-configuration of W^(M), and thus stationary as well. In this section, we formulate a set of linear requirements (equalities and inequalities) on the probability distribution of X^(M). For the rest of this section, let M be fixed and let X be shorthand for X^(M).

A. Linear requirements from stationarity

In this subsection, we formulate a set of linear requirements that follow from the stationarity of X^(M). Let x ∈ S[Λ] be a realization of X. Denote p_x = Prob(X = x). We start with the trivial requirements. Obviously, we must have for all x ∈ S[Λ] that p_x ≥ 0. Also,

    Σ_{x∈S[Λ]} p_x = 1 .

Next, we show how we can use stationarity to get more linear equations on (p_x)_{x∈S[Λ]}. Let

    Λ′ = {(i,j) : 0 ≤ i < r−1, 0 ≤ j < s} .

For x′ ∈ S[Λ′] we must have by stationarity that

    Prob(X[Λ′] = x′) = Prob(X[σ_{1,0}(Λ′)] = σ_{1,0}(x′)) .    (9)

As a concrete example, suppose that r = s = 3. We claim that

    Prob( X = [1 0 0 ; 0 0 1 ; ∗ ∗ ∗] ) = Prob( X = [∗ ∗ ∗ ; 1 0 0 ; 0 0 1] ) ,

where ∗ denotes "don't care" and the rows of X are listed between semicolons. Both the left-hand and right-hand sides of (9) are marginalizations of (p_x)_x. Thus, we get a set of linear equations on (p_x)_x; namely, for all x′ ∈ S[Λ′],

    Σ_{x : x[Λ′]=x′} p_x = Σ_{x : x[σ_{1,0}(Λ′)]=σ_{1,0}(x′)} p_x .

To get more equations, we now apply the same rationale horizontally, instead of vertically. Let

    Λ″ = {(i,j) : 0 ≤ i < r, 0 ≤ j < s−1} .
Then, for all x″ ∈ S[Λ″],

    Σ_{x : x[Λ″]=x″} p_x = Σ_{x : x[σ_{0,1}(Λ″)]=σ_{0,1}(x″)} p_x .

B. Linear equations from reflection, transposition, and complementation

We now show that if S is reflection, transposition, or complementation invariant (defined below), then we can derive yet more linear equations.

Define v_M(·) (h_M(·)) as the vertical (horizontal) reflection of a rectangular configuration with M rows (columns). Namely,

    (v_M(w))_{i,j} = w_{M−1−i,j} , and (h_M(w))_{i,j} = w_{i,M−1−j} .

Define τ as the transposition of a configuration. Namely, τ(w)_{i,j} = w_{j,i}. For Σ = {0,1}, denote by comp(w) the bitwise complement of a configuration w. Namely, comp(w)_{i,j} = 1 if w_{i,j} = 0, and 0 otherwise.

We state three similar lemmas, and prove the first. The proofs of the other two are similar.

Lemma 4: Suppose that S is such that for all M > 0 and w ∈ Σ^{M×M},

    w ∈ S ⟺ h_M(w) ∈ S ⟺ v_M(w) ∈ S .

Then, w.l.o.g., the probability distribution of W is such that for all w ∈ S_M,

    Prob(W = w) = Prob(W = h_M(w)) = Prob(W = v_M(w)) .    (10)

Lemma 5: Suppose that S is such that for all M > 0 and w ∈ Σ^{M×M}, w ∈ S ⟺ τ(w) ∈ S. Then, w.l.o.g., W is such that for all w ∈ S_M,

    Prob(W = w) = Prob(W = τ(w)) .    (11)

Lemma 6: Suppose that Σ = {0,1} and S is such that for all M > 0 and w ∈ Σ^{M×M}, w ∈ S ⟺ comp(w) ∈ S. Then, w.l.o.g., W is such that for all w ∈ S_M,

    Prob(W = w) = Prob(W = comp(w)) .    (12)

Proof of Lemma 4: Let h and v be shorthand for h_M and v_M, respectively. For M fixed, we define a new random variable W_new taking values on S_M, with the following distribution: for all w ∈ S_M,

    Prob(W_new = w) = (1/4) Σ_{w′ ∈ {w, h(w), v(w), h(v(w))}} Prob(W = w′) .

Since h(h(w)) = v(v(w)) = w and h(v(w)) = v(h(w)), we get that (10) holds for W_new.
Moreover, by the concavity of the entropy function, H(W) ≤ H(W_new). Thus, the properties defined in Theorem 1 hold for W_new.

If the condition of Lemma 4 holds, then we get the following equations by stationarity. For all x ∈ S[Λ],

    p_x = p_{v_r(x)} = p_{h_s(x)} .

If the condition of Lemma 5 holds, then the following holds by stationarity. Assume w.l.o.g. that r ≤ s, and let Λ̃ = {(i,j) : 0 ≤ i,j < r}. For all χ ∈ S[Λ̃],

    Σ_{x : x[Λ̃]=χ} p_x = Σ_{x : x[Λ̃]=τ(χ)} p_x .

If the condition of Lemma 6 holds, then we get the following equations by stationarity. For all x ∈ S[Λ],

    p_x = p_{comp(x)} .

V. AN UPPER BOUND ON cap(S)

For the rest of this section, let r, s, ℓ, ρ^⟨k⟩, ≺^⟨k⟩, c^⟨k⟩, f^⟨k⟩, Ψ^⟨k⟩_γ, and (a^⟨k⟩_γ, b^⟨k⟩_γ) be given as in Corollary 3. Recall from Corollary 3 that we are interested in H(Y^⟨k⟩_γ | Z^⟨k⟩_γ), in order to bound cap(S) from above. As a first step, we fix M and express H(Y^⟨k⟩_γ | Z^⟨k⟩_γ) in terms of the probabilities (p_x)_x of the random variable X^(M).

For given 1 ≤ k ≤ ℓ and 1 ≤ γ ≤ c^⟨k⟩, let y ∈ S[Υ^⟨k⟩_γ] and z ∈ S[Ψ^⟨k⟩_γ] be realizations of Y^⟨k⟩_γ and Z^⟨k⟩_γ, respectively. Let p^⟨k⟩_{γ,y} = Prob(Y^⟨k⟩_γ = y) and p^⟨k⟩_{γ,z} = Prob(Z^⟨k⟩_γ = z) (both are functions of M). From here onward, let p_y and p_z be shorthand for p^⟨k⟩_{γ,y} and p^⟨k⟩_{γ,z}, respectively. Both p_y and p_z are marginalizations of (p_x)_x, namely,

    p_y = Σ_{x∈S[Λ] : x[Υ^⟨k⟩_γ]=y} p_x ,    p_z = Σ_{x∈S[Λ] : x[Ψ^⟨k⟩_γ]=z} p_x .

Thus, for given γ and k,

    H(Y^⟨k⟩_γ | Z^⟨k⟩_γ) = − Σ_{y∈S[Υ^⟨k⟩_γ]} p_y log₂ p_y + Σ_{z∈S[Ψ^⟨k⟩_γ]} p_z log₂ p_z

is a function of the probabilities (p_x)_x of X^(M).
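To make the marginalization step concrete, here is a small sketch of our own (with a toy flat indexing of patch cells, not the paper's notation): given a joint distribution over patch configurations, compute p_y and p_z by summing, and then H(Y|Z) by the displayed formula.

```python
import math

# Our own toy encoding: a patch configuration x is a tuple of cell values,
# and an index set is a tuple of cell positions to keep.
def marginal(p, keep):
    out = {}
    for x, q in p.items():
        key = tuple(x[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

def cond_entropy(p, upsilon, psi):
    # H(Y|Z) = -sum_y p_y log2 p_y + sum_z p_z log2 p_z, as displayed above.
    py = marginal(p, upsilon)
    pz = marginal(p, psi)
    return (-sum(q * math.log2(q) for q in py.values() if q > 0)
            + sum(q * math.log2(q) for q in pz.values() if q > 0))

# Toy patch with two cells, indexed 0 and 1; Psi = {0}, Upsilon = {0, 1}.
p = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
h = cond_entropy(p, (0, 1), (0,))
assert abs(h - 1.0) < 1e-12  # second cell is uniform given the first
```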
Our next step is to reason as follows: we have found linear requirements that the p_x's satisfy, and expressed H(Y^⟨k⟩_γ | Z^⟨k⟩_γ) as a function of (p_x)_x. However, we do not know of a way to actually calculate (p_x)_x. So, instead of the probabilities (p_x)_x, consider the variables (p̄_x)_x. From this line of thought we get our main theorem.

Theorem 7: The value of the optimization program given in Figure 3 is an upper bound on cap(S).

Proof: First, notice that if we take p̄_x = p_x, then (by Section IV) all the requirements which the p̄_x's are subject to indeed hold, and the objective function is equal to

    Σ_{k=1}^ℓ (ρ^⟨k⟩ / c^⟨k⟩) Σ_{γ=1}^{c^⟨k⟩} H(Y^⟨k⟩_γ | Z^⟨k⟩_γ) .

maximize Σ_{k=1}^ℓ (ρ^⟨k⟩ / c^⟨k⟩) Σ_{γ=1}^{c^⟨k⟩} Ξ(k,γ)

over the variables (p̄_x)_{x∈S[Λ]}, where for 1 ≤ k ≤ ℓ, 1 ≤ γ ≤ c^⟨k⟩, y ∈ S[Υ^⟨k⟩_γ], z ∈ S[Ψ^⟨k⟩_γ], we define

    p̄^⟨k⟩_{γ,y} ≜ Σ_{x∈S[Λ] : x[Υ^⟨k⟩_γ]=y} p̄_x ,
    p̄^⟨k⟩_{γ,z} ≜ Σ_{x∈S[Λ] : x[Ψ^⟨k⟩_γ]=z} p̄_x ,
    Ξ(k,γ) ≜ − Σ_{y∈S[Υ^⟨k⟩_γ]} p̄^⟨k⟩_{γ,y} log₂ p̄^⟨k⟩_{γ,y} + Σ_{z∈S[Ψ^⟨k⟩_γ]} p̄^⟨k⟩_{γ,z} log₂ p̄^⟨k⟩_{γ,z} ,

and the variables p̄_x are subject to the following requirements:

(i) Σ_{x∈S[Λ]} p̄_x = 1.
(ii) For all x ∈ S[Λ], p̄_x ≥ 0.
(iii) For all x′ ∈ S[Λ′], Σ_{x : x[Λ′]=x′} p̄_x = Σ_{x : x[σ_{1,0}(Λ′)]=σ_{1,0}(x′)} p̄_x.
(iv) For all x″ ∈ S[Λ″], Σ_{x : x[Λ″]=x″} p̄_x = Σ_{x : x[σ_{0,1}(Λ″)]=σ_{0,1}(x″)} p̄_x.
(v) (If S is reflection (resp. complementation) invariant) For all x ∈ S[Λ], p̄_x = p̄_{v_r(x)} = p̄_{h_s(x)} (resp. p̄_x = p̄_{comp(x)}).
(vi) (If S is transposition invariant) For all χ ∈ S[Λ̃], Σ_{x : x[Λ̃]=χ} p̄_x = Σ_{x : x[Λ̃]=τ(χ)} p̄_x.

Fig. 3. Optimization program over the variables p̄_x (assuming w.l.o.g. that r ≤ s). The optimum is an upper bound on cap(S).
So, the maximum of the program is an upper bound on the above expression. Next, by compactness, a maximum indeed exists. Since the maximum is not a function of M, the claim now follows from Corollary 3.

We now proceed to show that the optimization problem in Figure 3 is an instance of concave programming [10, p. 137], and thus easily calculated. Since the requirements that the variables (p̄_x)_x are subject to are linear, this reduces to showing that the objective function is concave in (p̄_x)_x.

Lemma 8: The objective function in Figure 3 is concave in the variables (p̄_x)_{x∈S[Λ]}, subject to them being non-negative.

Proof: Recall that for all 1 ≤ k ≤ ℓ we have that ρ^⟨k⟩/c^⟨k⟩ is non-negative. Thus, it suffices to prove that for all 1 ≤ k ≤ ℓ and 1 ≤ γ ≤ c^⟨k⟩, the function Ξ(k,γ) is concave in the variables (p̄_x)_x. So, let k and γ be fixed, and let p̄_y and p̄_z be shorthand for p̄^⟨k⟩_{γ,y} and p̄^⟨k⟩_{γ,z}, respectively. Recalling the definition of p̄^⟨k⟩_{γ,y} and p̄^⟨k⟩_{γ,z} in Figure 3 and the fact that Ψ^⟨k⟩_γ ⊆ Υ^⟨k⟩_γ, we get that

    Ξ(k,γ) = Σ_{y∈S[Υ^⟨k⟩_γ], z=y[Ψ^⟨k⟩_γ]} − p̄_y log₂ (p̄_y / p̄_z) .

Thus, it suffices to show that each summand is concave in (p̄_x)_x. This is indeed the case: let (p̄^(1)_x)_{x∈S[Λ]} and (p̄^(2)_x)_{x∈S[Λ]} be non-negative. Let 0 ≤ ξ ≤ 1 be given, and define (p̄^(3)_x)_{x∈S[Λ]} as

    p̄^(3)_x = ξ p̄^(1)_x + (1−ξ) p̄^(2)_x ,    x ∈ S[Λ] .

For t = 1, 2, 3, denote by p̄^(t)_y and p̄^(t)_z the marginalizations corresponding to (p̄^(t)_x)_x. Obviously,

    p̄^(3)_y = ξ p̄^(1)_y + (1−ξ) p̄^(2)_y ,    y ∈ S[Υ^⟨k⟩_γ] ,

and

    p̄^(3)_z = ξ p̄^(1)_z + (1−ξ) p̄^(2)_z ,    z ∈ S[Ψ^⟨k⟩_γ] .

We must show that for all y ∈ S[Υ^⟨k⟩_γ] and z = y[Ψ^⟨k⟩_γ],

    p̄^(3)_y log₂ (p̄^(3)_y / p̄^(3)_z) ≤ ξ p̄^(1)_y log₂ (p̄^(1)_y / p̄^(1)_z) + (1−ξ) p̄^(2)_y log₂ (p̄^(2)_y / p̄^(2)_z) .
This is indeed the case, by the log sum inequality [9, p. 29].

VI. COMPUTATIONAL RESULTS

At this point, we have formulated a concave optimization problem, and wish to solve it. There are quite a few programs, termed solvers, that enable one to do so. Many such solvers, most of them proprietary, are hosted on the servers of the NEOS project [11][12][13], and the public may submit moderately sized optimization problems to them. We have coded our optimization problems in the AMPL modeling language [14], and submitted them to NEOS.

Essentially, a solver starts with some initial guess as to the optimizing value of (p̄_x)_{x∈S[Λ]}, and then iteratively improves the value of the objective function. This process is terminated when the solver decides that it is "close enough" to the optimum. Denote by p̃ = (p̃_x)_{x∈S[Λ]} this "close enough" assignment to the variables. Of course, we must supply an upper bound on cap(S), not an approximation to one. Thus, let f̃ and g̃ = (g̃_x)_x, x ∈ S[Λ], be the value of the objective function and its gradient at p̃, respectively. Obviously, f̃ is a lower bound on the value of our optimization problem. For an upper bound², we replace the objective function in Figure 3 by

    maximize ( f̃ + Σ_{x∈S[Λ]} g̃_x · (p̄_x − p̃_x) ) ,

and get a linear program (the value of which can be calculated exactly). By concavity, the value of this linear program is indeed an upper bound. So, we use NEOS yet again to solve it.

² We remark in passing that if we had chosen to optimize the dual problem [10, p. 215], then the "dual of" f̃ would already have been an upper bound. However, we have not managed to state the dual problem in closed form.

Fig. 4. An entry labeled i in the configuration precedes an entry labeled j according to ≺_skip iff i < j.
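The linearization step can be illustrated on a toy concave program of our own (the paper's feasible set is the polytope of Figure 3; here it is replaced by a probability simplex so that the linear program has a closed-form solution): since f(p̄) ≤ f̃ + Σ g̃_x (p̄_x − p̃_x) for concave f, maximizing this linear majorant over the feasible set yields a certified upper bound.

```python
import numpy as np

# Toy illustration (ours) of the certified-upper-bound step. The concave
# objective f is the entropy of a distribution over n = 4 outcomes; its
# true maximum over the simplex is log2(4) = 2 bits.
def entropy(p):
    return float(-np.sum(p * np.log2(p)))

p_tilde = np.array([0.4, 0.3, 0.2, 0.1])     # solver's "close enough" point
f_tilde = entropy(p_tilde)                   # lower bound on the optimum
grad = -(np.log2(p_tilde) + 1 / np.log(2))   # gradient of entropy at p_tilde

# Maximize f_tilde + grad . (p - p_tilde) over the simplex: a linear
# program whose optimum is attained at a vertex, i.e., at max(grad).
upper = f_tilde + np.max(grad) - grad @ p_tilde

# The bound must dominate the true optimum of 2 bits.
assert f_tilde <= 2.0 <= upper
```

On the larger feasible sets of Figure 3 the same majorant is maximized by a general-purpose LP solver rather than by inspecting vertices.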
For the sake of double-checking, we submitted the above optimization problems to two solvers: IPOPT [15] and MOSEK.

Before stating our computational results, let us first define one more strict total order, which we have termed the "skip" order, ≺_skip (see Figure 4). We have that (i₁,j₁) ≺_skip (i₂,j₂) iff
• i₁ < i₂, or
• (i₁ = i₂ and j₁ ≡ 0 (mod 2) and j₂ ≡ 1 (mod 2)), or
• (i₁ = i₂ and j₁ ≡ j₂ (mod 2) and j₁ < j₂).

Our computational results appear in Table I. To the best of our knowledge, they are presently the tightest. The penultimate column contains upper bounds obtained by the method described in [6]. When available, these compared-to bounds are taken from previously published work, as indicated to the right of them. The rest are the result of our implementation of [6]. For reference, the last column contains corresponding lower bounds. We note that the indexes (a^⟨k⟩_γ, b^⟨k⟩_γ) and coefficients ρ^⟨k⟩ used for each constraint were optimized by hand, through trial and error. Also, we note that when applying our method to the 2-D (1,∞)-RLL constraint, our bound was inferior to the one presented in [2] (utilizing the method of [7]).

VII. ASYMPTOTIC ANALYSIS

For a given constraint S and positive integers r and s, let t be an integer such that 0 ≤ t < s. Denote by μ(r,s,t) the value of the optimization program in Figure 3, where the parameters are as in (8). In this section, we show that even if we restrict ourselves to this simple case, we get an upper bound which is asymptotically tight, in the following sense.

Theorem 9: For all ε > 0, there exist r₀ > 0, s₀ > 0, and 0 ≤ t₀ < s₀ such that for all r ≥ r₀, s ≥ s₀, and t₀ ≤ t ≤ s − (s₀ − t₀), we have that

    μ(r,s,t) − cap(S) ≤ ε .

In order to prove Theorem 9, we need the following lemma.
Lemma 10: For all ε > 0, there exist r₀ > 0, s₀ > 0, and 0 ≤ t₀ < s₀ such that

    μ(r₀,s₀,t₀) − cap(S) < ε .

Proof: Another well-known method for bounding cap(S) from above is the so-called "stripe method," mentioned in the introduction. Namely, for some given θ, consider the 1-D constraint S(θ) defined as follows. The alphabet of S(θ) is Σ^θ. A word of length r′ is in S(θ) iff, when we write its entries as rows of length θ, one below the other, we get an r′ × θ configuration which satisfies the 2-D constraint S. Define the normalized capacity of S(θ) as

    ĉap(S(θ)) = (1/θ) · cap(S(θ)) .

By the definition of cap(S), the normalized capacity approaches cap(S) as θ → ∞. Thus, fix θ such that ĉap(S(θ)) − cap(S) ≤ ε/2.

We say that a 1-D constraint has memory m if there exists a graph representing it in which all paths of length m with the same labeling terminate in the same vertex. By [1, Theorem 3.17] and its proof, there exists a series of 1-D constraints {S_m}_{m=1}^∞ such that S(θ) ⊆ S_m, the memory of S_m is m, and lim_{m→∞} cap(S_m) = cap(S(θ)). Thus, fix m such that ĉap(S_m) − ĉap(S(θ)) ≤ ε/2.

To finish the proof, we now show that μ(r₀,s₀,t₀) ≤ ĉap(S_m), where r₀ = m+1, s₀ = 2θ, and t₀ = θ−1. Note that μ(r₀,s₀,t₀) is the maximum of

    H( X̄_{m,θ−1} | X̄[ T^(≺_lex)_{m,θ−1} ∩ B_{m+1,2θ} ] )    (13)

over all random variables X̄ taking values on S_{m+1,2θ} with a probability distribution satisfying our linear requirements. For all 0 ≤ φ < θ we get, by the (imposed) stationarity of X̄, that (13) is bounded from above by

    H_φ = H( X̄_{m,φ} | X̄[ T^(≺_lex)_{m,φ} ∩ B_{m+1,θ} ] ) .

So, (13) is also bounded from above by

    (1/θ) Σ_{φ=0}^{θ−1} H_φ .    (14)

The first θ columns of X̄ form a configuration with index set B_{m+1,θ}.
By our linear requirements, stationarity (specifically, vertical stationarity) holds for this configuration as well. So, we may define a stationary 1-D Markov chain [1, §3.2.3] on S_m, with entropy given by (14). That entropy, in turn, is at most ĉap(S_m).

TABLE I
UPPER BOUNDS ON THE CAPACITY OF SOME 2-D CONSTRAINTS

Constraint | r | s | ℓ | ≺ used | Upper bound | Comparison | Lower bound
(2,∞)-RLL | 3 | 8 | 7 | ≺_lex, ≺_skip | 0.4457 | 0.4459 [16] | 0.444202 [17]
(3,∞)-RLL | 4 | 8 | 5 | ≺_lex | 0.36821 | 0.3686 [16] | 0.365623 [18]
(0,2)-RLL | 3 | 5 | 2 | ≺_lex | 0.816731 | 0.817053 | 0.816007 [18]
n.i.b. | 3 | 4 | 1 | ≺_skip | 0.92472 | 0.927855 | 0.922640 [17]

Proof of Theorem 9: The following inequalities are easily verified:

    μ(r,s,t) ≥ μ(r+1,s,t) ,
    μ(r,s,t) ≥ μ(r,s+1,t) ,
    μ(r,s,t) ≥ μ(r,s+1,t+1) .

The proof follows from them and Lemma 10.

APPENDIX

Our goal in this appendix is to prove Theorem 1. Essentially, Theorem 1 will turn out to be a corollary of [8, Theorem 1.4]. However, [8, Theorem 1.4] deals with configurations in which the index set is Z². So, some definitions and auxiliary lemmas are in order.

Recall that (G^row, G^col) is the pair of vertex-labeled graphs through which S = S(G^row, G^col) is defined. Also, recall that each member of S is a configuration with a rectangular index set. Namely, the index set of a configuration in S is σ_{i,j}(B_{M,N}), for some i, j, M, and N. We now give a very similar definition to that of S, only now we require that the index set of each configuration is Z².
Define Ŝ = Ŝ(G_row, G_col) as follows: a configuration (w_{i,j})_{(i,j)∈Z²} over Σ is in Ŝ(G_row, G_col) iff there exists a configuration (u_{i,j})_{(i,j)∈Z²} over the vertex set V with the following properties for all (i, j) ∈ Z²: (a) the labeling of u_{i,j} satisfies L(u_{i,j}) = w_{i,j}; (b) there exists an edge from u_{i,j} to u_{i,j+1} in G_row; (c) there exists an edge from u_{i,j} to u_{i+1,j} in G_col.

For positive integers M, N > 0, define Ŝ_{M,N} as the restriction of Ŝ to B_{M,N}. Namely, Ŝ_{M,N} = Ŝ[B_{M,N}], where the definition of the restriction operation is as in (3). Also, for M equal to N, abbreviate Ŝ_M = Ŝ_{M,M}. Note that for all M, N > 0 we have

  Ŝ_{M,N} ⊆ S_{M,N} ,   (15)

and there are cases in which the inclusion is strict. Next, define the capacity of Ŝ as

  cap(Ŝ) = lim_{M→∞} (1/M²) · log₂ |Ŝ_M| .

The limit indeed exists, by sub-additivity (see [3, Appendix], and references therein).

For integers M, N > 0 and δ ≥ 0, denote C_{M,N,δ} = σ_{−δ,−δ}(B_{M+2δ,N+2δ}) and let S_{M,N,δ} = S[C_{M,N,δ}]. Note that the index set C_{M,N,δ} of each element of S_{M,N,δ} is simply B_{M,N}, padded with δ columns to the right and left and δ rows to the top and bottom. The following lemma will help us bridge the gap between finite and infinite index sets.

Lemma 11: Let w be a configuration over the finite alphabet Σ with index set B_{M,N}. If for all δ ≥ 0 we have that

  w ∈ S_{M,N,δ}[B_{M,N}] ,   (16)

then we must have that w ∈ Ŝ_{M,N}.

Proof: Define the following auxiliary directed graph. The vertex set is

  ∪_{δ≥0} { ŵ ∈ S_{M,N,δ} : ŵ[B_{M,N}] = w } .

For every δ ≥ 0, there is a directed edge from w_1 ∈ S_{M,N,δ} to w_2 ∈ S_{M,N,δ+1} iff w_1 = w_2[C_{M,N,δ}]. It is easily seen that this graph is a directed tree with root w, as defined in [19, § 2.4]. Since (16) holds for all δ ≥ 0, the vertex set of the tree is infinite (and countable).
On the other hand, since the alphabet size |Σ| is finite, the out-degree of each vertex is finite. Thus, by König's Infinity Lemma [19, Theorem 2.8], the tree must contain an infinite path starting from the root w. Denote the vertices of this infinite path as w = w^{[0]}, w^{[1]}, w^{[2]}, .... We now show how to find a configuration (w′_{i,j})_{(i,j)∈Z²} such that w′ ∈ Ŝ (the Z²-indexed variant of S defined above) and w = w′[B_{M,N}]. For each (i, j) ∈ Z², define w′_{i,j} as follows: let δ ≥ 0 be such that (i, j) ∈ C_{M,N,δ}, and take w′_{i,j} = w^{[δ]}_{i,j}. It is easily seen that w′ is well defined and contained in Ŝ.

The following lemma states that although the inclusion in (15) may be strict, the capacities of S and Ŝ are equal.

Lemma 12: Let S and Ŝ be as previously defined. Then,

  cap(S) = cap(Ŝ) .   (17)

Proof: By (15), we must have that cap(Ŝ) ≤ cap(S). For the other direction, it suffices to prove that for all M > 0,

  cap(S) ≤ (1/M²) · log₂ |Ŝ_M| .   (18)

So, let us fix M and prove the above. By Lemma 11 (and the finiteness of Σ^{M×M}), there exists δ ≥ 0 such that for all w ∈ Σ^{M×M},

  w ∉ Ŝ_M  ⟹  w ∉ S_{M,M,δ}[B_M] .

For t > 0, let M′ be shorthand for M′ = t · M. By the definition of capacity, we have that

  cap(S) = lim_{t→∞} (1/(M′)²) · log₂ |S_{M′}| .   (19)

Now, let us partition B_{M′} into the following disjoint subsets of indexes: for 0 ≤ i, j < t, define the set D_{i,j} = σ_{i·M, j·M}(B_M). Let w′ ∈ S_{M′}. Notice that for all 0 ≤ i, j < t for which

  σ_{i·M, j·M}(C_{M,M,δ}) ⊆ B_{M′} ,   (20)

we must have that w′[D_{i,j}] is equal to some correspondingly shifted element of Ŝ_M: indeed, the restriction of w′ to σ_{i·M, j·M}(C_{M,M,δ}) is a shifted element of S_{M,M,δ}, and δ was chosen accordingly. On the other hand, for M and δ fixed, the number of pairs (i, j) for which (20) does not hold is O(t). Thus, a simple calculation gives us that

  (1/(M′)²) · log₂ |S_{M′}| ≤ (1/M²) · log₂ |Ŝ_M| + O(1/t) .

This, together with (19), proves (18).
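As a numerical aside, quantities of the form (1/M²) · log₂ |S_M| appearing in the proof above are easy to compute for small M with a transfer-matrix recursion over rows. The sketch below is ours, not part of the paper; it uses the 2-D (1,∞)-RLL ("hard-square") constraint, an example not treated in Table I, for which every valid M × M square extends to all of Z² by padding with zeros, so the finite and Z²-extendable configuration sets coincide.

```python
import itertools
import math

def normalized_log_count(M):
    """Compute (1/M^2) * log2 |S_M| for the 2-D (1,infinity)-RLL
    ("hard square") constraint: S_M is the set of M x M binary
    configurations with no two adjacent 1s, horizontally or
    vertically. Counted with a row-by-row transfer recursion."""
    # Admissible rows: no two horizontally adjacent 1s.
    rows = [r for r in itertools.product((0, 1), repeat=M)
            if all(a * b == 0 for a, b in zip(r, r[1:]))]

    def ok(u, v):
        # Row v may be placed directly below row u iff no two
        # vertically adjacent 1s.
        return all(a * b == 0 for a, b in zip(u, v))

    counts = {r: 1 for r in rows}           # configurations with 1 row
    for _ in range(M - 1):                  # extend row by row
        counts = {v: sum(c for u, c in counts.items() if ok(u, v))
                  for v in rows}
    total = sum(counts.values())            # |S_M|
    return math.log2(total) / (M * M)
```

For M = 1, 2, 3, 4 the counts |S_M| are 2, 7, 63, and 1234, so the normalized values decrease from 1.0 through roughly 0.7018, 0.6642, and 0.6418 toward the hard-square capacity ≈ 0.587891; each value is itself an upper bound on the capacity, by sub-additivity.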
For a given M > 0, define the set F(M) of configurations with index set Z² as follows: a configuration (w_{i,j})_{(i,j)∈Z²} is in F(M) iff for all (i, j) ∈ Z², the restriction w[σ_{i,j}(B_M)] is a correspondingly shifted element of Ŝ_M (where Ŝ is the Z²-indexed variant of S defined at the start of this appendix). Namely, each M × M "patch" of w is a correspondingly shifted element of Ŝ_M.

Note that there exist vertex-labeled graphs G_row(M) and G_col(M) such that F(M) = Ŝ(G_row(M), G_col(M)). Specifically, the vertex set of both graphs is equal to Ŝ_M; the label of each such vertex is its lower-left entry; and there is an edge from w_1 ∈ Ŝ_M to w_2 ∈ Ŝ_M in G_row(M) (respectively, G_col(M)) iff the last M − 1 columns (respectively, rows) of w_1 are equal to the first M − 1 columns (respectively, rows) of w_2. Thus, cap(F(M)) exists. Also, since w ∈ Ŝ implies w ∈ F(M), we have

  cap(Ŝ) ≤ cap(F(M)) .   (21)

The following is a direct corollary of [8, Theorem 1.4].

Corollary 13: For all M > 0, there exists a stationary random variable W^{(M)} taking values on F(M)[B_M] such that

  cap(F(M)) ≤ (1/M²) · H(W^{(M)}) .   (22)

Proof of Theorem 1: Notice that F(M)[B_M] = Ŝ_M ⊆ S_M. Thus, take W^{(M)} as in Corollary 13 and notice that it satisfies conditions (i) and (ii) in Theorem 1. From (17), (21), and (22) we get that

  cap(S) ≤ lim_{M→∞} (1/M²) · H(W^{(M)}) .

But since W^{(M)} takes values on S_M, we have by [9, Page 19] that the above inequality is in fact an equality. Thus, condition (iii) is proved.

REFERENCES

[1] B. H. Marcus, R. M. Roth, and P. H. Siegel, "Constrained systems and coding for recording channels," in Handbook of Coding Theory, V. Pless and W. Huffman, Eds. Amsterdam: Elsevier, 1998, pp. 1635–1764.
[2] W. Weeks and R. E. Blahut, "The capacity and coding gain of certain checkerboard codes," IEEE Trans. Inform. Theory, vol. 44, pp. 1193–1203, 1998.
[3] A. Kato and K. Zeger, "On the capacity of two-dimensional run-length constrained channels," IEEE Trans. Inform. Theory, vol. 45, pp. 1527–1540, 1999.
[4] T. Etzion, "Cascading methods for runlength-limited arrays," IEEE Trans. Inform. Theory, vol. 43, pp. 319–324, 1997.
[5] S. Halevy, J. Chen, R. M. Roth, P. H. Siegel, and J. K. Wolf, "Improved bit-stuffing bounds on two-dimensional constraints," IEEE Trans. Inform. Theory, vol. 50, pp. 824–838, 2004.
[6] S. Forchhammer and J. Justesen, "Bounds on the capacity of constrained two-dimensional codes," IEEE Trans. Inform. Theory, vol. 46, pp. 2659–2666, 2000.
[7] N. Calkin and H. S. Wilf, "The number of independent sets in a grid graph," SIAM J. Discrete Math., vol. 11, pp. 54–60, 1997.
[8] R. Burton and J. E. Steif, "Non-uniqueness of measures of maximal entropy for subshifts of finite type," Ergod. Th. Dynam. Sys., vol. 14, pp. 213–235, 1994.
[9] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley, 1991.
[10] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, UK: Cambridge University Press, 2004.
[11] J. Czyzyk, M. P. Mesnier, and J. J. Moré, "The NEOS server," IEEE Computational Science & Engineering, vol. 5, no. 3, pp. 68–75, 1998.
[12] W. Gropp and J. J. Moré, "Optimization environments and the NEOS server," in Approximation Theory and Optimization. Cambridge University Press, 1997, pp. 167–182.
[13] E. D. Dolan, R. Fourer, J. J. Moré, and T. S. Munson, "The NEOS server for optimization: Version 4 and beyond," Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, Tech. Rep., 2002.
[14] R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL: A Modeling Language for Mathematical Programming, 2nd ed. Duxbury Press, 2002.
[15] A. Wächter and L. T. Biegler, "On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming," Math. Program., vol. 106, no. 1, pp. 25–57, 2006.
[16] S. Forchhammer and T. V. Laursen, "Entropy of bit-stuffing-induced measures for two-dimensional checkerboard constraints," IEEE Trans. Inform. Theory, vol. 53, pp. 1537–1546, 2007.
[17] I. Tal and R. M. Roth, "Bounds on the rate of 2-D bit-stuffing encoders," in Proc. IEEE Int'l Symp. Inform. Theory (ISIT'2008), Toronto, Ontario, Canada, 2008.
[18] A. Sharov and R. M. Roth, "Two-dimensional constrained coding based on tiling," in Proc. IEEE Int'l Symp. Inform. Theory (ISIT'2008), Toronto, Ontario, Canada, 2008, pp. 1468–1472.
[19] S. Even, Graph Algorithms. Computer Science Press, 1979.
