Many-Help-One Problem for Gaussian Sources with a Tree Structure on their Correlation


Authors: Yasutada Oohama

Abstract—In this paper we consider the separate coding problem for $L+1$ correlated Gaussian memoryless sources. We deal with the case where $L$ separately encoded data of sources work as side information at the decoder for the reconstruction of the remaining source. The determination problem of the rate distortion region for this system is the so-called many-help-one problem and has been known as a highly challenging problem. The author determined the rate distortion region in the case where the $L$ sources working as partial side information are conditionally independent given the remaining source we wish to reconstruct. This condition on the correlation is called the CI condition. In this paper we extend the author's previous result to the case where the $L+1$ sources satisfy a kind of tree structure on their correlation. We call this tree structure of information sources the TS condition; it contains the CI condition as a special case. In this paper we derive an explicit outer bound of the rate distortion region when the information sources satisfy the TS condition. We further derive an explicit sufficient condition for this outer bound to be tight. In particular, we determine the sum rate part of the rate distortion region for the case where the information sources satisfy the TS condition. For some class of Gaussian sources with the TS condition we derive an explicit recursive formula for this sum rate part.

Index Terms—Multiterminal source coding, many-help-one problem, Gaussian, rate-distortion region, CEO problem.

I. INTRODUCTION

In multi-user source networks, separate coding systems for correlated information sources are significant from both theoretical and practical points of view. The first fundamental result on those coding systems was obtained by Slepian and Wolf [1].
They considered a separate source coding system for two correlated information sources. Those two sources are separately encoded and sent to a single destination, where the decoder reconstructs the original sources. In the above source coding system, we can consider the situation where the decoder wishes to reproduce one of the two sources. We call this source the primary source. In this case the remaining source, which we call the auxiliary source, works as partial side information at the decoder for the reconstruction of the primary source. Wyner [2] and Ahlswede and Körner [3] determined the admissible rate region for this system, the set that consists of all pairs of transmission rates for which the primary source can be decoded with an arbitrarily small error probability. We can naturally extend the system studied by Wyner, Ahlswede and Körner to one where there are several separately encoded data of auxiliary sources serving as side information at the decoder. The determination of the admissible rate region for this system is called the many-help-one problem. In this sense Wyner, Ahlswede and Körner solved the so-called one-helps-one problem. The many-help-one problem has been known as a highly challenging problem. To date, partial solutions given by Körner and Marton [4], Gelfand and Pinsker [5], Oohama [8], [10], and Tavildar et al. [11] are known. Gelfand and Pinsker [5] studied an interesting case of the many-help-one problem. They determined the admissible rate region in the case where the auxiliary sources are conditionally independent given the primary source.

Manuscript received xxx, 20xx; revised xxx, 20xx. Y. Oohama is with the Department of Information Science and Intelligent Systems, University of Tokushima, 2-1 Minami Josanjima-Cho, Tokushima 770-8506, Japan.
We hereafter call the above correlation condition on the information sources the CI condition. In Oohama [8], the author extended the many-help-one problem studied by Gelfand and Pinsker [5] to a continuous case. He considered the many-help-one problem for $L+1$ correlated memoryless Gaussian sources, where $L$ auxiliary sources work as partial side information at the decoder for the reconstruction of the primary source. The mean square error was adopted as the distortion criterion between the decoded output and the original primary source output. The rate distortion region was defined as the set of all transmission rates for which the average distortion can be upper bounded by a prescribed level. In [8], the author determined the rate distortion region when the information sources satisfy the CI condition. This result contains the author's previous works on the Gaussian one-helps-one problem [6] and the Gaussian CEO problem [7]. The problem still remains open for Gaussian sources with general correlation. Pandya et al. [9] studied the general case and derived an outer bound of the rate distortion region using a variant of the bounding technique the author [6] used to prove the converse coding theorem for the Gaussian one-helps-one problem. However, their bounding method was not sufficient to provide a tight result. In Oohama [10], the author extended the result of [8]. He considered a case of correlation on Gaussian sources where the $L+1$ sources satisfy a kind of tree structure on their correlation. The author called this tree structure of information sources the TS condition. The TS condition contains the CI condition as a special case. In [10], the author derived an explicit outer bound of the rate distortion region for Gaussian sources satisfying the TS condition. Furthermore, he showed that for $L = 2$, this outer bound coincides with the rate distortion region.
The author also presented a sufficient condition for the outer bound to coincide with the rate distortion region. Subsequently, Tavildar et al. [11] extended the TS condition to a binary Gauss-Markov tree structure condition. They studied a characterization of the rate distortion region for Gaussian sources with the complete binary tree structure and succeeded in it. To derive their result, they made full use of the complete binary tree structure of the source. They further determined the rate distortion region for Gaussian sources with a general tree structure. In Oohama [10], the analysis of the matching condition between the rate distortion region and the derived outer bound was not sufficient, so the author could not realize that there exists a part of the rate distortion region where the outer bound derived by him coincides with the rate distortion region. In this paper we give a further analysis of the matching condition for the outer bound derived by Oohama [10] to coincide with the rate distortion region, and derive a condition much stronger than the matching condition in [10]. Through this analysis we obtain an insight into a way of examining the sum rate part of the rate distortion region, showing that for Gaussian sources with the TS condition the minimum sum rate part of the outer bound given by Oohama [10] is tight. This result implies that in Oohama [10], the author had already obtained an explicit characterization of the sum rate part of the rate distortion region before the work by Tavildar et al. [11]. For this optimal sum rate we derive an explicit recursive formula for some class of Gaussian sources with the TS condition. Our formula contains the result of Oohama [7] for the Gaussian CEO problem as a special case. The rest of this paper is organized as follows. In Section II, we present a problem formulation and state the previous works.
In Section III, we give our main result. We first derive an explicit outer bound of the rate distortion region when the information sources satisfy the TS condition. This outer bound is essentially the same as the author's previous outer bound in [10], but it has a form more suitable than the previous one for analysis of a matching condition. Using the derived outer bound, we present an explicit sufficient condition for the outer bound to coincide with the inner bound. In Section IV, we investigate the sum rate part of the rate distortion region. We show that for the outer bound in this paper and that in [10], their sum rate parts coincide with the sum rate part of the inner bound. Hence, in the case where the information sources satisfy the TS condition, we establish an explicit characterization of the sum rate part of the rate distortion region. This optimal sum rate has the form of an optimization problem. For some class of Gaussian sources with the TS condition, we solve this optimization problem to establish an explicit recursive formula for the optimal sum rate. In Section V, we give the proofs of the results. Finally, in Section VI, we conclude the paper.

II. PROBLEM STATEMENT AND PREVIOUS RESULTS

In this section we state the problem formulation and previous results. We first state some notations used throughout this paper. Let $\Phi = \{1, 2, \cdots, |\Phi|\}$ and let $\mathcal{A}_i$, $i \in \Phi$ be arbitrary sets. Consider a random variable $A_i$, $i \in \Phi$ taking values in $\mathcal{A}_i$. We write the $n$-fold direct product of $\mathcal{A}_i$ as $\mathcal{A}_i^n \triangleq \mathcal{A}_i \times \cdots \times \mathcal{A}_i$ ($n$ times).

[Fig. 1. Communication system with $L$ side informations at the decoder: each $X_i$, $i = 0, 1, \cdots, L$, is encoded by $\varphi_i$, and the decoder $\psi$ outputs $\hat{X}_0$ from $(\varphi_0(\boldsymbol{X}_0), \varphi_1(\boldsymbol{X}_1), \cdots, \varphi_L(\boldsymbol{X}_L))$.]
Let a random vector consisting of $n$ independent copies of the random variable $A_i$ be denoted by $\boldsymbol{A}_i = A_{i,1} A_{i,2} \cdots A_{i,n}$. We write an element of $\mathcal{A}_i^n$ as $a_i = a_{i,1} a_{i,2} \cdots a_{i,n}$. Let $S$ be an arbitrary subset of $\Phi$. Let $A_S$ and $\boldsymbol{A}_S$ denote the random vectors $(A_i)_{i \in S}$ and $(\boldsymbol{A}_i)_{i \in S}$, respectively. Similarly, let $a_S$ denote a vector $(a_i)_{i \in S}$. When $S = \{k, k+1, \cdots, l\}$, we also use the notation $A_k^l$ for $A_S$, and we use similar notations for other vectors or random variables. When $k = 1$, we sometimes omit the subscript 1. Throughout this paper all logarithms are taken to the natural base.

A. Formal Statement of the Problem

Let $X_i$, $i = 0, 1, 2, \cdots, L$ be correlated zero mean Gaussian random variables taking values in the real lines $\mathcal{X}_i$. Let $\Lambda = \{1, 2, \cdots, L\}$. The CI condition Oohama [8] treated corresponds to the case where $X_1, X_2, \cdots, X_L$ are independent given $X_0$. In this paper we deal with the case where $X_1, \cdots, X_L$ have some correlation when $X_0$ is given. Let $\{(X_{0,t}, X_{1,t}, \cdots, X_{L,t})\}_{t=1}^{\infty}$ be a stationary memoryless multiple Gaussian source. For each $t = 1, 2, \cdots$, $(X_{0,t}, X_{1,t}, \cdots, X_{L,t})$ obeys the same distribution as $(X_0, X_1, \cdots, X_L)$. The multiterminal source coding system treated in this paper is depicted in Fig. 1. For each $i = 0, 1, \cdots, L$, the data sequence $\boldsymbol{X}_i$ is separately encoded to $\varphi_i(\boldsymbol{X}_i)$ by the encoder function $\varphi_i$. The encoded data $\varphi_i(\boldsymbol{X}_i)$, $i = 0, 1, \cdots, L$ are sent to the information processing center, where the decoder observes them and outputs the estimation $\hat{\boldsymbol{X}}_0$ of $\boldsymbol{X}_0$ by using the decoder function $\psi$. The encoder functions $\varphi_i$, $i = 0, 1, \cdots, L$ are defined by

  $\varphi_i : \mathcal{X}_i^n \to \mathcal{M}_i = \{1, 2, \cdots, M_i\}$   (1)

and satisfy the rate constraints

  $\frac{1}{n} \log M_i \le R_i + \delta$,   (2)

where $\delta$ is an arbitrary prescribed positive number.
The decoder function $\psi$ is defined by

  $\psi : \mathcal{M}_0 \times \mathcal{M}_1 \times \cdots \times \mathcal{M}_L \to \mathcal{X}_0^n$.   (3)

Denote by $\mathcal{F}^{(n)}_\delta(R_0, R_1, \cdots, R_L)$ the set that consists of all $(L+2)$-tuples of encoder and decoder functions $(\varphi_0, \varphi_1, \cdots, \varphi_L, \psi)$ satisfying (1)-(3). Let $d(x, \hat{x}) = (x - \hat{x})^2$, $(x, \hat{x}) \in \mathcal{X}_0^2$ be a squared distortion measure. For $\boldsymbol{X}_0$ and its estimation $\hat{\boldsymbol{X}}_0 = \psi(\varphi_0(\boldsymbol{X}_0), \varphi_1(\boldsymbol{X}_1), \cdots, \varphi_L(\boldsymbol{X}_L))$, define the average distortion by

  $\Delta(\boldsymbol{X}_0, \hat{\boldsymbol{X}}_0) \triangleq \frac{1}{n} \sum_{t=1}^n \mathrm{E}\, d(X_{0,t}, \hat{X}_{0,t})$.

For a given $D > 0$, the rate vector $(R_0, R_1, \cdots, R_L)$ is admissible if for any positive $\delta > 0$ and any $n$ with $n \ge n_0(\delta)$, there exists $(\varphi_0, \varphi_1, \cdots, \varphi_L, \psi) \in \mathcal{F}^{(n)}_\delta(R_0, R_1, \cdots, R_L)$ such that $\Delta(\boldsymbol{X}_0, \hat{\boldsymbol{X}}_0) \le D + \delta$. Let $\mathcal{R}_L(D)$ denote the set of all admissible rate vectors. Our aim is to characterize $\mathcal{R}_L(D)$ in an explicit form. Concerning the form of $\mathcal{R}_L(D)$, we have a particular interest in its sum rate part. To examine this quantity, define

  $R_{\mathrm{sum},L}(D, R_0) \triangleq \min_{(R_0, R_1, \cdots, R_L) \in \mathcal{R}_L(D)} \sum_{i=1}^L R_i$.

To determine $R_{\mathrm{sum},L}(D, R_0)$ in an explicit form is also of interest to us. By the rate-distortion theory for a single Gaussian source, when $R_0 \ge \frac{1}{2}\log^+ [\sigma^2_{X_0}/D]$, the choice $R_1 = R_2 = \cdots = R_L = 0$ is admissible. Here $\log^+ a = \max\{\log a, 0\}$. Hence, we have

  $\mathcal{R}_L(D) \cap \left\{R_0 \ge \frac{1}{2}\log^+\left[\frac{\sigma^2_{X_0}}{D}\right]\right\} = \left\{(R_0, R_1, \cdots, R_L) : R_0 \ge \frac{1}{2}\log^+\left[\frac{\sigma^2_{X_0}}{D}\right],\ R_i \ge 0,\ i \in \Lambda\right\}$.

Throughout this paper we assume that $D \le \sigma^2_{X_0}$ and $R_0 < \frac{1}{2}\log[\sigma^2_{X_0}/D]$.

B. Tree Structure of Gaussian Sources

In this subsection we explain the tree structure of Gaussian sources, which is an important class of correlation.
Consider the case where the $L+1$ random variables $X_0, X_1, \cdots, X_L$ satisfy the following correlations:

  $Y_0 = X_0$,
  $Y_l = Y_{l-1} + Z_l$, $1 \le l \le L$,
  $X_l = Y_l + N_l$, $1 \le l \le L-1$,
  $X_L = Y_L$, $N_L = Z_L$,   (4)

where $Z_i$, $i \in \Lambda$ are $L$ independent Gaussian random variables with mean 0 and variance $\sigma^2_{Z_i}$, and $N_i$, $i = 1, 2, \cdots, L-1$ are $L-1$ independent Gaussian random variables with mean 0 and variance $\sigma^2_{N_i}$. We assume that $Z^L$ is independent of $X_0$ and that $N^{L-1}$ is independent of $X_0$ and $Z^L$. We can see that the above $(X_0, X_1, \cdots, X_L)$ has a kind of tree structure (TS). We say that the source $(X_0, X_1, \cdots, X_L)$ satisfies the TS condition when it satisfies (4). The TS condition contains the CI condition as a special case, obtained by letting $\sigma_{Z_i}$, $i = 1, 2, \cdots, L-1$ be zero. Let $S$ be an arbitrary subset of $\Lambda$. The TS condition is equivalent to the condition that for every $S \subseteq \Lambda$, the random variables $X_S$, $(X_0, Z^{L-1})$, $X_{S^c}$ form the Markov chain $X_S \to (X_0, Z^{L-1}) \to X_{S^c}$ in this order. The TS and CI conditions in the case of $L = 4$ are shown in Figs. 2 and 3, respectively.

[Fig. 2. TS condition in the case of $L = 4$.]

[Fig. 3. CI condition in the case of $L = 4$.]

C. Previous Results

In this subsection we state the previous results on the determination problem of $\mathcal{R}_L(D)$. Let $U_i$, $i = 0, 1, \cdots, L$ be random variables taking values in the real lines $\mathcal{U}_i$.
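As a concrete illustration of the TS model (4), the following sketch draws i.i.d. samples from a source satisfying it. The function name and the variances in the example call are ours, chosen only for illustration; they are not taken from the paper.

```python
import random
from statistics import fmean

def sample_ts_source(n, s2_X0, s2_Z, s2_N, seed=None):
    """Draw n i.i.d. copies of (X_0, X_1, ..., X_L) from the TS model (4).

    s2_Z = [sigma2_{Z_1}, ..., sigma2_{Z_L}]; s2_N = [sigma2_{N_1}, ..., sigma2_{N_{L-1}}]
    (no separate variance is needed for N_L, since N_L = Z_L).
    Returns a list of n tuples (x_0, x_1, ..., x_L)."""
    rng = random.Random(seed)
    L = len(s2_Z)
    out = []
    for _ in range(n):
        y = rng.gauss(0.0, s2_X0 ** 0.5)                      # Y_0 = X_0
        row = [y]
        for l in range(1, L + 1):
            y += rng.gauss(0.0, s2_Z[l - 1] ** 0.5)           # Y_l = Y_{l-1} + Z_l
            if l < L:
                row.append(y + rng.gauss(0.0, s2_N[l - 1] ** 0.5))  # X_l = Y_l + N_l
            else:
                row.append(y)                                 # X_L = Y_L
        out.append(tuple(row))
    return out

# Hypothetical variances for L = 4; e.g. Var(X_1) = s2_X0 + s2_Z[0] + s2_N[0].
data = sample_ts_source(50000, 1.0, [0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7], seed=1)
var_x1 = fmean(x[1] ** 2 for x in data)   # means are 0, so E[X_1^2] estimates Var(X_1)
```

The empirical second moments recover the variances implied by (4), e.g. `var_x1` should be close to $1.0 + 0.1 + 0.5 = 1.6$ for the example call above.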
For $S \subseteq \Lambda$, define

  $\mathcal{G}(D) \triangleq \{(U_0, U^L) : (U_0, U^L)$ is a Gaussian random vector that satisfies $U^L \to X^L \to X_0 \to U_0$ and $U_S \to X_{S \cup \{0\}} \to X_{S^c} \to U_{S^c}$ for any $S \subseteq \Lambda$, and $\mathrm{E}[X_0 - \tilde{\psi}(U_0, U^L)]^2 \le D$ for some linear mapping $\tilde{\psi} : \mathcal{U}_0 \times \mathcal{U}^L \to \mathcal{X}_0\}$,

where $S^c \triangleq \Lambda - S$. Let

  $\pi = \begin{pmatrix} 1 & \cdots & i & \cdots & L \\ \pi(1) & \cdots & \pi(i) & \cdots & \pi(L) \end{pmatrix}$

be an arbitrary permutation on $\Lambda$ and let $\Pi$ be the set of all permutations on $\Lambda$. For $S \subseteq \Lambda$, we set $\pi(S) \triangleq \{\pi(i)\}_{i \in S}$. Define $L$ subsets $S_i$, $i = 1, 2, \cdots, L$ of $\Lambda$ by $S_i \triangleq \{i, i+1, \cdots, L\}$. Set

  $\tilde{\mathcal{R}}_{\pi,L}(D) \triangleq \{(R_0, R_1, \cdots, R_L) :$ there exists a random vector $(U_0, U^L) \in \mathcal{G}(D)$ such that $R_0 \ge I(X_0; U_0 | U^L)$ and $R_{\pi(i)} \ge I(X_{\pi(i)}; U_{\pi(i)} | U_{\pi(S_i^c)})$ for $i = 1, 2, \cdots, L\}$,

  $\tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) \triangleq \mathrm{conv}\left\{\bigcup_{\pi \in \Pi} \tilde{\mathcal{R}}_{\pi,L}(D)\right\}$,

where $\mathrm{conv}\{\mathcal{A}\}$ denotes the convex hull of the set $\mathcal{A}$. Then, we have the following.

Theorem 1 (Oohama [8]): For Gaussian sources with general correlation, $\tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) \subseteq \mathcal{R}_L(D)$. For Gaussian sources with the CI condition the inner bound $\tilde{\mathcal{R}}^{(\mathrm{in})}_L(D)$ is tight, that is, $\tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) = \mathcal{R}_L(D)$.

The above inner bound $\tilde{\mathcal{R}}^{(\mathrm{in})}_L(D)$ can be regarded as a variant of the well-known inner bound of Berger [13] and Tung [14]. Theorem 1 contains the solution that Oohama [6] obtained to the one-helps-one problem for Gaussian sources as a special case. When $R_0 = 0$, the second result of Theorem 1 has some implications for the Gaussian CEO problem studied by Viswanathan and Berger [15] and Oohama [7], and for the source coding problem for multiterminal communication systems with a remote source investigated by Yamamoto and Itoh [16] and Flynn and Gray [17]. The notion of the TS condition for Gaussian sources was first introduced by Oohama [10]. Tavildar et al.
[11] extended the TS condition to a binary Gauss-Markov tree structure condition. They studied a full characterization of the rate distortion region for Gaussian sources with a binary tree structure. In the next section we shall state the results of Tavildar et al. [11] and compare them with our results.

III. RESULTS ON THE RATE DISTORTION REGION

In this section, we state our main results on inner and outer bounds of $\mathcal{R}_L(D)$ in the case where $(X_0, X_1, \cdots, X_L)$ satisfies the TS condition.

A. Definition of Functions and their Properties

In this subsection we define several functions which are necessary to describe our results and present their properties. Let $r_i$, $i \in \Lambda$ be nonnegative numbers. Define the sequence of nonnegative functions $\{f_l(r_l^L)\}_{l=1}^{L-1} \cup \{f_0(r^L)\}$ by the following recursion:

  $f_{L-1}(r_{L-1}^L) = \dfrac{1 - e^{-2r_{L-1}}}{\sigma^2_{N_{L-1}}} + \dfrac{1 - e^{-2r_L}}{\sigma^2_{N_L}}$,
  $f_l(r_l^L) = \dfrac{f_{l+1}(r_{l+1}^L)}{1 + \sigma^2_{Z_{l+1}} f_{l+1}(r_{l+1}^L)} + \dfrac{1 - e^{-2r_l}}{\sigma^2_{N_l}}$, for $L-2 \ge l \ge 1$,
  $f_0(r^L) = \dfrac{f_1(r^L)}{1 + \sigma^2_{Z_1} f_1(r^L)}$.   (5)

Next, we define the sequence of nonnegative functions $\{g_l(D, r_0)\}_{l=0,1} \cup \{g_l(D, r_0, r^{l-1})\}_{l=2}^{L-1}$ by the following recursion:

  $g_0(D, r_0) = \dfrac{e^{-2r_0}}{D} - \dfrac{1}{\sigma^2_{X_0}}$,
  $g_1(D, r_0) = \dfrac{g_0(D, r_0)}{1 - \sigma^2_{Z_1} g_0(D, r_0)}$,
  $g_{l+1}(D, r_0, r^l) = \dfrac{\left[g_l(D, r_0, r^{l-1}) - \frac{1}{\sigma^2_{N_l}}\left(1 - e^{-2r_l}\right)\right]^+}{1 - \sigma^2_{Z_{l+1}} \left[g_l(D, r_0, r^{l-1}) - \frac{1}{\sigma^2_{N_l}}\left(1 - e^{-2r_l}\right)\right]^+}$, for $1 \le l \le L-2$,   (6)

where $[a]^+ = \max\{a, 0\}$. Let $\mathcal{B}_L(D)$ be the set of all nonnegative vectors $r_0^L$ that satisfy

  $f_0(r^L) \ge g_0(D, r_0) = \dfrac{e^{-2r_0}}{D} - \dfrac{1}{\sigma^2_{X_0}}$.
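The recursions (5) and (6) are straightforward to evaluate numerically. The sketch below uses our own 0-based indexing convention (`sigma2_Z[l-1]` stores $\sigma^2_{Z_l}$, and the last entry of `sigma2_N` stores $\sigma^2_{N_L} = \sigma^2_{Z_L}$); the helper names are ours.

```python
import math

def f_family(r, sigma2_Z, sigma2_N):
    """Recursion (5): returns [f_0(r^L), f_1(r_1^L), ..., f_{L-1}(r_{L-1}^L)].
    r = [r_1, ..., r_L]."""
    L = len(r)
    f = [0.0] * L
    f[L - 1] = ((1 - math.exp(-2 * r[L - 2])) / sigma2_N[L - 2]
                + (1 - math.exp(-2 * r[L - 1])) / sigma2_N[L - 1])
    for l in range(L - 2, 0, -1):        # L-2 >= l >= 1
        f[l] = (f[l + 1] / (1 + sigma2_Z[l] * f[l + 1])
                + (1 - math.exp(-2 * r[l - 1])) / sigma2_N[l - 1])
    f[0] = f[1] / (1 + sigma2_Z[0] * f[1])
    return f

def g_family(D, r0, r, sigma2_X0, sigma2_Z, sigma2_N):
    """Recursion (6): returns [g_0, g_1, ..., g_{L-1}], with [a]^+ = max(a, 0)."""
    L = len(r)
    g = [0.0] * L
    g[0] = math.exp(-2 * r0) / D - 1 / sigma2_X0
    g[1] = g[0] / (1 - sigma2_Z[0] * g[0])
    for l in range(1, L - 1):            # 1 <= l <= L-2
        a = max(g[l] - (1 - math.exp(-2 * r[l - 1])) / sigma2_N[l - 1], 0.0)
        g[l + 1] = a / (1 - sigma2_Z[l] * a)
    return g
```

Membership $r_0^L \in \mathcal{B}_L(D)$ then amounts to `f_family(r, ...)[0] >= g_family(D, r0, r, ...)[0]`, and Property 1 below can be checked numerically on examples.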
Let $\partial\mathcal{B}_L(D)$ be the boundary of $\mathcal{B}_L(D)$, that is, the set of all nonnegative vectors $r_0^L$ that satisfy

  $f_0(r^L) = g_0(D, r_0) = \dfrac{e^{-2r_0}}{D} - \dfrac{1}{\sigma^2_{X_0}}$.

We can easily show that the functions we have defined satisfy the following property.

Property 1:
a) For each $i \in \Lambda$, $f_0(r^L)$ is a monotone increasing function of $r_i$. For each $1 \le l \le L$ and for each $i = l, l+1, \cdots, L$, $f_l(r_l^L)$ is a monotone increasing function of $r_i$.
b) For each $2 \le l \le L-1$ and for each $i = 0, 1, \cdots, l-1$, $g_l(D, r_0, r^{l-1})$ is a monotone decreasing function of $r_i$.
c) If $r_0^L \in \mathcal{B}_L(D)$, then, for $0 \le l \le L-1$, $g_l(D, r_0, r^{l-1}) \le f_l(r_l^L)$. In the above $L$ inequalities the equalities simultaneously hold if and only if $r_0^L \in \partial\mathcal{B}_L(D)$.

Define

  $F(r^L) \triangleq \prod_{l=1}^{L-1} \left(1 + \sigma^2_{Z_l} f_l(r_l^L)\right)$,
  $G(D, r_0, r^{L-2}) \triangleq \prod_{l=1}^{L-1} \left(1 + \sigma^2_{Z_l} g_l(D, r_0, r^{l-1})\right)$.

For $S \subseteq \Lambda$, define

  $f_0(r_S) \triangleq f_0(r^L)\big|_{r_{S^c} = 0}$, $\quad F(r_S) \triangleq F(r^L)\big|_{r_{S^c} = 0}$.

We can easily show that the functions $F(r^L)$ and $G(D, r_0, r^{L-2})$ satisfy the following property.

Property 2:
a) For each $i \in S$, $F(r_S)$ is a monotone increasing function of $r_i$.
b) For each $i = 0, 1, \cdots, L-2$, $G(D, r_0, r^{L-2})$ is a monotone decreasing function of $r_i$.
c) If $r_0^L \in \mathcal{B}_L(D)$, then $G(D, r_0, r^{L-2}) \le F(r^L)$. The equality holds if and only if $r_0^L \in \partial\mathcal{B}_L(D)$.

For $D > 0$, $r_i \ge 0$, $i \in \Lambda$ and $S \subseteq \Lambda$, define

  $J_S(D, r_0, r^{L-2}, r_S | r_{S^c}) \triangleq \dfrac{1}{2} \log^+ \left[ \dfrac{G(D, r_0, r^{L-2})}{F(r_{S^c})} \cdot \dfrac{\sigma^2_{X_0} e^{-2r_0}}{\left\{1 + \sigma^2_{X_0} f_0(r_{S^c})\right\} D} \cdot \prod_{i \in S} e^{2r_i} \right]$,

  $K_S(r_S | r_{S^c}) \triangleq \dfrac{1}{2} \log \left[ \dfrac{F(r^L)}{F(r_{S^c})} \cdot \dfrac{1 + \sigma^2_{X_0} f_0(r^L)}{1 + \sigma^2_{X_0} f_0(r_{S^c})} \cdot \prod_{i \in S} e^{2r_i} \right]$.

We can show that for $S \subseteq \Lambda$, $K_S(r_S | r_{S^c})$ and $J_S(D, r_0, r^{L-2}, r_S | r_{S^c})$ satisfy the following two properties.
Property 3:
a) If $r_0^L \in \mathcal{B}_L(D)$, then, for any $S \subseteq \Lambda$, $J_S(D, r_0, r^{L-2}, r_S | r_{S^c}) \le K_S(r_S | r_{S^c})$. The equality holds when $r_0^L \in \partial\mathcal{B}_L(D)$.
b) Suppose that $r^L \in \mathcal{B}_L(D)$. If $r^L\big|_{r_S = 0}$ still belongs to $\mathcal{B}_L(D)$, then $J_S(D, r_0, r^{L-2}, r_S | r_{S^c})\big|_{r_S = 0} = K_S(r_S | r_{S^c})\big|_{r_S = 0} = 0$.

Property 4: Fix $r^L \in \mathcal{B}_L(D)$. For $S \subseteq \Lambda$, set $\rho_S = \rho_S(r_S | r_{S^c}) \triangleq J_S(D, r_0, r^{L-2}, r_S | r_{S^c})$. By definition it is obvious that $\rho_S$, $S \subseteq \Lambda$ are nonnegative. We can show that $\rho \triangleq \{\rho_S\}_{S \subseteq \Lambda}$ satisfies the following:
a) $\rho_\emptyset = 0$.
b) $\rho_A \le \rho_B$ for $A \subseteq B \subseteq \Lambda$.
c) $\rho_A + \rho_B \le \rho_{A \cap B} + \rho_{A \cup B}$.

In general, $(\Lambda, \rho)$ is called a co-polymatroid if the nonnegative function $\rho$ on $2^\Lambda$ satisfies the above three properties. Similarly, if we set $\tilde{\rho}_S = \tilde{\rho}_S(r_S | r_{S^c}) \triangleq K_S(r_S | r_{S^c})$ and $\tilde{\rho} = \{\tilde{\rho}_S\}_{S \subseteq \Lambda}$, then $(\Lambda, \tilde{\rho})$ has the same three properties as those of $(\Lambda, \rho)$ and also becomes a co-polymatroid.

B. Results

In this subsection we present our results on inner and outer bounds of $\mathcal{R}_L(D)$. In the previous work [10], we derived an outer bound of $\mathcal{R}_L(D)$. We denote this outer bound by $\hat{\mathcal{R}}^{(\mathrm{out})}_L(D)$. According to [10], $\hat{\mathcal{R}}^{(\mathrm{out})}_L(D)$ is given by

  $\hat{\mathcal{R}}^{(\mathrm{out})}_L(D) = \{(R_0, R^L) :$ there exists a nonnegative vector $(r_0, r^L)$ such that
  $R_0 \ge r_0 \ge \dfrac{1}{2} \log^+ \left[\dfrac{\sigma^2_{X_0}}{\left\{1 + \sigma^2_{X_0} f_0(r^L)\right\} D}\right]$,
  $R_i \ge r_i$ for any $i \in \Lambda$, and
  $R_0 + \sum_{i \in S} R_i \ge \dfrac{1}{2} \log^+ \left[\dfrac{G(D, r_0, r^{L-2})\, \sigma^2_{X_0}}{F(r_{S^c}) \left\{1 + \sigma^2_{X_0} f_0(r_{S^c})\right\} D}\right] + \sum_{i=1}^L r_i$ for any $S \subseteq \Lambda\}$.

Set

  $\mathcal{R}^{(\mathrm{out})}_L(D, r_0^L) \triangleq \left\{(R_0, R_1, \cdots, R_L) : R_0 \ge r_0,\ \sum_{i \in S} R_i \ge J_S(D, r_0, r^{L-2}, r_S | r_{S^c})\ \text{for any}\ S \subseteq \Lambda\right\}$,

  $\mathcal{R}^{(\mathrm{in})}_L(r_0^L) \triangleq \left\{(R_0, R_1, \cdots, R_L) : R_0 \ge r_0,\ \sum_{i \in S} R_i \ge K_S(r_S | r_{S^c})\ \text{for any}\ S \subseteq \Lambda\right\}$,
  $\mathcal{R}^{(\mathrm{out})}_L(D) \triangleq \bigcup_{r_0^L \in \mathcal{B}_L(D)} \mathcal{R}^{(\mathrm{out})}_L(D, r_0^L)$, $\quad \mathcal{R}^{(\mathrm{in})}_L(D) \triangleq \bigcup_{r_0^L \in \mathcal{B}_L(D)} \mathcal{R}^{(\mathrm{in})}_L(r_0^L)$.

Our main result is as follows.

Theorem 2: For Gaussian sources with the TS condition,

  $\mathcal{R}^{(\mathrm{in})}_L(D) \subseteq \tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) \subseteq \mathcal{R}_L(D) \subseteq \hat{\mathcal{R}}^{(\mathrm{out})}_L(D) \subseteq \mathcal{R}^{(\mathrm{out})}_L(D)$.

The proof of this theorem will be given in Section V. The inclusion $\mathcal{R}_L(D) \subseteq \hat{\mathcal{R}}^{(\mathrm{out})}_L(D)$ and an outline of its proof were given in Oohama [10]. Furthermore, by Theorem 1, we have $\tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) \subseteq \mathcal{R}_L(D)$. Hence, it suffices to show $\hat{\mathcal{R}}^{(\mathrm{out})}_L(D) \subseteq \mathcal{R}^{(\mathrm{out})}_L(D)$ and $\mathcal{R}^{(\mathrm{in})}_L(D) \subseteq \tilde{\mathcal{R}}^{(\mathrm{in})}_L(D)$ to prove Theorem 2. Proofs of those two inclusions will be given in Section V. We can directly prove $\mathcal{R}_L(D) \subseteq \mathcal{R}^{(\mathrm{out})}_L(D)$ in a manner similar to that of Oohama [10]. For the details of the direct proof of $\mathcal{R}_L(D) \subseteq \mathcal{R}^{(\mathrm{out})}_L(D)$, see Appendix B.

An essential difference between $\mathcal{R}^{(\mathrm{out})}_L(D)$ and $\mathcal{R}^{(\mathrm{in})}_L(D)$ is the difference between $J_S(D, r_0, r^{L-2}, r_S | r_{S^c})$ in the definition of $\mathcal{R}^{(\mathrm{out})}_L(D)$ and $K_S(r_S | r_{S^c})$ in the definition of $\mathcal{R}^{(\mathrm{in})}_L(D)$. By Property 3 part a) and the definitions of $\mathcal{R}^{(\mathrm{out})}_L(D, r_0^L)$ and $\mathcal{R}^{(\mathrm{in})}_L(r_0^L)$, if $r_0^L \in \partial\mathcal{B}_L(D)$, then $\mathcal{R}^{(\mathrm{out})}_L(D, r_0^L) = \mathcal{R}^{(\mathrm{in})}_L(r_0^L)$.

[Fig. 4. TS conditions in the case of $L = 2$ and in the case of $L = 3$ and $Z_2 = 0$.]

This gap suggests the possibility that in some cases those two bounds match. In the following we present a sufficient condition for $\mathcal{R}^{(\mathrm{out})}_L(D) \subseteq \mathcal{R}^{(\mathrm{in})}_L(D)$. We consider the following condition on $G(D, r_0, r^{L-2})$.

Condition: For each $l = 1, 2, \cdots, L-2$, $e^{2r_l} G(D, r_0, r^{L-2})$ is a monotone increasing function of $r_l$.

We call the above condition the MI condition.
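Properties a) and b) of the co-polymatroid structure in Property 4 can be spot-checked numerically for $\tilde{\rho}_S = K_S(r_S | r_{S^c})$. The sketch below evaluates $K_S$ under our reading of its definition; the rate vector and variances are hypothetical example values, not taken from the paper.

```python
import math
from itertools import chain, combinations

def f_list(r, s2Z, s2N):
    """Recursion (5): f[l] = f_l(r_l^L), l = 0..L-1 (s2Z[l-1] stores sigma2_{Z_l})."""
    L = len(r)
    f = [0.0] * L
    f[L - 1] = ((1 - math.exp(-2 * r[L - 2])) / s2N[L - 2]
                + (1 - math.exp(-2 * r[L - 1])) / s2N[L - 1])
    for l in range(L - 2, 0, -1):
        f[l] = f[l + 1] / (1 + s2Z[l] * f[l + 1]) + (1 - math.exp(-2 * r[l - 1])) / s2N[l - 1]
    f[0] = f[1] / (1 + s2Z[0] * f[1])
    return f

def K(S, r, s2X0, s2Z, s2N):
    """K_S(r_S | r_{S^c}); note (1/2) log prod_{i in S} e^{2 r_i} = sum_{i in S} r_i."""
    L = len(r)
    rSc = [0.0 if (i + 1) in S else ri for i, ri in enumerate(r)]   # r with r_S = 0
    fr, frc = f_list(r, s2Z, s2N), f_list(rSc, s2Z, s2N)
    F = lambda f: math.prod(1 + s2Z[l - 1] * f[l] for l in range(1, L))
    return (0.5 * math.log(F(fr) / F(frc) * (1 + s2X0 * fr[0]) / (1 + s2X0 * frc[0]))
            + sum(r[i - 1] for i in S))

r, s2X0 = [0.3, 0.7, 0.4], 1.0
s2Z, s2N = [0.1, 0.2, 0.3], [0.5, 0.6, 0.3]       # hypothetical; s2N[-1] = s2Z[-1]
subsets = [set(c) for c in chain.from_iterable(combinations({1, 2, 3}, m) for m in range(4))]
assert all(K(S, r, s2X0, s2Z, s2N) >= -1e-12 for S in subsets)      # nonnegativity
assert all(K(A, r, s2X0, s2Z, s2N) <= K(B, r, s2X0, s2Z, s2N) + 1e-12
           for A in subsets for B in subsets if A <= B)              # monotonicity in S
```

The submodularity-type inequality c) can be checked the same way over all pairs of subsets.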
The following is our main result on a matching condition for the inner and outer bounds.

Lemma 1: For Gaussian sources with the TS condition, if $G(D, r_0, r^{L-2})$ satisfies the MI condition, then $\mathcal{R}^{(\mathrm{out})}_L(D) \subseteq \mathcal{R}^{(\mathrm{in})}_L(D)$.

The proof of this lemma is given in Section V. Note that when $L = 2$, or when $\sigma_{Z_l} = 0$ for $l = 2, 3, \cdots, L-1$ under the TS condition, we have $G(D, r_0, r^{L-2}) = 1 + \sigma^2_{Z_1} g_1(D, r_0)$, which satisfies the MI condition. The TS conditions in the case of $L = 2$ and in the case of $L = 3$, $Z_2 = 0$ are shown in Fig. 4. Note that those two conditions are different from the CI condition. Combining Lemma 1 and Theorem 2, we establish the following.

Theorem 3: For Gaussian sources with the TS condition,

  $\mathcal{R}^{(\mathrm{in})}_2(D) = \mathcal{R}_2(D) = \hat{\mathcal{R}}^{(\mathrm{out})}_2(D) = \mathcal{R}^{(\mathrm{out})}_2(D)$.

Furthermore, if $G(D, r_0, r^{L-2})$ satisfies the MI condition, then

  $\mathcal{R}^{(\mathrm{in})}_L(D) = \mathcal{R}_L(D) = \hat{\mathcal{R}}^{(\mathrm{out})}_L(D) = \mathcal{R}^{(\mathrm{out})}_L(D)$.

In Oohama [10], the equality $\mathcal{R}_2(D) = \hat{\mathcal{R}}^{(\mathrm{out})}_2(D)$ was stated without complete proof. This equality can be obtained from Theorem 2, Lemma 1, and the fact that the MI condition holds for $L = 2$.

Next, we present a sufficient condition for $G(D, r_0, r^{L-2})$ to satisfy the MI condition. Let $\{f^*_j\}_{j=1}^{L-1}$ be a sequence of positive numbers defined by the following recursion:

  $f^*_{L-1} = \dfrac{1}{\sigma^2_{N_{L-1}}} + \dfrac{1}{\sigma^2_{N_L}}$,
  $f^*_l = \dfrac{f^*_{l+1}}{1 + \sigma^2_{Z_{l+1}} f^*_{l+1}} + \dfrac{1}{\sigma^2_{N_l}}$, for $L-2 \ge l \ge 1$.   (7)

By definition it is obvious that $f_l(r_l^L) \le f^*_l$. Then, we have the following proposition.

[Fig. 5. TS condition in the case of $L = 3$.]

Proposition 1: If

  $\sum_{k=l}^{L-2} \dfrac{\sigma^2_{Z_{k+1}}}{\sigma^2_{N_l}} \left(1 + \sigma^2_{Z_{k+1}} f^*_{k+1}\right) \prod_{j=l+1}^{k} \left(1 + \sigma^2_{Z_j} f^*_j\right)^2 \le 1$   (8)

holds for $l = 1, 2, \cdots, L-2$, then $G(D, r_0, r^{L-2})$ satisfies the MI condition.
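The sufficient condition (8) of Proposition 1 is easy to test for concrete variances. The sketch below implements recursion (7) and checks (8) for every $l$; the names and example values are ours. For $L = 3$ with unit noise variances, (8) reduces to $\sigma^2_{Z_2}(1 + 2\sigma^2_{Z_2}) \le 1$, i.e. $\sigma^2_{Z_2} \le 1/2$, and the check reproduces this threshold.

```python
import math

def f_star(sigma2_Z, sigma2_N):
    """Recursion (7): returns {l: f*_l} for l = 1..L-1.
    sigma2_Z[l-1] stores sigma2_{Z_l}; the last entry of sigma2_N is sigma2_{Z_L}."""
    L = len(sigma2_N)
    f = {L - 1: 1 / sigma2_N[L - 2] + 1 / sigma2_N[L - 1]}
    for l in range(L - 2, 0, -1):            # L-2 >= l >= 1
        f[l] = f[l + 1] / (1 + sigma2_Z[l] * f[l + 1]) + 1 / sigma2_N[l - 1]
    return f

def mi_sufficient_condition(sigma2_Z, sigma2_N):
    """Check condition (8) for every l = 1, ..., L-2.
    True means the MI condition holds by Proposition 1."""
    L = len(sigma2_N)
    f = f_star(sigma2_Z, sigma2_N)
    for l in range(1, L - 1):
        total = 0.0
        for k in range(l, L - 1):            # l <= k <= L-2
            prod = 1.0
            for j in range(l + 1, k + 1):
                prod *= (1 + sigma2_Z[j - 1] * f[j]) ** 2
            total += (sigma2_Z[k] / sigma2_N[l - 1]) * (1 + sigma2_Z[k] * f[k + 1]) * prod
        if total > 1.0:
            return False
    return True

# L = 3, unit noise variances: threshold sigma2_{Z_2} <= 1/2.
print(mi_sufficient_condition([0.0, 0.49, 1.0], [1.0, 1.0, 1.0]))  # True
print(mi_sufficient_condition([0.0, 0.51, 1.0], [1.0, 1.0, 1.0]))  # False
```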
The proof of this proposition will be given in Appendix A. It can be seen from this proposition that for $L \ge 3$, the MI condition holds for relatively small values of $\sigma_{Z_l}$, $l = 2, \cdots, L-1$. In particular, when $L = 3$, the sufficient condition given by (8) is

  $\dfrac{\sigma^2_{Z_2}}{\sigma^2_{N_1}} \left[1 + \sigma^2_{Z_2}\left(\dfrac{1}{\sigma^2_{N_2}} + \dfrac{1}{\sigma^2_{N_3}}\right)\right] \le 1$.

Solving the above inequality with respect to $\sigma^2_{Z_2}$, we have

  $\sigma^2_{Z_2} \le \dfrac{2}{1 + \sqrt{1 + 4\sigma^2_{N_1}\left(\frac{1}{\sigma^2_{N_2}} + \frac{1}{\sigma^2_{N_3}}\right)}} \cdot \sigma^2_{N_1}$.

The TS condition in the case of $L = 3$ is shown in Fig. 5.

C. Binary Tree Structure Condition

As a correlation property of Gaussian sources, Tavildar et al. [11] introduced a binary Gauss-Markov tree structure condition. They studied a full characterization of the rate distortion region for Gaussian sources with this binary tree structure. In this subsection we describe their result and compare it with our results. We first explain the binary tree structure introduced by them. Let $k$ be a positive integer. We consider the case where $L = 2^k$. Let $N^{(j)}_i$, $1 \le i \le 2^j$, $1 \le j \le k$, be zero mean independent Gaussian random variables with variance $\sigma^2_{N^{(j)}_i}$. Those $2^{k+1} - 2$ random variables are independent of $X_0$. Define the sequence of Gaussian random variables $\{Y^{(j)}_i\}_{1 \le i \le 2^j,\ 0 \le j \le k}$ by the following recursion:

  $Y^{(0)}_1 = X_0$,
  $Y^{(j)}_i = Y^{(j-1)}_{\lceil i/2 \rceil} + N^{(j)}_i$, for $1 \le i \le 2^j$, $1 \le j \le k$,
  $X_i = Y^{(k)}_i$, for $1 \le i \le 2^k$,   (9)

where $\lceil a \rceil$ stands for the smallest integer not below $a$.

[Fig. 6. BTS condition in the case of $L = 4$.]

We say that for $L = 2^k$ the Gaussian source $(X_0, X_1, \cdots, X_L)$
satisfies the binary tree structure (BTS) condition when it satisfies (9). The binary tree structure in the case of $k = 2$ and $L = 2^k = 4$ is shown in Fig. 6. In this example, let $\sigma_{N^{(2)}_1} \to \infty$ and $N^{(2)}_2 = 0$. Then $X_1$ becomes independent of $(X_2, X_3, X_4)$, and $(X_2, X_3, X_4)$ has the same correlation property as the TS condition in the case of $L = 3$ and $Z_1 = 0$. The BTS condition in this case is shown in Fig. 7. In general, the set of Gaussian sources satisfying the TS condition with $Z_1 = 0$ can be embedded into the set of Gaussian sources satisfying the BTS condition.

[Fig. 7. BTS condition in the case of $L = 4$, $\sigma_{N^{(2)}_1} \to \infty$ and $N^{(2)}_2 = 0$, which is equivalent to the TS condition in the case of $L = 3$ and $Z_1 = 0$.]

The communication system treated by Tavildar et al. is shown in Fig. 8. It can be seen from this figure that their problem setup is slightly different from ours. In their communication system there is no encoder that can directly access the source $X_0$. Tavildar et al. studied a characterization of the rate distortion region $\mathcal{R}_L(D) \cap \{R_0 = 0\}$ for Gaussian sources with the binary tree structure and succeeded in it. Their result is the following.

Theorem 4 (Tavildar et al. [11]): When $L = 2^k$ for some integer $k$ and $(X_0, X_1, \cdots, X_L)$ satisfies the BTS condition, we have

  $\mathcal{R}_L(D) \cap \{R_0 = 0\} = \tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) \cap \{R_0 = 0\}$.

From the above theorem we have the following corollary.

Corollary 1 (Tavildar et al. [11]): When $(X_0, X_1, \cdots, X_L)$ satisfies the TS condition and $Z_1 = 0$, we have

  $\mathcal{R}_L(D) \cap \{R_0 = 0\} = \tilde{\mathcal{R}}^{(\mathrm{in})}_L(D) \cap \{R_0 = 0\}$.

The BTS condition differs from the TS condition in its symmetry property, which plays an essential role in the proof of Theorem 4. We think that the method of Tavildar
et al. [11] is applicable to the general case where $Z_1$ is not constant and $R_0 > 0$, and that $\mathcal{R}_L(D) = \tilde{\mathcal{R}}^{(\mathrm{in})}_L(D)$ still holds in this general case.

[Fig. 8. Communication system treated by Tavildar et al.: only $X_1, \cdots, X_L$ are encoded, by $\varphi_1, \cdots, \varphi_L$; there is no encoder for $X_0$, and the decoder $\psi$ outputs $\hat{X}_0$.]

Unfortunately, our approach developed in [10] and this paper cannot establish $\mathcal{R}_L(D) = \tilde{\mathcal{R}}^{(\mathrm{in})}_L(D)$ for Gaussian sources satisfying the TS condition without requiring the condition on the variances of $Z_i$, $2 \le i \le L-1$ and $N_i$, $1 \le i \le L$, specified by (8) in Proposition 1. However, we think that our work in [10] provided an important step toward the full characterization of the rate distortion region established by Tavildar et al. [11].

IV. SUM RATE PART OF THE RATE DISTORTION REGION

In this section we state our result on the sum rate part of $\mathcal{R}_L(D)$. Set

  $R^{(\mathrm{l})}_{\mathrm{sum},L}(D, R_0) \triangleq \min_{r^L :\, f_0(r^L) \ge g_0(D, R_0)} J_\Lambda(D, R_0, r^{L-2}, r^L)$,
  $R^{(\mathrm{u})}_{\mathrm{sum},L}(D, R_0) \triangleq \min_{r^L :\, f_0(r^L) \ge g_0(D, R_0)} K_\Lambda(r^L)$.

Let $\hat{R}^{(\mathrm{l})}_{\mathrm{sum},L}(D, R_0)$ be the minimum sum rate for $\hat{\mathcal{R}}^{(\mathrm{out})}_L(D)$, that is,

  $\hat{R}^{(\mathrm{l})}_{\mathrm{sum},L}(D, R_0) \triangleq \min_{(R_0, R_1, \cdots, R_L) \in \hat{\mathcal{R}}^{(\mathrm{out})}_L(D)} \sum_{i=1}^L R_i$.

Then, it immediately follows from Theorem 2 that we have the following corollary.

Corollary 2: For Gaussian sources with the TS condition,

  $R^{(\mathrm{l})}_{\mathrm{sum},L}(D, R_0) \le \hat{R}^{(\mathrm{l})}_{\mathrm{sum},L}(D, R_0) \le R_{\mathrm{sum},L}(D, R_0) \le R^{(\mathrm{u})}_{\mathrm{sum},L}(D, R_0)$.

On the other hand, we have the following lemma.

Lemma 2: For Gaussian sources with the TS condition, we have $R^{(\mathrm{l})}_{\mathrm{sum},L}(D, R_0) \ge R^{(\mathrm{u})}_{\mathrm{sum},L}(D, R_0)$.

The proof of this lemma will be given in Section V. Combining Corollary 2 and Lemma 2, we have the following.
Theorem 5: For Gaussian sources with the TS condition,
$$R_{{\rm sum},L}(D, R_0) = R^{({\rm u})}_{{\rm sum},L}(D, R_0) = \hat{R}^{({\rm l})}_{{\rm sum},L}(D, R_0) = R^{({\rm l})}_{{\rm sum},L}(D, R_0) = -R_0 + \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D} + \min_{r^L:\, f_0(r^L) = g_0(D, R_0)} \left[\sum_{l=1}^L r_l + \frac{1}{2}\log F(r^L)\right].$$

In [12], the author further derived an algorithm for computing $R_{{\rm sum},L}(D, R_0)$. This algorithm, however, has the problem that it cannot provide $R_{{\rm sum},L}(D, R_0)$ for all $D \in (0, \sigma^2_{X_0}]$. In fact, the function $R_{{\rm sum},L}(D, R_0)$ is determined only for relatively small values of $D$. In the remaining part of this subsection we present the algorithm given in [12] and concretely explain the above problem.

The algorithm for computing $R_{{\rm sum},L}(D, R_0)$ given by the author [12] is as follows. For $L \ge l \ge 1$, set $\sigma^2_{N_l} = \sigma^2_l$, $\sigma^2_{Z_l} = \epsilon_l \sigma^2_l$. Furthermore, set $\tau_l = \sigma^2_l/\sigma^2_{l-1}$ for $L \ge l \ge 2$. Let $\omega \in [0, 1)$. Define the sequence of functions $\{\theta_l(\omega)\}_{l=1}^L$ by the following backward recursion:
$$\theta_L(\omega) = \omega, \qquad \theta_{L-1}(\omega) = \frac{\frac{2\theta_L(\omega)-1}{\tau_L} + 1}{1 + \epsilon_{L-1}\left[\frac{2\theta_L(\omega)-1}{\tau_L} + 1\right]},$$
$$\theta_{l-1}(\omega) = \frac{\frac{1}{\tau_l}\left[\frac{2\theta_l(\omega) - \left(1 + \frac{\theta_{l+1}(\omega)}{\tau_{l+1}}\right)}{1 + \epsilon_l\left(1 + \frac{\theta_{l+1}(\omega)}{\tau_{l+1}}\right)} + \tau_l\right]}{1 + \epsilon_{l-1} \cdot \frac{1}{\tau_l}\left[\frac{2\theta_l(\omega) - \left(1 + \frac{\theta_{l+1}(\omega)}{\tau_{l+1}}\right)}{1 + \epsilon_l\left(1 + \frac{\theta_{l+1}(\omega)}{\tau_{l+1}}\right)} + \tau_l\right]} \quad \text{for } L-1 \ge l \ge 2. \tag{10}$$

Theorem 6 (Oohama [12]): Let $\{\theta_l(\omega)\}_{l=1}^L$ be the sequence of functions defined by (10). Suppose that the Gaussian source satisfies the TS condition and the condition
$$\tau_l \ge 1 \ \text{ for } l = L, \qquad \tau_l \ge \frac{1}{1+\epsilon_l} \ \text{ for } L-1 \ge l \ge 2.$$
(11)

Then we have the following parametric form of $R_{{\rm sum},L}(D, R_0)$ with the parameter $\omega \in [0, 1)$:
$$D = {\rm e}^{-2R_0}\,\frac{\sigma^2_1 \sigma^2_{X_0}}{\sigma^2_{X_0}\theta_1(\omega) + \sigma^2_1},$$
$$R_{{\rm sum},L}(D, R_0) = -R_0 + \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D} + \left(-\frac{1}{2}\right)\left[\sum_{l=1}^{L-1}\left\{\log\left(1 - \frac{\theta_l(\omega)}{1 - \epsilon_l\theta_l(\omega)} + \frac{\theta_{l+1}(\omega)}{\tau_{l+1}}\right) + \log\left(1 - \epsilon_l\theta_l(\omega)\right)\right\} + \log(1 - \omega)\right].$$

In the following we state a problem with the parametric expression of $(D, R_{{\rm sum},L}(D, R_0))$ in the above theorem. We consider the case of $R_0 = 0$. From (10), we can see that when $\tau_l > 1$ for $L \ge l \ge 1$, $\theta_1(\omega)$ is a strictly positive function of $\omega \in [0, 1)$. This implies that the parametric expression in Theorem 6 cannot provide $R_{{\rm sum},L}(D, 0)$ for all $D \in (0, \sigma^2_{X_0}]$. In this paper we solve this problem, providing $R_{{\rm sum},L}(D, 0)$ for all $D \in (0, \sigma^2_{X_0}]$.

In the following argument we consider the case of $R_0 = 0$. In this case we set $R_{{\rm sum},L}(D) = R_{{\rm sum},L}(D, 0)$. Furthermore, set $g_0(D) = g_0(D, 0)$. The optimal sum rate $R_{{\rm sum},L}(D)$ has the form of an optimization problem; in the remaining part of this section we deal with this optimization problem. We let $\epsilon_L = 0$. Then the recursion (5) is
$$f_L(r^L) = \frac{1}{\sigma^2_L}\left(1 - {\rm e}^{-2r_L}\right),$$
$$f_{l-1}(r^L_{l-1}) = \frac{f_l(r^L_l)}{1 + \epsilon_l\sigma^2_l f_l(r^L_l)} + \frac{1}{\sigma^2_{l-1}}\left(1 - {\rm e}^{-2r_{l-1}}\right) \quad \text{for } L \ge l \ge 2,$$
$$f_0(r^L) = \frac{f_1(r^L)}{1 + \epsilon_1\sigma^2_1 f_1(r^L)}. \tag{12}$$

The optimization problem presenting $R_{{\rm sum},L}(D)$ is
$$R_{{\rm sum},L}(D) = \frac{1}{2}\log^+\frac{\sigma^2_{X_0}}{D} + \min_{r^L:\, f_0(r^L) = g_0(D)}\left[\sum_{l=1}^L r_l + \sum_{l=1}^{L-1}\frac{1}{2}\log\left(1 + \epsilon_l\sigma^2_l f_l(r^L_l)\right)\right].$$
To describe an algorithm for computing $R_{{\rm sum},L}(D)$, for $1 \le l \le L$ define $R^{(l)}_{{\rm sum}}(D)$ by
$$R^{(l)}_{{\rm sum}}(D) = \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D} + \min_{\substack{r^L:\; r_l > 0,\, r^L_{l+1} = 0 \\ f_0(r^L) = g_0(D)}}\left[\sum_{i=1}^l r_i + \sum_{i=1}^{l-1}\frac{1}{2}\log\left(1 + \epsilon_i\sigma^2_i f_i(r^L_i)\right)\right].$$
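The backward recursion (12) is straightforward to evaluate numerically. The following sketch (the helper name `f0_recursion` and the 0-based array convention, with `r[l-1]` playing the role of $r_l$, are ours, not part of the paper) computes $f_0(r^L)$:

```python
import math

def f0_recursion(r, sigma2, eps):
    """Evaluate f_0(r^L) by the backward recursion (12).

    r[l-1] = r_l, sigma2[l-1] = sigma_l^2, eps[l-1] = eps_l.
    Pass eps[L-1] = 0, as assumed for eps_L in (12).
    """
    L = len(r)
    f = (1.0 - math.exp(-2.0 * r[L - 1])) / sigma2[L - 1]        # f_L(r^L)
    for l in range(L, 1, -1):                                    # l = L, ..., 2
        f = f / (1.0 + eps[l - 1] * sigma2[l - 1] * f) \
            + (1.0 - math.exp(-2.0 * r[l - 2])) / sigma2[l - 2]  # f_{l-1}
    return f / (1.0 + eps[0] * sigma2[0] * f)                    # f_0(r^L)
```

When all $\epsilon_l = 0$ and all $\sigma^2_l = \sigma^2$, the recursion collapses to $f_0(r^L) = \frac{1}{\sigma^2}\sum_{l=1}^L (1 - {\rm e}^{-2r_l})$, which gives a quick sanity check of the implementation.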
By the above definition and an elementary computation, we have that for each $1 \le l \le L$, $R = R^{(l)}_{{\rm sum}}(D)$ is a monotone decreasing, convex function of $D > 0$, and that
$$R_{{\rm sum},L}(D) = \min_{1 \le l \le L} R^{(l)}_{{\rm sum}}(D). \tag{13}$$
From (13), we can see that $R_{{\rm sum},L}(D)$ can be obtained by computing $R^{(l)}_{{\rm sum}}(D)$ for $1 \le l \le L$. In the following discussion we propose an algorithm to compute $\{(D, R^{(l)}_{{\rm sum}}(D))\}_{l=1}^L$. To describe the algorithm, for each $1 \le l \le L$, we define the sequence $\theta^{(l)}_{\bullet}(\omega) = \{\theta^{(l)}_i(\omega)\}_{i=1}^l$, which consists of $l$ continuous functions of $\omega$. Concretely, for each $1 \le l \le L$ and $\omega \in (0, (1+\epsilon_l)^{-1})$, define $\theta^{(l)}_{\bullet}(\omega) = \{\theta^{(l)}_i(\omega)\}_{i=1}^l$ by the following backward recursion:
$$\theta^{(l)}_l(\omega) = \omega, \qquad \theta^{(l)}_{l-1}(\omega) = \frac{A_l(\omega)}{1 + \epsilon_{l-1}A_l(\omega)}, \quad A_l(\omega) \triangleq \frac{\theta^{(l)}_l(\omega) + \left\{(1+\epsilon_l)\theta^{(l)}_l(\omega) - 1\right\}\left\{1 - \epsilon_l\theta^{(l)}_l(\omega)\right\}}{\tau_l} + 1, \tag{14}$$
$$\theta^{(l)}_{i-1}(\omega) = \frac{B_i(\omega)}{1 + \epsilon_{i-1}B_i(\omega)}, \quad B_i(\omega) \triangleq \frac{1}{\tau_i}\left[\frac{2\theta^{(l)}_i(\omega) - \left(1 + \frac{\theta^{(l)}_{i+1}(\omega)}{\tau_{i+1}}\right)}{1 + \epsilon_i\left(1 + \frac{\theta^{(l)}_{i+1}(\omega)}{\tau_{i+1}}\right)} + \tau_i\right], \quad l-1 \ge i \ge 2. \tag{15}$$

Our main result is the following.

Theorem 7: Let $\theta^{(l)}_{\bullet}(\omega) = \{\theta^{(l)}_i(\omega)\}_{i=1}^l$ be the sequence of functions defined by (14) and (15). Suppose that the Gaussian source satisfies the TS condition and the following condition:
$$\tau_l = \sigma^2_l/\sigma^2_{l-1} \ge 1, \quad L \ge l \ge 2. \tag{16}$$
Under (16), we have the following parametric form of $(D, R_{{\rm sum},l}(D))$ using $\theta^{(l)}_{\bullet}(\omega)$:
$$D = \frac{\sigma^2_1 \sigma^2_{X_0}}{\sigma^2_{X_0}\theta^{(l)}_1(\omega) + \sigma^2_1},$$
$$R_{{\rm sum},l}(D) = \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D} + \left(-\frac{1}{2}\right)\left[\sum_{i=1}^{l-1}\left\{\log\left(1 - \frac{\theta^{(l)}_i(\omega)}{1 - \epsilon_i\theta^{(l)}_i(\omega)} + \frac{\theta^{(l)}_{i+1}(\omega)}{\tau_{i+1}}\right) + \log\left(1 - \epsilon_i\theta^{(l)}_i(\omega)\right)\right\} + \log\left(1 - \frac{\theta^{(l)}_l(\omega)}{1 - \epsilon_l\theta^{(l)}_l(\omega)}\right)\right].$$
The proof of Theorem 7 is given in Section V.
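The recursion (14)–(15) can be evaluated backward from $\theta^{(l)}_l(\omega) = \omega$. A minimal numerical sketch (the helper name `theta_seq`, the 0-based array convention, and the bracketing of (14)–(15) as read above are ours):

```python
def theta_seq(l, omega, tau, eps):
    """Evaluate theta^{(l)}_i(omega) for i = 1, ..., l via (14)-(15).

    tau[i-1] = tau_i = sigma_i^2 / sigma_{i-1}^2 (tau[0] is unused),
    eps[i-1] = eps_i.  Returns a list th with th[i-1] = theta^{(l)}_i(omega).
    """
    th = [0.0] * (l + 2)                 # th[1..l]; th[l+1] = 0 acts as a sentinel
    th[l] = omega
    if l >= 2:
        # base step (14): theta^{(l)}_{l-1}
        a = (omega + ((1.0 + eps[l - 1]) * omega - 1.0)
             * (1.0 - eps[l - 1] * omega)) / tau[l - 1] + 1.0
        th[l - 1] = a / (1.0 + eps[l - 2] * a)
    for i in range(l - 1, 1, -1):        # general step (15): theta^{(l)}_{i-1}
        c = 1.0 + th[i + 1] / tau[i]     # tau[i] = tau_{i+1} in 0-based indexing
        b = ((2.0 * th[i] - c) / (1.0 + eps[i - 1] * c) + tau[i - 1]) / tau[i - 1]
        th[i - 1] = b / (1.0 + eps[i - 2] * b)
    return th[1:l + 1]
```

With every $\epsilon_i = 0$ and $\tau_i = 1$ the recursion linearizes to (17) below, whose solution $\theta^{(l)}_i(\omega) = (l-i+1)\omega$ serves as a check of the implementation.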
When $\epsilon_l = 0$ for $L \ge l \ge 1$ and $\tau_l = 1$ for $L \ge l \ge 2$, the recursion (14) and (15) defining $\theta^{(L)}_{\bullet}(\omega)$ becomes the following:
$$\theta^{(L)}_L(\omega) = \omega, \qquad \theta^{(L)}_{L-1}(\omega) = 2\omega, \qquad \theta^{(L)}_{l-1}(\omega) = 2\theta^{(L)}_l(\omega) - \theta^{(L)}_{l+1}(\omega) \quad \text{for } L-1 \ge l \ge 2. \tag{17}$$
Solving (17), we obtain $\theta^{(L)}_l(\omega) = (L-l+1)\omega$. The parametric form of $R_{{\rm sum},L}(D)$ becomes
$$\sigma^2_1 g_0(D) = \theta_1(\omega) = L\omega, \qquad R_{{\rm sum},L}(D) = \left(-\frac{L}{2}\right)\log(1 - \omega) + \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D}. \tag{18}$$
From (18), we have
$$R_{{\rm sum},L}(D) = \left(-\frac{L}{2}\right)\log\left(1 - \frac{\sigma^2_1}{L}g_0(D)\right) + \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D}. \tag{19}$$
In particular, by letting $L \to \infty$ in (19), we have
$$\lim_{L\to\infty} R_{{\rm sum},L}(D) = \frac{1}{2}\sigma^2_1 g_0(D) + \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D} = \frac{\sigma^2_1}{2\sigma^2_{X_0}}\left(\frac{\sigma^2_{X_0}}{D} - 1\right) + \frac{1}{2}\log\frac{\sigma^2_{X_0}}{D}.$$
The above formula coincides with the rate distortion function for the quadratic Gaussian CEO problem obtained by Oohama [7]. Hence, our solution to $R_{{\rm sum},L}(D)$ includes the previous result on the Gaussian CEO problem as a special case.

V. PROOFS OF THE RESULTS

In this section we prove Theorem 2 and Lemma 1 stated in Section III and prove Lemma 2 stated in Section IV. Furthermore, on the computation of $R_{{\rm sum},L}(D)$, we prove Theorem 7 stated in Section IV.

A. Derivation of the Outer Bound

In this subsection we prove $\hat{\mathcal{R}}^{({\rm out})}_L(D) \subseteq \mathcal{R}^{({\rm out})}_L(D)$ stated in Theorem 2.

Proof of $\hat{\mathcal{R}}^{({\rm out})}_L(D) \subseteq \mathcal{R}^{({\rm out})}_L(D)$: Set
$$\hat{J}_S(D, r_0, r^{L-2}, r_S | r_{S^c}, R_0) \triangleq \left[\frac{1}{2}\log^+\left(\frac{G(D, r_0, r^{L-2})\,\sigma^2_{X_0}}{F(r_{S^c})\left\{1 + \sigma^2_{X_0} f_0(r_{S^c})\right\} D}\right) + \sum_{i=1}^L r_i - R_0\right]^+.$$
We first observe that
$$\hat{J}_S(D, r_0, r^{L-2}, r_S | r_{S^c}, r_0) = \frac{1}{2}\left[\log^+\left(\frac{G(D, r_0, r^{L-2})\,\sigma^2_{X_0}}{F(r_{S^c})\left\{1 + \sigma^2_{X_0} f_0(r_{S^c})\right\} D}\right) + \sum_{i=1}^L 2r_i - 2r_0\right]^+$$
$$\ge \frac{1}{2}\left[\log\left(\frac{G(D, r_0, r^{L-2})\,\sigma^2_{X_0}}{F(r_{S^c})\left\{1 + \sigma^2_{X_0} f_0(r_{S^c})\right\} D}\right) + \sum_{i=1}^L 2r_i - 2r_0\right]^+ = J_S(D, r_0, r^{L-2}, r_S | r_{S^c}).$$
(20) Then, we have the following. ˆ R (out) L ( D ) (a) ⊆ n ( R 0 , R L ) : Ther e exists a nonn egati ve vector ( r 0 , r L ) such that R 0 ≥ r 0 ≥ 1 2 log +  σ 2 X 0 n 1+ σ 2 X 0 f 0 ( r L ) o D  , X i ∈ S R i ≥ ˆ J S ( D , r 0 , r L − 2 , r S | r S c , R 0 ) for any S ⊆ Λ . o (b) = n ( R 0 , R L ) : There exists a nonnegative vector r L such that R 0 ≥ 1 2 log +  σ 2 X 0 n 1+ σ 2 X 0 f 0 ( r L ) o D  , X i ∈ S R i ≥ ˆ J S ( D , R 0 , r L − 2 , r S | r S c , R 0 ) for any S ⊆ Λ . o . (21) Step ( a) follows from the definition o f ˆ J S ( D , R 0 , r L − 2 , r S | r S c , R 0 ) and the no n negativ e prope r ty of R L . Step (b) fo l- lows from that ˆ J S ( D , r 0 , r L − 2 , r S | r S c , R 0 ) is a mon o tone decreasing function of r 0 . From (21), we contin u e to evaluate 10 outer boun ds of ˆ R (out) L ( D ) to obtain the following: ˆ R (out) L ( D ) ⊆ n ( R 0 , R L ) : There exists a no nnegative vector ( r 0 , r L ) such that R 0 ≥ r 0 ≥ 1 2 log  σ 2 X 0 n 1+ σ 2 X 0 f 0 ( r L ) o D  , X i ∈ S R i ≥ ˆ J S ( D , r 0 , r L − 2 , r S | r S c , r 0 ) for any S ⊆ Λ . o (c) ⊆ n ( R 0 , R L ) : Ther e exists a nonn egati ve vector ( r 0 , r L ) such that R 0 ≥ r 0 ≥ 1 2 log  σ 2 X 0 n 1+ σ 2 X 0 f 0 ( r L ) o D  , X i ∈ S R i ≥ J S ( D , r 0 , r L − 2 , r S | r S c ) for any S ⊆ Λ . o = R (out) L ( D ) . Step (c) follows f r om (20). T hus ˆ R (out) L ( D ) ⊆ R (out) L ( D ) is proved. B. Derivation of the Inn er Bo und In this sub section we pr ove R (in) L ( D ) ⊆ ˜ R (in) L ( D ) stated in Theorem 2. W e fir st derive a pr e lim inary result on a form of R (in) L ( D ) . Fix R 0 ≥ r 0 and set R (in) L ( r L 0 | R 0 ) △ = { ( R 1 , · · · , R L ) : ( R 0 , R 1 , · · · , R L ) ∈ R (in) L ( r L 0 ) } . Let (Λ , ˜ ρ ) , ˜ ρ = { ˜ ρ S ( r S | r S c ) } S ⊆ Λ be a co- polymatr oid defined in Proper ty 4. 
Exp ression o f R (in) L ( r L 0 | R 0 ) using (Λ , ˜ ρ ) is R (in) L ( r L 0 | R 0 ) = { ( R 1 , · · · , R L ) : X i ∈ S R i ≥ ˜ ρ S ( r S | r S c ) for any S ⊆ Λ . } . The set R (in) L ( r L 0 | R 0 ) for ms a kind o f p olytope , wh ich is called a co-polymatr oidal polytop e in the terminolo g y of matroid theory . It is well known a s a pro perty o f this k ind of polytope that the polytope R (in) L ( r L 0 | R 0 ) consists of L ! end-po ints who se comp onents are given by R π ( i ) = ˜ ρ { π ( i ) , ··· ,π ( L ) } ( r { π ( i ) , ··· ,π ( L ) } | r { π (1) , ··· ,π ( i − 1) } ) − ˜ ρ { π ( i +1) , ··· ,π ( L ) } ( r { π ( i +1) , ··· ,π ( L ) } | r { π (1) , ··· ,π ( i ) } ) for i = 1 , 2 , · · · , L − 1 , R π ( L ) = ˜ ρ { π ( L ) } ( r π ( L ) | r { π (1) , ··· ,π ( L − 1) } ) ,              (22) where π =  1 · · · i · · · L π (1) · · · π ( i ) · · · π ( L )  ∈ Π is an arbitrary perm u tation o n Λ . For each π ∈ Π and r L 0 ∈ B L ( D ) , let R (in) π ,L ( r L 0 ) be th e set of nonnegative vecto r s ( R 0 , R 1 , · · · , R L ) satisfying R 0 ≥ r 0 R π ( i ) ≥ ˜ ρ { π ( i ) , ··· ,π ( L ) } ( r { π ( i ) , ··· ,π ( L ) } | r { π (1) , ··· ,π ( i − 1) } ) − ˜ ρ { π ( i +1) , ··· ,π ( L ) } ( r { π ( i +1) , ··· ,π ( L ) } | r { π (1) , ··· ,π ( i ) } ) for i = 1 , 2 , · · · , L − 1 , R π ( L ) ≥ ˜ ρ { π ( L ) } ( r π ( L ) | r { π (1) , ··· ,π ( L − 1) } ) .                    (23) Then, we have R (in) ( r L 0 ) = conv ( [ π ∈ Π R (in) π ,L ( r L 0 ) ) . Pr oof of R (in) L ( D ) ⊆ ˜ R (in) L ( D ) : Fix π ∈ Π and r L 0 ∈ B L ( D ) arb itrary . By (23), it suffi ces to show that for r L 0 ∈ B L ( D ) , R (in) π ,L ( r L 0 ) ⊆ ˜ R (in) π ,L ( D ) to prove R (in) L ( D ) ⊆ ˜ R (in) L ( D ) . L et V i , i ∈ { 0 }∪ Λ b e ind ependen t Gaussian r andom variables with m ean 0 an d variance σ 2 V i . Su p pose th a t V L 0 is indep endent of X L 0 . 
Define the Gaussian rando m variables U i , i ∈ { 0 } ∪ Λ by U i △ = X i + V i , i ∈ { 0 } ∪ Λ . From the above definition it is obvio u s that U L → X L → X 0 → U 0 , U S → X S ∪{ 0 } → X S c → U S c , for a ny S ⊆ Λ .    (24) For given r i ≥ 0 , i ∈ S and D > 0 , set 1 σ 2 V i = e 2 r i − 1 σ 2 N i , when r i > 0 . When r i = 0 , we choo se U i so that U i takes the constant value zero. Defin e the sequence of ra ndom variables { Ω l } L l =0 by Ω L − 1 = 1 − e − 2 r L − 1 σ 2 N L − 1 · U L − 1 + 1 − e − 2 r L σ 2 N L · U L Ω l = 1 1+ σ 2 Z l +1 f l +1 ( r L l +1 ) · Ω l +1 + 1 − e − 2 r l σ 2 N l · U l for L − 2 ≥ l ≥ 1 Ω 0 = 1 1+ σ 2 Z 1 f 1 ( r L ) · Ω 1 .                (25) Note that Ω 0 = Ω 0 ( U L ) is a line ar fu n ction of U L . T hen, by an elementary compu ta tio n, we have X 0 = 1 1 σ 2 X 0 + 1 σ 2 V 0 + f 0 ( r L )  1 σ 2 V 0 · U 0 + Ω 0 ( U L )  + ˜ N 0 , (26) where ˜ N 0 is a zero mean Gaussian rando m variable with variance  1 σ 2 X 0 + 1 σ 2 V 0 + f 0 ( r L )  − 1 . ˜ N 0 is independ ent of ( U 0 , U L ) . Since r L 0 ∈ B L ( D ) , we have e − 2 r 0 D − 1 σ 2 X 0 ≤ f 0 ( r L ) . (27) W e p ut 1 σ 2 V 0 = 1 − e − 2 r 0 D . (28) 11 Then, from (27) and (28), we have  1 σ 2 X 0 + 1 σ 2 V 0 + f 0 ( r L )  − 1 =  1 σ 2 X 0 + 1 − e − 2 r 0 D + f 0 ( r L )  − 1 ≤ D . (29) Based o n (26), (2 8), and (2 9), d efine the linear functio n ˜ ψ of ( U 0 , U L ) by ˜ ψ ( U 0 , U L ) △ =  1 σ 2 X 0 + 1 − e − 2 r 0 D + f 0 ( r L )  − 1 × h 1 − e − 2 r 0 D · U 0 + Ω 0 ( U L ) i . Then, we obtain E h X 0 − ˜ ψ ( U 0 , U L ) i 2 = V ar h ˜ N 0 i =  1 σ 2 X 0 + 1 − e − 2 r 0 D + f 0 ( r L )  − 1 ≤ D . (30) From (24) and (3 0), we have ( U 0 , U L ) ∈ G ( D ) . By simple computatio ns, we can show that r 0 = I ( X 0 ; U 0 | U L ) , r i = I ( X i ; U i | X 0 Y L − 1 ) , for any i ∈ Λ , 1 2 log  F S ( r S ) · { 1 + σ 2 X 0 f 0 ( r S ) }  = I ( X 0 Y L − 1 ; U S ) , for any S ⊆ Λ . 
                 (31) Using (2 4) and (3 1), the L + 1 inequalities of (23) ar e rewritten as R 0 ≥ I ( X 0 ; U 0 | U L ) , R π ( i ) ≥ I ( X 0 Y L − 1 ; U π ( S i ) | U π ( S c i ) ) + I ( X π ( i ) ; U π ( i ) | X 0 Y L − 1 ) − I ( X 0 Y L − 1 ; U π ( S i +1 ) | U π ( S c i +1 ) ) = I ( X 0 Y L − 1 ; U π ( i ) ; | U π ( S c i ) ) + I ( X π ( i ) ; U π ( i ) | X 0 Y L − 1 U π ( S c i ) ) = I ( X 0 Y L − 1 X π ( i ) ; U π ( i ) | U π ( S c i ) ) = I ( X π ( i ) ; U π ( i ) | U π ( S c i ) ) for i = 1 , 2 , · · · , L . Thus, we conclu de that ( R 0 , R π (1) , · · · , R π ( L ) ) ∈ ˜ R (in) π ,L ( D ) . C. Pr oofs of Lemmas 1 an d 2 In this subsectio n we prove Lemmas 1 a nd 2 . W e first present a preliminar y observation on R (out) L ( D ) . Fix R 0 ≥ r 0 arbitrary and set R (out) L ( D , r L 0 | R 0 ) △ = { ( R 1 , · · · , R L ) : ( R 0 , R 1 , · · · , R L ) ∈ R (out) L ( D , r L 0 ) } . Let (Λ , ρ ) , ρ = { ρ S ( r S | r S c ) } S ⊆ Λ be a co- polymatr oid defined in Property 4. Expression of R (out) L ( D 0 , r L 0 | R 0 ) u sing (Λ , ρ ) is R (out) L ( D , r L 0 | R 0 ) = { ( R 1 , · · · , R L ) : X i ∈ S R i ≥ ρ S ( r S | r S c ) for any S ⊆ Λ . } . The set R (out) L ( D , r L 0 | R 0 ) fo rms a co- polymatr oidal poly tope. The p o lytope R (out) L ( D , r L 0 | R 0 ) consists of L ! end-p oints whose compon ents are given by R π ( i ) = ρ { π ( i ) , ··· ,π ( L ) } ( r { π ( i ) , ··· ,π ( L ) } | r { π (1) , ··· ,π ( i − 1) } ) − ρ { π ( i +1) , ··· ,π ( L ) } ( r { π ( i +1) , ··· ,π ( L ) } | r { π (1) , ··· ,π ( i ) } ) for i = 1 , 2 , · · · , L − 1 , R π ( L ) = ρ { π ( L ) } ( r π ( L ) | r { π (1) , ··· ,π ( L − 1) } ) ,              (32) where π =  1 · · · i · · · L π (1) · · · π ( i ) · · · π ( L )  ∈ Π . 
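The $L!$ end-points in (32) are the standard "greedy" corner points of a (co-)polymatroidal polytope: for each permutation $\pi$, successive differences of the rank function along tail sets. A minimal sketch (the helper name `corner_point` is ours, and we simplify the conditional rank function $\rho_S(r_S|r_{S^c})$ to an unconditional set function $\rho$ for illustration):

```python
def corner_point(rho, L, pi):
    """Corner point of {R : sum_{i in S} R_i >= rho(S) for all S}, in the form
    of (32): R_{pi(i)} = rho({pi(i), ..., pi(L)}) - rho({pi(i+1), ..., pi(L)}).

    rho maps a frozenset of indices in {1, ..., L} to a real; rho(empty) = 0.
    pi is a tuple (pi(1), ..., pi(L)).
    """
    R = {}
    for i in range(L):
        tail = frozenset(pi[i:])          # {pi(i+1), ..., pi(L)} joined with pi(i+1-1)
        tail_next = frozenset(pi[i + 1:])
        R[pi[i]] = rho(tail) - rho(tail_next)
    return R
```

By construction the components of every corner point telescope, so $\sum_i R_i = \rho(\Lambda)$ for every permutation $\pi$; for a modular $\rho$ all $L!$ corner points coincide.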
For each π ∈ Π an d l = 1 , 2 , · · · , L , set B π ,l ( D ) △ = { r L 0 : r L 0 ∈ B L ( D ) and r π ( i ) = 0 for i = l + 1 , · · · , L } , ∂ B π ,l ( D ) △ = { r L 0 : r L 0 ∈ ∂ B L ( D ) and r π ( i ) = 0 for i = l + 1 , · · · , L } . In particular, when π is the identity map, we omit π to write B l ( D ) and ∂ B l ( D ) . By Property 3, when r L 0 ∈ B π ,l ( D ) , the end-po int given by (32) becom es R π ( i ) = ρ { π ( i ) , ··· ,π ( l ) } ( r { π ( i ) , ··· ,π ( l ) } | r { π (1) , ··· ,π ( i − 1) } ) − ρ { π ( i +1) , ··· ,π ( l ) } ( r { π ( i +1) , ··· ,π ( l ) } | r { π (1) , ··· ,π ( i ) } ) for i = 1 , 2 , · · · , l − 1 , R π ( l ) = ρ { π ( l ) } ( r π ( l ) | r { π (1) , ··· ,π ( l − 1) } ) , R π ( i ) = 0 , f or i = l + 1 , · · · , L .                    (33) Next, we p resent a lemma o n a proper ty of G ( D , r 0 , r L − 1 ) . Lemma 3: For r L 0 ∈ B l ( D ) , G ( D, r 0 , r L − 2 ) is compu ted as G ( D, r 0 , r L − 2 )   r L l +1 = 0 = l Y k =1  1 + σ 2 Z k g k ( D , r 0 , r k − 1 )  . Pr oof: By Property 1 part c), for l + 1 ≤ k ≤ L 0 ≤ g k ( D , r 0 , r k − 1 ) ≤ f ( r L k ) = 0 . Hence, the result of Lemm a 3 follows. Pr oof of Lemma 1: Fix π ∈ Π an d r L 0 ∈ B L ( D ) arbitrary . Let ( R 0 , R L ) be a n onnegative rate vector such that R 0 ≥ r 0 and L comp onents of R L satisfy (32). T o prove Lemm a 1, it suffices to show that this nonnegative vecto r belong s to R (in) L ( D ) . For l = 1 , 2 , · · · , L , we prove th e c laim that under the MI condition, if r L 0 ∈ B π ,l ( D ) , then, the rate vector ( R 0 , R L ) satisfying R 0 ≥ r 0 and ( 33) belong s to R (in) L ( D ) . W e pr ove this claim by indu ction with respect to l . When l = 1 , fro m (33), we have R π (1) = ρ { π (1) } ( r π (1) ) , R π ( i ) = 0 , f or i = 2 , · · · , L . 
) (34) 12 The function ρ { π (1) } ( r π (1) ) is computed as ρ { π (1) } ( r π (1) ) = J { π (1) } ( D , r 0 , r L − 2 , r π (1) | r { π (1) } c )   r { π (1) } c = 0 = 1 2 log + " G ( D , r 0 ,r L − 2 ) | r { π (1) } c = 0 σ 2 X 0 e − 2 r 0 e 2 r π (1) D # . (35) By the above form of ρ { π (1) } ( r π (1) ) and σ 2 X 0 e − 2 r 0 D ≥ σ 2 X 0 e − 2 R 0 D > 1 , ρ { π (1) } ( r π (1) ) is positiv e. Sinc e r L 0 ∈ B π ,l ( D ) , we can decrease r π (1) keeping r L 0 ∈ B π , 1 ( D ) so that it a r riv es at r ∗ π (1) = 0 or a positive r ∗ π (1) satisfying ( r 0 , r ∗ π (1) , r { π (1) } c ) = ( r 0 , r ∗ π (1) , 0 , · · · , 0 | {z } L − 1 ) ∈ ∂ B π , 1 ( D ) . (36) Let ( R 0 , R ∗ π (1) , · · · , R ∗ π ( L ) ) be a rate vector co rrespon ding to ( r 0 , r ∗ π (1) , r { π (1) } c ) . If r ∗ π (1) = 0 , then by Property 3 p art b), ρ { π (1) } ( r π (1) ) must b e zero . This contrad icts the fact that ρ { π (1) } ( r π (1) ) is po siti ve. Theref ore, r ∗ π (1) must be positi ve. Then, from (36), we have ( R 0 , R ∗ π (1) , · · · , R ∗ π ( L ) ) = ( R 0 , R ∗ π (1) , 0 , · · · , 0 | {z } L − 1 ) ∈ R (in) L ( D ) . On the other han d , by Lem ma 3, we have G ( D, r 0 , r L − 2 )   r { π (1) } c = 0 = G ( D , r 0 , r L − 2 )   r L π (1)+1 = 0 ,r π (1) − 1 = 0 = π (1) Y k =1  1 + σ 2 Z l g k ( D , r 0 , r k − 1 )        r π (1) − 1 = 0 . (37) From (35) and (37), we can see that G ( D, r 0 , r L − 2 )   r { π (1) } c = 0 does not d epend on r π (1) . Th is implies that ρ { π (1) } ( r π (1) ) is a monoto ne in creasing function of r π (1) . Then, we h av e R π (1) ≥ R ∗ π (1) . Hence, we have ( R 0 , R π (1) , · · · , R π ( L ) ) = ( R 0 , R π (1) , 0 , · · · , 0 | {z } L − 1 ) ∈ R (in) L ( D ) . Thus, the cla im holds for l = 1 . W e assume that the claim holds fo r l − 1 . 
Since f 0 ( r L 0 ) is a mono tone incr easing function of r π ( l ) on B π ,l ( D ) , we ca n decrease r π ( l ) keeping r L 0 ∈ B π ,l ( D ) so that it arrives at r ∗ π ( l ) = 0 or a positiv e r ∗ π ( l ) satisfying ( r 0 , r ∗ π ( l ) , r { π ( l ) } c ) ∈ ∂ B π ,l ( D ) . (38) Let ( R 0 , R ∗ π (1) , · · · , R ∗ π ( L ) ) be a rate vector co rrespon ding to ( r 0 , r ∗ π ( l ) , r { π ( l ) } c ) . By Property 4 part b) an d the MI condition , the l functions ρ { π ( i ) , ··· ,π ( l ) } ( r { π ( i ) , ··· ,π ( l ) } | r { π (1) , ··· ,π ( i − 1) } ) − ρ { π ( i +1) , ··· ,π ( l ) } ( r { π ( i +1) , ··· ,π ( l ) } | r { π (1) , ··· ,π ( i ) } ) for i = 1 , 2 , · · · , l − 1 , ρ { π ( l ) } ( r π ( l ) | r { π (1) , ··· ,π ( l − 1) } ) , appearin g in the right members o f (33) are monotone increas- ing function s of r π ( l ) . Then, from (33), we have R π ( i ) ≥ R ∗ π ( i ) for i = 1 , 2 , · · · , l , R π ( i ) = R ∗ π ( i ) = 0 for i = l + 1 , · · · , L . ) (39) When r ∗ π ( l ) = 0 , we have ( r 0 , r ∗ π ( l ) , r { π ( l ) } c ) ∈ B π ,l − 1 ( D ) . Then, by induction hypo thesis, we have ( R 0 , R ∗ π (1) , · · · , R ∗ π ( L ) ) ∈ R (in) L ( D ) . When r ∗ π ( l ) > 0 , from (38), we have ( R 0 , R ∗ π (1) , · · · , R ∗ π ( L ) ) ∈ R (in) L ( D ) . Hence, by (39) , we have ( R 0 , R π (1) , · · · , R π ( L ) ) = ( R 0 , R π (1) , · · · , R π ( l ) , 0 , · · · , 0 | {z } L − l ) ∈ R (in) L ( D ) . Thus, the claim holds for l . This comp letes the proof of Lemma 1. Pr oof o f Lemma 2: For R 0 > 0 and for 1 ≤ l ≤ L , set B l ( D | R 0 ) △ = { r l : ( R 0 , r L ) ∈ B l ( D ) } , ∂ B l ( D | R 0 ) △ = { r l : ( R 0 , r L ) ∈ ∂ B l ( D ) } . W e first observe that R (l) sum ,L ( D , R 0 ) = min 1 ≤ l ≤ L " min r l ∈B l ( D | R 0 ) J Λ ( D , R 0 , r L − 2 , r L )   r L l +1 = 0 # , R (u) sum ,L ( D , R 0 ) = min 1 ≤ l ≤ L  min r l ∈ ∂ B l ( D | R 0 ) K Λ ( r l )  . W e co mpute J Λ ( D , R 0 , r L − 2 , r L )   r L l +1 = 0 . 
By Lemma 3, for r l ∈ B l ( D | R 0 ) G ( D, R 0 , r L − 2 )   r L l +1 = 0 = l Y k =1  1 + σ 2 Z l g k ( D , R 0 , r k − 1 )  . From the above fo rmula, we can see that f or r l ∈ B l ( D | R 0 ) , G ( D, R 0 , r L − 2 ) | r L l +1 = 0 is a fu nction of r l − 1 . W e deno te this function by G ( D , R 0 , r l − 1 ) , that is, G ( D, R 0 , r l − 1 ) △ = l Y k =1  1 + σ 2 Z l g k ( D , R 0 , r k − 1 )  . Then, for r l ∈ B l ( D | R 0 ) , J Λ ( D , R 0 , r L − 2 , r L )   r L l +1 = 0 = 1 2 log + " G ( D, R 0 , r l − 1 ) · σ 2 X 0 D e − 2 R 0 l Y i =1 e 2 r i # . (40) W e denote the right member of (4 0) by J Λ ( D , R 0 , r l − 1 , r l ) . Using this function , R (l) sum ,L ( D , R 0 ) can be written as R (l) sum ,L ( D , R 0 ) = min 1 ≤ l ≤ L " min r l ∈B l ( D | R 0 ) J Λ ( D , R 0 , r l − 1 , r l ) # . 13 Note here that J Λ ( D , R 0 , r l − 1 , r l ) is a monotone increasing function of r l . T o prove R (l) sum ,L ( D , R 0 ) ≥ R (u) sum ,L ( D , R 0 ) , it suffices to show that f or 1 ≤ l ≤ L , min r l ∈B l ( D | R 0 ) J Λ ( D , R 0 , r l − 1 , r l ) ≥ min r l ∈ ∂ B l ( D | R 0 ) K Λ ( r l ) . W e pr ove this claim by indu ction with respect to l . When l = 1 , the f unction J Λ ( D , R 0 , r 1 ) is comp uted as J Λ ( D , R 0 , r 1 ) = 1 2 log +  { 1+ σ 2 Z 1 g 1 ( D, R 0 ) } σ 2 X 0 e − 2 R 0 e 2 r 1 D  = 1 2 log +  σ 2 X 0 e − 2 R 0 e 2 r 1 n 1 − σ 2 Z 1 g 0 ( D, R 0 ) o D  . Since σ 2 X 0 e − 2 R 0 D > 1 , J Λ ( D , R 0 , r 1 ) is positiv e. Since J Λ ( D , R 0 , r 1 ) is a monotone in c reasing function of r 1 , the minimum of this fun ction is attained by r ∗ 1 = 0 or a positive r ∗ 1 satisfying r ∗ 1 ∈ ∂ B 1 ( D | R 0 ) . If r ∗ 1 = 0 , then, by Property 3 part b), J Λ ( D , R 0 , r 1 ) must be zero. This co ntradicts th at J Λ ( D , R 0 , r 1 ) is p ositi ve. There f ore, r ∗ 1 must be po siti ve. 
Then, by r ∗ 1 ∈ ∂ B 1 ( D | R 0 ) , we have J Λ ( D , R 0 , r 1 ) ≥ J Λ ( D , R 0 , r ∗ 1 ) = K Λ ( r ∗ 1 ) ≥ min r 1 ∈ ∂ B 1 ( D | R 0 ) K Λ ( r 1 ) . Thus, the cla im holds for l = 1 . W e assume that the claim holds f or l − 1 . Since J Λ ( D , R 0 , r l − 1 , r l ) is a monoto ne increasing functio n of r l , the min im um of this function is attained by r ∗ l = 0 or a positi ve r ∗ l satisfying ( r l − 1 , r ∗ l ) ∈ ∂ B l ( D | R 0 ) . Whe n r ∗ l = 0 , we have r l − 1 ∈ B l − 1 ( D | R 0 ) and J Λ ( D , R 0 , r l − 1 , r l ) ≥ J Λ ( D , R 0 , r l − 1 , r l − 1 r ∗ l ) . (41) Computing J Λ ( D , R 0 , r l − 1 , r l − 1 r ∗ l ) , we obtain J Λ ( D , R 0 , r l − 1 , r l − 1 r ∗ l ) = J Λ ( D , R 0 , r L − 2 , r L )   r L l = 0 = 1 2 log + " G ( D, R 0 , r l − 2 ) · σ 2 X 0 D e − 2 R 0 l − 1 Y i =1 e 2 r i # = J Λ ( D , R 0 , r l − 2 , r l − 1 ) . (42) Combining (41) and (42), we have J Λ ( D , R 0 , r l − 1 , r l ) ≥ J Λ ( D , R 0 , r l − 2 , r l − 1 ) . (43) On the other han d , by ind uction hypoth esis, we have J Λ ( D , R 0 , r l − 2 , r l − 1 ) ≥ min r l − 1 ∈ ∂ B l − 1 ( D | R 0 ) K Λ ( r l − 1 ) . (44) Combining (43) and (44), we have J Λ ( D , R 0 , r l − 1 , r l ) ≥ min r l − 1 ∈ ∂ B l − 1 ( D | R 0 ) K Λ ( r l − 1 ) ≥ min r l ∈ ∂ B l ( D | R 0 ) K Λ ( r l ) . When r ∗ l > 0 , we have J Λ ( D , R 0 , r l − 1 , r l ) ≥ J Λ ( D , R 0 , r l − 1 , r l − 1 r ∗ l ) = K Λ ( r l − 1 r ∗ l ) ≥ min r l ∈ ∂ B l ( D | R 0 ) K Λ ( r l ) , where the seco nd equality f ollows from ( r l − 1 , r ∗ l ) ∈ ∂ B l ( D | R 0 ) . Thus, the claim holds fo r l , completin g the pro of. D. Computation of { R ( l ) sum ( D ) } L l =1 When r L l +1 = 0 , by (12), we can prove th e following: f i ( r L i ) = 0 , l + 1 ≤ i ≤ L, f ( r L l ) = 1 σ 2 l (1 − e − 2 r l ) , f i − 1 ( r L i − 1 ) = f i ( r L i ) 1+ ǫ i σ 2 i f ( r L i ) + 1 σ 2 i − 1  1 − e − 2 r i − 1  , for l ≥ i ≥ 2 , f 0 ( r L ) = f 1 ( r L ) 1+ ǫ 1 σ 2 1 f 1 ( r L ) . 
               (45) Define the sequence { f i ( r l i ) } l i =1 of l fun ctions and the func- tion f 0 ( r l ) by f i ( r l i ) , f i ( r l i , r L l +1 ) = f i ( r L i ) | r L l +1 =0 , f o r l ≥ i ≥ 1 f 0 ( r l ) , f 0 ( r L l ) | r L l +1 =0 . Then, by (45) a n d the de fin itions of f i ( r l i ) , l ≥ i ≥ 1 and f 0 ( r l ) , we have f l ( r l ) = 1 σ 2 l  1 − e − 2 r l  , f i − 1 ( r l i − 1 ) = f i ( r l i ) 1+ ǫ i σ 2 i f i ( r l i ) + 1 σ 2 i − 1  1 − e − 2 r i − 1  for l ≥ i ≥ 2 , f 0 ( r l ) = f 1 ( r l ) 1+ ǫ 1 σ 2 1 f 1 ( r l ) .            (46) W e de fine the transform ation of the vector r l into th e vector α l by α i = σ 2 i f i ( r l i ) 1 + ǫ i σ 2 i f i ( r l i ) , l ≥ i ≥ 1 . ( 47) From (47), we have f i = f i ( r l i ) = 1 σ 2 i · α i 1 − ǫ i α i , f o r l ≥ i ≥ 1 . (48) Note that fo r l ≥ i ≥ 1 , f i ≥ 0 . From (48), α i , l ≥ i ≥ 1 must satisfy 0 ≤ α i < ǫ − 1 i . For l ≥ i ≥ 2 , set τ i △ = σ 2 i /σ 2 i − 1 . Considering (46) and (48), we have e − 2 r l = 1 − α l 1 − ǫ l α l , (49) e − 2 r i − 1 = 1 − α i − 1 1 − ǫ i α i − 1 + α i τ i , l ≥ i ≥ 2 . (50) Since r l ≥ 0 and (49), α l must be 0 < α l < (1 + ǫ l ) − 1 . (51) Furthermo re, since r i − 1 ≥ 0 for l − 1 ≥ i ≥ 2 and (50), α i , l ≥ i ≥ 2 must satisfy th e f ollowing: 0 ≤ α i ≤ τ i α i − 1 1 − ǫ i − 1 α i − 1 , τ i  α i − 1 1 − ǫ i − 1 α i − 1 − 1  < α i < ǫ − 1 i .    (52) W e next express the objec tive function in the optimization problem defining R ( l ) sum ( D ) using α l . Set ζ ( l ) = ζ ( l ) ( α l 2 ) △ =  − 1 2  l − 1 X i =1  log  1 − α i 1 − ǫ i α i + α i +1 τ i +1  + lo g (1 − ǫ i α i )  + log  1 − α l 1 − ǫ l α l  . 14 Then we have L X l =1 r l + L − 1 X l =1 1 2 log  1 + ǫ l σ 2 l f l ( r L l )  = ζ ( l ) ( α l 2 ) . Let A l be a domain of th e objective function in the op timiza- tion problem defin ing R ( l ) sum ( D ) . 
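The change of variables (47)–(50) between $r^l$ and $\alpha^l$ can be checked numerically. The sketch below (helper names ours; 0-based arrays with index $i-1 \leftrightarrow i$) computes the forward map through (46)–(47) and inverts it through (49)–(50); the two should round-trip exactly:

```python
import math

def r_to_alpha(r, sigma2, eps):
    """Forward map (46)-(47): r^l -> alpha^l."""
    l = len(r)
    fs = [0.0] * l
    f = (1.0 - math.exp(-2.0 * r[l - 1])) / sigma2[l - 1]        # f_l(r^l)
    fs[l - 1] = f
    for i in range(l, 1, -1):                                    # i = l, ..., 2
        f = f / (1.0 + eps[i - 1] * sigma2[i - 1] * f) \
            + (1.0 - math.exp(-2.0 * r[i - 2])) / sigma2[i - 2]  # f_{i-1}
        fs[i - 2] = f
    return [sigma2[i] * fs[i] / (1.0 + eps[i] * sigma2[i] * fs[i]) for i in range(l)]

def alpha_to_r(alpha, sigma2, eps):
    """Inverse map (49)-(50): alpha^l -> r^l."""
    l = len(alpha)
    r = [0.0] * l
    r[l - 1] = -0.5 * math.log(1.0 - alpha[l - 1] / (1.0 - eps[l - 1] * alpha[l - 1]))
    for i in range(l, 1, -1):                                    # recover r_{i-1}
        tau_i = sigma2[i - 1] / sigma2[i - 2]
        e = 1.0 - alpha[i - 2] / (1.0 - eps[i - 2] * alpha[i - 2]) + alpha[i - 1] / tau_i
        r[i - 2] = -0.5 * math.log(e)
    return r
```

The round trip is exact in exact arithmetic because $\sigma^2_i f_i = \alpha_i/(1-\epsilon_i\alpha_i)$ inverts (47) term by term.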
Considering a form of the objective function and (52), th e domain A l is a set of α l such that for l − 1 ≥ i ≥ 2 , α l satisfies (52) and 0 ≤ α l ≤ τ l α l − 1 1 − ǫ l − 1 α l − 1 , τ l  α l − 1 1 − ǫ l − 1 α l − 1 − 1  < α l < (1 + ǫ l ) − 1 .    (53) Summarizin g th e above argumen ts, we can obtain an expres- sion of R ( l ) sum ( D ) using α l . This expression is gi ven b y R ( l ) sum ( D ) = 1 2 log σ 2 X 0 D + min α l 2 ∈A l ( α 1 ) , α 1 = σ 2 1 g 0 ( D ) ζ ( l ) ( α l 2 ) . Here we set A l ( α 1 ) △ = { α l 2 : α l = ( α 1 , α l 2 ) ∈ A l } . Then we have the following lem m a. Lemma 4: For α l 2 ∈ A l ( α 1 ) , ( − 2) ζ l ( α l 2 ) α l 2 is strictly concave with respect to α l 2 Proof of this lemma will be given in Appen dix C. The following lemma is a key result to establish recursiv e algorithm s of computin g R ( l ) sum ( D ) for 1 ≤ l ≤ L . Lemma 5: W e assume that τ l = σ 2 l /σ 2 l − 1 ≥ 1 , L ≥ l ≥ 2 . (54) Under this assumption , the sequ ence θ ( l ) • = { θ ( l ) i ( ω ) } l i =1 of l contin uous function s defined by (14) and ( 15) satisfies the following three pro perties: a) W e have 0 ≤ θ ( l ) l ( ω ) ≤ τ l θ ( l ) l − 1 ( ω ) 1 − ǫ l − 1 θ ( l ) l − 1 ( ω ) , τ l  θ ( l ) l − 1 ( ω ) 1 − ǫ l − 1 θ ( l ) l − 1 ( ω ) − 1  < θ ( l ) l ( ω ) < (1 + ǫ l ) − 1        . (55) Furthermo re, for l − 1 ≥ i ≥ 2 , we h av e 0 ≤ θ ( l ) i ( ω ) ≤ τ i θ ( l ) i − 1 ( ω ) 1 − ǫ i − 1 θ ( l ) i − 1 ( ω ) , τ i  θ ( l ) i − 1 ( ω ) 1 − ǫ i − 1 θ ( l ) i − 1 ( ω ) − 1  < θ ( l ) i ( ω ) < ǫ − 1 i .        (56) The con ditions (55) and (5 6) imply ( θ ( l ) i ) l i =2 ( ω ) ∈ A l ( θ ( l ) 1 ( ω )) . b) ∇ ζ l | α l 2 =( θ ( l ) i ( ω )) l i =2 = 0 . c) For each l − 1 ≥ i ≥ 1 , θ ( l ) i ( ω ) is differentiab le with respect to ω ∈ [0 , (1 + ǫ l ) − 1 ) and satisfies the f o llowing: d θ ( l ) i d ω ≥ ( l − i + 1) · σ 2 i σ 2 l × l Y j = i +1 1 ( 1+ ǫ j − 1 θ ( l ) j ( ω ) τ j +1 !) 2 > 0 . 
This implies that for each $l \ge i \ge 1$, the mapping $\omega \in [0, 1) \mapsto \theta^{(l)}_i(\omega)$ is an injection. The proof of this lemma is given in Appendix D. From this lemma, we immediately obtain Theorem 7.

VI. CONCLUSIONS

We have considered the Gaussian many-help-one problem and given a partial solution to this problem by deriving an explicit outer bound of the rate distortion region for the case where information sources satisfy the TS condition. Furthermore, we established a sufficient condition under which this outer bound is tight. We have determined the sum rate part of the rate distortion region for the case where information sources satisfy the TS condition.

For the case where information sources do not satisfy the TS condition, we cannot derive an outer bound having a form similar to $\mathcal{R}^{({\rm out})}(D)$, since the proof of the converse coding theorem depends heavily on this property of information sources. Hence the complete solution is still lacking for Gaussian information sources with general correlation.

APPENDIX

A. Proof of Proposition 1

In this appendix we prove Proposition 1. To prove this proposition we give some preparations. For $0 \le l \le L-2$, we set
$$\eta_l = \eta_l(D, r_0, r^l) \triangleq \begin{cases} g_0(D, r_0), & \text{for } l = 0, \\ g_l(D, r_0, r^{l-1}) - \dfrac{1}{\sigma^2_{N_l}}\left(1 - {\rm e}^{-2r_l}\right), & \text{for } 1 \le l \le L-2. \end{cases}$$
For $1 \le l \le L-2$ and $a < \frac{1}{\sigma^2_{Z_l}}$, define
$$\tau_l(a) \triangleq \frac{[a]^+}{1 - \sigma^2_{Z_l}[a]^+} - \frac{1}{\sigma^2_{N_l}}\left(1 - {\rm e}^{-2r_l}\right).$$
Then $\{\eta_l\}_{l=0}^{L-2}$ satisfies the following:
$$\eta_l(D, r_0, r^l) = \tau_l\bigl(\eta_{l-1}(D, r_0, r^{l-1})\bigr) \quad \text{for } 1 \le l \le L-2. \tag{57}$$
Fix $a < \frac{1}{\sigma^2_{Z_{k+1}}}$ and set
$$p_k(a) \triangleq \sup\left\{p : \log\frac{1 - \sigma^2_{Z_{k+1}}[a]^+}{1 - \sigma^2_{Z_{k+1}}[b]^+} \ge p(b - a) \ \text{for any } b < \frac{1}{\sigma^2_{Z_{k+1}}}\right\}.$$
By a simple computation we have
$$p_k(a) = \begin{cases} \dfrac{\sigma^2_{Z_{k+1}}}{1 - \sigma^2_{Z_{k+1}}a}, & \text{for } 0 \le a < \frac{1}{\sigma^2_{Z_{k+1}}}, \\[2mm] 0, & \text{for } a < 0, \end{cases}$$
so that $0 \le p_k(a) \le \frac{\sigma^2_{Z_{k+1}}}{1 - \sigma^2_{Z_{k+1}}a}$ for $a < \frac{1}{\sigma^2_{Z_{k+1}}}$.
(58) 15 Fix a < 1 σ 2 Z j and set q j ( a ) △ = sup  q : τ j ( b ) − τ j ( a ) ≥ q ( b − a ) for any b < 1 σ 2 Z j  . By a simple compu tation we have q j ( a ) = ( 1 (1 − σ 2 Z j a ) 2 , for 0 ≤ a < 1 σ 2 Z j , 0 , for a < 0 ≤ 1 (1 − σ 2 Z j a ) 2 , f o r a < 1 σ 2 Z j . (59) Pr oof o f P r oposition 1: Let L be a s et o f integers l such that η l ( D , r 0 , r l ) is positive fo r some r L 0 ∈ B L ( D ) . Fro m (5 7), there exists a uniq u e integer 1 ≤ L ∗ ≤ L − 2 such that L = { 0 , 1 , · · · , L ∗ } . Using { η l } L − 2 l =1 and L ∗ , log G ( D , r 0 , r L − 2 ) can be rewritten as log G ( D, r 0 , r L − 2 ) = L − 1 X k =1 log  1 + σ 2 Z k g l ( D , r 0 , r k − 1 )  = L − 1 X k =1 log  1 1 − σ 2 Z k [ η k − 1 ( D, r 0 ,r k − 1 )] +  = L − 2 X k =0 log  1 1 − σ 2 Z k +1 [ η k ( D, r 0 ,r k )] +  = L ∗ X k =0 log  1 1 − σ 2 Z k +1 [ η k ( D, r 0 ,r k )] +  . ( 60) Fix n onnegative vector r L . For each s l ≥ r l , 1 ≤ l ≤ L − 2 , let G ( s l ) b e a functio n obtained by rep lacing r l in G ( D , r 0 , r L − 2 ) with s l , that is G ( s l ) △ = G ( D , r 0 , r l − 1 , s l , r L − 2 l +1 ) . It is obvious that when s l = r l , G ( r l ) = G ( D , r 0 , r l − 1 , r l , r L − 2 l +1 ) = G ( D , r 0 , r L − 2 ) . By Prop erty 2 part b ), we h av e G ( s l ) ≤ G ( r l ) fo r 1 ≤ l ≤ L − 2 . For ea c h s k ≥ r k , l ≤ k ≤ L − 2 , let η k ( s l ) b e a function obtained by replacing r l in η k ( D , r 0 , r k ) with s l , th at is η k ( s l ) △ = η k ( D , r 0 , r l − 1 , s l , r k l +1 ) . It is obvious that when s l = r l , η k ( r l ) = η k ( D , r 0 , r l − 1 , r l , r k l +1 ) = η k ( D , r 0 , r k ) . By Property 1 part b), we have η k ( s l ) ≤ η k ( r l ) f or l ≤ k ≤ L − 2 . For eac h l = 1 , · · · , L ∗ , we e valuate an upp er bou nd of log G ( s l ) − log G ( r l ) . 
Using (60), we have
$$\log\frac{G(s_l)}{G(r_l)} = \sum_{k=0}^{L^*}\log\frac{1 - \sigma_{Z_{k+1}}^2[\eta_k(r_l)]^+}{1 - \sigma_{Z_{k+1}}^2[\eta_k(s_l)]^+} = \sum_{k=l}^{L^*}\log\frac{1 - \sigma_{Z_{k+1}}^2[\eta_k(r_l)]^+}{1 - \sigma_{Z_{k+1}}^2[\eta_k(s_l)]^+}, \tag{61}$$
where the second equality holds since $\eta_k(s_l) = \eta_k(r_l)$ for $k < l$. By the definition of $p_k(\cdot)$, we have
$$\log\frac{1 - \sigma_{Z_{k+1}}^2[\eta_k(r_l)]^+}{1 - \sigma_{Z_{k+1}}^2[\eta_k(s_l)]^+} \ge p_k(\eta_k(r_l))\left[\eta_k(s_l) - \eta_k(r_l)\right] \ge \frac{\sigma_{Z_{k+1}}^2}{1 - \sigma_{Z_{k+1}}^2\eta_k(r_l)}\left[\eta_k(s_l) - \eta_k(r_l)\right], \tag{62}$$
where the last inequality follows from $\eta_k(s_l) \le \eta_k(r_l)$ and (58). From (61) and (62), we have
$$\log\frac{G(s_l)}{G(r_l)} \ge \sum_{k=l}^{L^*}\frac{\sigma_{Z_{k+1}}^2}{1 - \sigma_{Z_{k+1}}^2\eta_k(r_l)}\left(\eta_k(s_l) - \eta_k(r_l)\right). \tag{63}$$
By the definition of $q_j(\cdot)$ and (57), for $l+1 \le j \le k$, we have
$$\eta_j(s_l) - \eta_j(r_l) \ge q_j(\eta_{j-1}(r_l))\left[\eta_{j-1}(s_l) - \eta_{j-1}(r_l)\right] \ge \frac{1}{\left(1 - \sigma_{Z_j}^2\eta_{j-1}(r_l)\right)^2}\left[\eta_{j-1}(s_l) - \eta_{j-1}(r_l)\right], \tag{64}$$
where the last inequality follows from $\eta_{j-1}(s_l) \le \eta_{j-1}(r_l)$ and (59). Using (64) iteratively for $l+1 \le j \le k$, we obtain
$$\eta_k(s_l) - \eta_k(r_l) \ge \left(\eta_l(s_l) - \eta_l(r_l)\right)\prod_{j=l+1}^{k}\frac{1}{\left(1 - \sigma_{Z_j}^2\eta_{j-1}\right)^2}. \tag{65}$$
Observe that
$$\eta_l(s_l) - \eta_l(r_l) = \frac{1}{\sigma_{N_l}^2}\left(e^{-2s_l} - e^{-2r_l}\right) \ge -\frac{2e^{-2r_l}}{\sigma_{N_l}^2}(s_l - r_l). \tag{66}$$
From (65) and (66), we have
$$\eta_k(s_l) - \eta_k(r_l) \ge -\frac{2e^{-2r_l}}{\sigma_{N_l}^2}(s_l - r_l)\prod_{j=l+1}^{k}\frac{1}{\left(1 - \sigma_{Z_j}^2\eta_{j-1}\right)^2} \ge -\frac{2}{\sigma_{N_l}^2}(s_l - r_l)\prod_{j=l+1}^{k}\frac{1}{\left(1 - \sigma_{Z_j}^2\eta_{j-1}\right)^2}. \tag{67}$$
From (63) and (67), we have
$$\frac{1}{2}\log\frac{G(s_l)}{G(r_l)} \ge -(s_l - r_l)\sum_{k=l}^{L^*}\frac{\sigma_{Z_{k+1}}^2}{\sigma_{N_l}^2}\cdot\frac{1}{1 - \sigma_{Z_{k+1}}^2\eta_k}\prod_{j=l+1}^{k}\frac{1}{\left(1 - \sigma_{Z_j}^2\eta_{j-1}\right)^2}. \tag{68}$$
By Property 1 part b) and the definition of $\eta_j$, we have $\eta_j \le f_j - \frac{1}{\sigma_{N_j}^2}\left(1 - e^{-2r_j}\right) = \frac{f_{j+1}}{1 + \sigma_{Z_{j+1}}^2 f_{j+1}}$, from which we have
$$\frac{1}{1 - \sigma_{Z_{j+1}}^2\eta_j} \le 1 + \sigma_{Z_{j+1}}^2 f_{j+1} \le 1 + \sigma_{Z_{j+1}}^2 f_{j+1}^*. \tag{69}$$
From (68) and (69), we have
$$\frac{1}{2}\log\frac{G(s_l)}{G(r_l)} \ge -(s_l - r_l)\sum_{k=l}^{L^*}\frac{\sigma_{Z_{k+1}}^2}{\sigma_{N_l}^2}\left(1 + \sigma_{Z_{k+1}}^2 f_{k+1}^*\right)\prod_{j=l+1}^{k}\left(1 + \sigma_{Z_j}^2 f_j^*\right)^2. \tag{70}$$
If
$$\sum_{k=l}^{L-2}\frac{\sigma_{Z_{k+1}}^2}{\sigma_{N_l}^2}\left(1 + \sigma_{Z_{k+1}}^2 f_{k+1}^*\right)\prod_{j=l+1}^{k}\left(1 + \sigma_{Z_j}^2 f_j^*\right)^2 \le 1 \tag{71}$$
holds for $l = 1, 2, \cdots, L-2$, then, by (70), we have $\frac{1}{2}\log\frac{G(s_l)}{G(r_l)} \ge -(s_l - r_l)$, or equivalently $s_l + \frac{1}{2}\log G(s_l) \ge r_l + \frac{1}{2}\log G(r_l)$, for $l = 1, 2, \cdots, L-2$. Hence, (71) is a sufficient condition for the MI condition.

B. Proof of $\mathcal{R}_L(D) \subseteq \mathcal{R}_L^{(\mathrm{out})}(D)$

In this appendix we prove $\mathcal{R}_L(D) \subseteq \mathcal{R}_L^{(\mathrm{out})}(D)$ stated in Theorem 2. We first present a lemma necessary for the proof of this inclusion.

Lemma 6:
$$I(X_0; \hat{X}_0) \ge \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2}{\Delta(X_0, \hat{X}_0)}\right).$$
Proof: See the proof of Lemma 1 in Oohama [7].

Next, we present an important lemma which is a mathematical core of the converse coding theorem. Let the encoded output of $X_i$, $i = 0, 1, \cdots, L$, by the encoder function $\varphi_i$ be denoted by $W_i = \varphi_i(X_i)$. Set
$$r_0 \triangleq \frac{1}{n}I(X_0; W_0|W^L), \quad r_i \triangleq \frac{1}{n}I(X_i; W_i|Y^{L-1}) = \begin{cases}\dfrac{1}{n}I(X_i; W_i|Y_i), & \text{for } 1 \le i \le L-1,\\[1mm] \dfrac{1}{n}I(X_L; W_L|Y_{L-1}), & \text{for } i = L,\end{cases} \qquad \xi \triangleq \sigma_{X_0}^2\, e^{-\frac{2}{n}I(X_0; W_0 W^L)}.$$
Then we have the following lemma.

Lemma 7: $I(X_0; W^L) \le \frac{n}{2}\log\left(1 + \sigma_{X_0}^2 f_0(r^L)\right)$. For $1 \le l \le L-1$, we have
$$\frac{n}{2}\log\left(1 + \sigma_{Z_l}^2 g_l(r_0, r^{l-1}, \xi)\right) \le I(Y_l; W_l^L|Y_{l-1}) \le \frac{n}{2}\log\left(1 + \sigma_{Z_l}^2 f_l(r_l^L)\right).$$
From the above lemma we immediately obtain the following.

Lemma 8:
$$I(X_0; W_S) \le \frac{n}{2}\log\left(1 + \sigma_{X_0}^2 f_0(r_S)\right), \qquad I(Y^{L-1}; W_S|X_0) \le \frac{n}{2}\log F(r_S), \quad S \subseteq \Lambda, \qquad I(Y^{L-1}; W^L|X_0) \ge \frac{n}{2}\log G(\xi, r_0, r^{L-2}).$$
We prove $\mathcal{R}_L(D) \subseteq \mathcal{R}_L^{(\mathrm{out})}(D)$ by Lemmas 6 and 8 and standard arguments for the proof of the converse coding theorem.
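The sufficient condition (71) is a finite sum that can be evaluated directly once the noise variances $\sigma_{Z_k}^2$, $\sigma_{N_l}^2$ and the quantities $f_k^*$ are given. A small Python sketch of this check (all numerical values below are hypothetical, chosen only to exercise the formula):

```python
def mi_condition_holds(l, L, s2Z, s2N, fstar):
    """Evaluate the left-hand side of (71) for a given l and compare against 1.

    s2Z, s2N, fstar are 1-indexed lists (index 0 unused) holding hypothetical
    values of sigma_{Z_k}^2, sigma_{N_k}^2, and f_k^*.
    """
    total = 0.0
    for k in range(l, L - 1):               # k = l, ..., L-2
        prod = 1.0
        for j in range(l + 1, k + 1):       # j = l+1, ..., k
            prod *= (1.0 + s2Z[j] * fstar[j]) ** 2
        total += (s2Z[k + 1] / s2N[l]) * (1.0 + s2Z[k + 1] * fstar[k + 1]) * prod
    return total <= 1.0

# Hypothetical example with L = 4: small helper-noise variances satisfy (71) ...
assert all(mi_condition_holds(l, 4,
                              [None, 0.1, 0.1, 0.1, 0.1],
                              [None, 1.0, 1.0, 1.0, 1.0],
                              [None, 0.5, 0.5, 0.5, 0.5]) for l in (1, 2))
# ... while large ones violate it.
assert not mi_condition_holds(1, 4,
                              [None, 10.0, 10.0, 10.0, 10.0],
                              [None, 1.0, 1.0, 1.0, 1.0],
                              [None, 0.5, 0.5, 0.5, 0.5])
```

Note that (71) only needs to be checked for $l = 1, \cdots, L-2$, and that $\sigma_{N_l}^2$ enters only through the prefactor $1/\sigma_{N_l}^2$, so smaller helper-noise variances make the condition harder to satisfy.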
Proof of $\mathcal{R}_L(D) \subseteq \mathcal{R}_L^{(\mathrm{out})}(D)$: We first observe that, by virtue of the TS condition,
$$W_S \to X_S \to (X_0, Z^{L-1}) \to X_{S^c} \to W_{S^c} \tag{72}$$
holds for any subset $S$ of $\Lambda$. Assume $(R_0, R_1, \cdots, R_L) \in \mathcal{R}_L(D)$. Then, for any $\delta > 0$, there exists an integer $n_0(\delta)$ such that for $n \ge n_0(\delta)$ and for $i \in \Lambda$, we obtain the following chain of inequalities:
$$n(R_0 + \delta) \ge \log M_0 \ge H(W_0) \ge H(W_0|W^L) = I(X_0; W_0|W^L) = nr_0. \tag{73}$$
Furthermore, for any subset $S \subseteq \Lambda$, we obtain
$$nr_0 + \sum_{i \in S} n(R_i + \delta) \ge I(X_0; W_0|W^L) + \sum_{i \in S} H(W_i) = H(W_0|W^L) + \sum_{i \in S} H(W_i) \ge H(W_0|W_S W_{S^c}) + H(W_S|W_{S^c}) = H(W_0 W_S|W_{S^c}) = I(X_0 Z^{L-1}; W_0 W_S|W_{S^c}) + H(W_0 W_S|W_{S^c} X_0 Z^{L-1}) \stackrel{\text{(a)}}{=} I(X_0 Z^{L-1}; W_0 W_S|W_{S^c}) + \sum_{i \in S} H(W_i|X_0 Z^{L-1}) = I(X_0 Y^{L-1}; W_0 W_S|W_{S^c}) + \sum_{i \in S} I(X_i; W_i|Y^{L-1}). \tag{74}$$
Step (a) follows from (72). On the other hand, by Lemma 6, we have for $n \ge n_0(\delta)$,
$$I(X_0; W_0 W^L) = \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2}{\xi}\right) \ge I(X_0; \hat{X}_0) \ge \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2}{D + \delta}\right),$$
which, together with (73), (74), and Lemma 8, yields the following lower bounds on $I(X_0; W_0|W^L)$ and $I(X_0 Y^{L-1}; W_0 W_S|W_{S^c})$:
$$I(X_0; W_0|W^L) = I(X_0; W_0 W^L) - I(X_0; W^L) \ge \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2}{\left\{1 + \sigma_{X_0}^2 f_0(r^L)\right\}\xi}\right) \ge \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2}{\left\{1 + \sigma_{X_0}^2 f_0(r^L)\right\}(D + \delta)}\right), \tag{75}$$
$$I(X_0 Y^{L-1}; W_0 W_S|W_{S^c}) = I(X_0 Y^{L-1}; W_0 W_S W_{S^c}) - I(X_0 Y^{L-1}; W_{S^c}) = I(X_0; W_0 W^L) + I(Y^{L-1}; W^L|X_0) - I(X_0; W_{S^c}) - I(Y^{L-1}; W_{S^c}|X_0) \ge \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2\, G(\xi, r_0, r^{L-2})}{F(r_{S^c})\left\{1 + \sigma_{X_0}^2 f_0(r_{S^c})\right\}\xi}\right) \ge \frac{n}{2}\log\left(\frac{\sigma_{X_0}^2\, G(D + \delta, r_0, r^{L-2})}{F(r_{S^c})\left\{1 + \sigma_{X_0}^2 f_0(r_{S^c})\right\}(D + \delta)}\right). \tag{76}$$
From (74) and (76), we have
$$\sum_{i \in S}(R_i + \delta) \ge \frac{1}{2}\log\left(\frac{\sigma_{X_0}^2\, G(D + \delta, r_0, r^{L-2})}{F(r_{S^c})(D + \delta)\left\{1 + \sigma_{X_0}^2 f_0(r_{S^c})\right\}}\right) + \sum_{i \in S} r_i - r_0. \tag{77}$$
Note here that $\sum_{i \in S}(R_i + \delta)$ is nonnegative. Hence, from (73), (75), and (77), we obtain
$$R_0 + \delta \ge r_0 \ge \frac{1}{2}\log\left(\frac{\sigma_{X_0}^2}{\left\{1 + \sigma_{X_0}^2 f_0(r^L)\right\}(D + \delta)}\right) \tag{78}$$
and, for $S \subseteq \Lambda$,
$$\sum_{i \in S}(R_i + \delta) \ge J_S(D + \delta, r_0, r^{L-2}, r_S \,|\, r_{S^c}).$$
The inequality (78) implies that $r_0^L \in \mathcal{B}_L(D + \delta)$. Thus, by letting $\delta \to 0$, we obtain $(R_0, R_1, \cdots, R_L) \in \mathcal{R}_L^{(\mathrm{out})}(D)$.

Finally, we prove Lemma 7. For an $n$-dimensional random vector $U$ with density, let $h(U)$ denote the differential entropy of $U$. The following two lemmas are variants of the entropy power inequality.

Lemma 9: Let $U_i$, $i = 1, 2, 3$, be $n$-dimensional random vectors with densities and let $T$ be a random variable taking values in a finite set. We assume that $U_3$ is independent of $U_1$, $U_2$, and $T$. Then we have
$$\frac{1}{2\pi e}e^{\frac{2}{n}h(U_2 + U_3|U_1 T)} \ge \frac{1}{2\pi e}e^{\frac{2}{n}h(U_2|U_1 T)} + \frac{1}{2\pi e}e^{\frac{2}{n}h(U_3)}.$$
Lemma 10: Let $U_i$, $i = 1, 2, 3$, be $n$-dimensional random vectors with densities. Let $T_1, T_2$ be random variables taking values in finite sets. We assume that those five random variables form a Markov chain $T_1 \to U_1 \to U_3 \to U_2 \to T_2$ in this order. Then we have
$$\frac{1}{2\pi e}e^{\frac{2}{n}h(U_1 + U_2|U_3 T_1 T_2)} \ge \frac{1}{2\pi e}e^{\frac{2}{n}h(U_1|U_3 T_1)} + \frac{1}{2\pi e}e^{\frac{2}{n}h(U_2|U_3 T_2)}.$$
Proof of Lemma 7: Define the sequence of $n$-dimensional random vectors $\{S_l\}_{l=1}^{L-1}$ by
$$S_l = \frac{1}{\sigma_{N_l}^2}X_l + \frac{1}{\sigma_{Z_{l+1}}^2}Y_{l+1}, \quad 1 \le l \le L-1. \tag{79}$$
By an elementary computation, we obtain
$$X_0 = \frac{\sigma_{\hat{N}_0}^2}{\sigma_{Z_1}^2}Y_1 + \hat{N}_0, \qquad Y_l = \frac{\sigma_{\hat{N}_l}^2}{\sigma_{Z_l}^2}Y_{l-1} + \sigma_{\hat{N}_l}^2 S_l + \hat{N}_l, \quad 1 \le l \le L-1, \tag{80}$$
where $\hat{N}_l$, $0 \le l \le L-1$, is an $n$-dimensional random vector whose components are $n$ independent copies of a zero-mean Gaussian random variable with variance $\sigma_{\hat{N}_l}^2$.
$\hat{N}_0$ is independent of $Y_1$. For each $1 \le l \le L-1$, $\hat{N}_l$ is independent of $Y_{l-1}$ and $S_l$. The variances $\sigma_{\hat{N}_l}^2$, $0 \le l \le L-1$, have the following form:
$$\frac{1}{\sigma_{\hat{N}_0}^2} = \frac{1}{\sigma_{X_0}^2} + \frac{1}{\sigma_{Z_1}^2}, \qquad \frac{1}{\sigma_{\hat{N}_l}^2} = \frac{1}{\sigma_{Z_l}^2} + \frac{1}{\sigma_{N_l}^2} + \frac{1}{\sigma_{Z_{l+1}}^2}, \quad 1 \le l \le L-1. \tag{81}$$
Set
$$\lambda_0 \triangleq \frac{1}{2\pi e}e^{\frac{2}{n}h(X_0|W^L)}, \qquad \tilde{\mu}_0 \triangleq \frac{1}{2\pi e}e^{\frac{2}{n}h(Y_1|W^L)}, \qquad \mu_0 \triangleq \frac{1}{2\pi e}e^{\frac{2}{n}h(Y_1|X_0 W^L)},$$
$$\lambda_l \triangleq \frac{1}{2\pi e}e^{\frac{2}{n}h(Y_l|Y_{l-1} W_l^L)}, \ 1 \le l \le L, \qquad \tilde{\mu}_l \triangleq \frac{1}{2\pi e}e^{\frac{2}{n}h(S_l|Y_{l-1} W_l^L)}, \qquad \mu_l \triangleq \frac{1}{2\pi e}e^{\frac{2}{n}h(S_l|Y_l W_l^L)}, \ 1 \le l \le L-1.$$
We can easily verify that
$$\tilde{\mu}_0 = \mu_0\lambda_0\frac{1}{\sigma_{\hat{N}_0}^2}, \qquad \tilde{\mu}_l = \mu_l\lambda_l\frac{1}{\sigma_{\hat{N}_l}^2}, \quad 1 \le l \le L-1. \tag{82}$$
Applying Lemma 9 to (80), we obtain
$$\lambda_0 \ge \frac{\sigma_{\hat{N}_0}^4}{\sigma_{Z_1}^4}\tilde{\mu}_0 + \sigma_{\hat{N}_0}^2, \qquad \lambda_l \ge \sigma_{\hat{N}_l}^4\tilde{\mu}_l + \sigma_{\hat{N}_l}^2, \quad 1 \le l \le L-1. \tag{83}$$
From (82) and (83), we obtain
$$\lambda_0^{-1} \le \frac{1}{\sigma_{X_0}^2} + \frac{1}{\sigma_{Z_1}^2}\left(1 - \frac{\lambda_1}{\sigma_{Z_1}^2}\right), \qquad \lambda_l^{-1} \le \frac{1}{\sigma_{Z_l}^2} + \frac{1}{\sigma_{N_l}^2} + \frac{1}{\sigma_{Z_{l+1}}^2} - \mu_l, \quad 1 \le l \le L-1. \tag{84}$$
On the other hand, we note that for each $1 \le l \le L-1$, the five random variables $W_l, X_l, Y_l, Y_{l+1}$, and $W_{l+1}^L$ form a Markov chain $W_l \to X_l \to Y_l \to Y_{l+1} \to W_{l+1}^L$ in this order. Then, applying Lemma 10 to (79), we obtain
$$\mu_l \ge \frac{1}{\sigma_{N_l}^2}e^{-2r_l} + \frac{1}{\sigma_{Z_{l+1}}^4}\lambda_{l+1}, \quad 1 \le l \le L-1. \tag{85}$$
Combining (84) and (85), we obtain, for $1 \le l \le L-1$,
$$\lambda_l^{-1} \le \frac{1}{\sigma_{Z_l}^2} + \frac{1}{\sigma_{N_l}^2}\left(1 - e^{-2r_l}\right) + \frac{1}{\sigma_{Z_{l+1}}^2}\left(1 - \frac{\lambda_{l+1}}{\sigma_{Z_{l+1}}^2}\right). \tag{86}$$
Set
$$\nu_0 \triangleq \lambda_0^{-1} - \frac{1}{\sigma_{X_0}^2}, \qquad \nu_l \triangleq \lambda_l^{-1} - \frac{1}{\sigma_{Z_l}^2}, \quad 1 \le l \le L.$$
Then we have
$$I(X_0; W^L) = \frac{n}{2}\log\left(1 + \sigma_{X_0}^2\nu_0\right), \qquad I(Y_l; W_l^L|Y_{l-1}) = \frac{n}{2}\log\left(1 + \sigma_{Z_l}^2\nu_l\right), \ 1 \le l \le L-1, \qquad I(Y_L; W_L|Y_{L-1}) = \frac{n}{2}\log\left(1 + \sigma_{Z_L}^2\nu_L\right) = nr_L.$$
Note that $\nu_l$, $0 \le l \le L-1$, are nonnegative. From (84) and (86), $\{\nu_l\}_{l=0}^{L}$ satisfies the following recursion:
$$\nu_L = \frac{1}{\sigma_{Z_L}^2}\left(e^{2r_L} - 1\right), \tag{87}$$
$$\nu_{L-1} \le \frac{\nu_L}{1 + \sigma_{Z_L}^2\nu_L} + \frac{1 - e^{-2r_{L-1}}}{\sigma_{N_{L-1}}^2} = \frac{1 - e^{-2r_L}}{\sigma_{N_L}^2} + \frac{1 - e^{-2r_{L-1}}}{\sigma_{N_{L-1}}^2}, \tag{88}$$
$$\nu_l \le \frac{\nu_{l+1}}{1 + \sigma_{Z_{l+1}}^2\nu_{l+1}} + \frac{1 - e^{-2r_l}}{\sigma_{N_l}^2}, \quad L-2 \ge l \ge 1, \tag{89}$$
$$\nu_0 \le \frac{\nu_1}{1 + \sigma_{Z_1}^2\nu_1}, \qquad \nu_0 = \frac{e^{-2r_0}}{\xi} - \frac{1}{\sigma_{X_0}^2}. \tag{90}$$
From (87)-(90), we obtain the upper bounds of $I(X_0; W^L)$ and $I(Y_l; W_l^L|Y_{l-1})$, $1 \le l \le L-1$, in Lemma 7. On the other hand, from (89), (90), and the nonnegativity of $\nu_l$, $0 \le l \le L-1$, we have
$$\nu_0 = \left[\frac{e^{-2r_0}}{\xi} - \frac{1}{\sigma_{X_0}^2}\right]^+, \qquad \nu_1 \ge \frac{\nu_0}{1 - \sigma_{Z_1}^2\nu_0}, \tag{91}$$
$$\nu_{l+1} \ge \frac{\left[\nu_l - \frac{1}{\sigma_{N_l}^2}\left(1 - e^{-2r_l}\right)\right]^+}{1 - \sigma_{Z_{l+1}}^2\left[\nu_l - \frac{1}{\sigma_{N_l}^2}\left(1 - e^{-2r_l}\right)\right]^+}, \quad 1 \le l \le L-1. \tag{92}$$
From (91) and (92), we obtain the lower bound of $I(Y_l; W_l^L|Y_{l-1})$, $1 \le l \le L-1$, in Lemma 7.

C. Proof of Lemma 4

Let $\alpha_2^L, \beta_2^L \in \mathcal{A}_L(\alpha_1)$. Let $t \in [0, 1]$ and $\bar{t} = 1 - t$. Then we have the following chain of inequalities:
$$t(-2)\zeta^{(l)}(\alpha_2^l) + \bar{t}(-2)\zeta^{(l)}(\beta_2^l) = \sum_{i=1}^{l-1}\left\{ t\log\left(1 - \frac{\alpha_i}{1 - \epsilon_i\alpha_i} + \frac{\alpha_{i+1}}{\tau_{i+1}}\right) + \bar{t}\log\left(1 - \frac{\beta_i}{1 - \epsilon_i\beta_i} + \frac{\beta_{i+1}}{\tau_{i+1}}\right)\right\} + \sum_{i=1}^{l-1}\left\{ t\log(1 - \epsilon_i\alpha_i) + \bar{t}\log(1 - \epsilon_i\beta_i)\right\} + t\log\left(1 - \frac{\alpha_l}{1 - \epsilon_l\alpha_l}\right) + \bar{t}\log\left(1 - \frac{\beta_l}{1 - \epsilon_l\beta_l}\right)$$
$$\stackrel{\text{(a)}}{\le} \sum_{i=1}^{l-1}\log\left(1 - \frac{t\alpha_i}{1 - \epsilon_i\alpha_i} - \frac{\bar{t}\beta_i}{1 - \epsilon_i\beta_i} + \frac{t\alpha_{i+1} + \bar{t}\beta_{i+1}}{\tau_{i+1}}\right) + \sum_{i=1}^{l-1}\log\left(1 - \epsilon_i\left[t\alpha_i + \bar{t}\beta_i\right]\right) + \log\left(1 - \frac{t\alpha_l}{1 - \epsilon_l\alpha_l} - \frac{\bar{t}\beta_l}{1 - \epsilon_l\beta_l}\right)$$
$$\stackrel{\text{(b)}}{\le} \sum_{i=1}^{l-1}\log\left(1 - \frac{t\alpha_i + \bar{t}\beta_i}{1 - \epsilon_i\left[t\alpha_i + \bar{t}\beta_i\right]} + \frac{t\alpha_{i+1} + \bar{t}\beta_{i+1}}{\tau_{i+1}}\right) + \sum_{i=1}^{l-1}\log\left\{1 - \epsilon_i\left[t\alpha_i + \bar{t}\beta_i\right]\right\} + \log\left(1 - \frac{t\alpha_l + \bar{t}\beta_l}{1 - \epsilon_l\left[t\alpha_l + \bar{t}\beta_l\right]}\right) = (-2)\zeta^{(l)}\left(t\alpha_2^l + \bar{t}\beta_2^l\right).$$
Step (a) follows from the strict concavity of the logarithm function. Step (b) follows from the strict concavity of $-\frac{a}{1 - \epsilon a}$ for $a > 0$.

D. Proof of Lemma 5

Proof of Lemma 5 part a): For the proof we use the following inequality:
$$\frac{1 + a}{1 + \epsilon(1 + a)} - \frac{a}{1 + \epsilon a} \le \frac{1}{1 + \epsilon}. \tag{93}$$
The recursion (15) is equivalent to
$$\frac{\tau_i\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} = 2\theta_i^{(l)}(\omega) - \frac{1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\left(1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}\right)} + \tau_i \tag{94}$$
for $l-1 \ge i \ge 2$. Applying (93) to the second term on the right-hand side of (94) and considering the assumption $\tau_l \ge \frac{1}{1+\epsilon_l}$ for $L-1 \ge l \ge 2$, we have
$$\frac{\tau_i\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} \ge 2\theta_i^{(l)}(\omega) - \frac{\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}},$$
or equivalently,
$$\frac{\tau_i\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} - \theta_i^{(l)}(\omega) \ge \theta_i^{(l)}(\omega) - \frac{\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}} \tag{95}$$
for $l-1 \ge i \ge 2$. We first prove (56) for $i = l$. The equality
$$\theta_{l-1}^{(l)}(\omega) = \frac{\dfrac{\theta_l^{(l)}(\omega) + \left\{(1+\epsilon_l)\theta_l^{(l)}(\omega) - 1\right\}\left\{1 - \epsilon_l\theta_l^{(l)}(\omega)\right\}}{\tau_l} + 1}{1 + \epsilon_{l-1}\left(\dfrac{\theta_l^{(l)}(\omega) + \left\{(1+\epsilon_l)\theta_l^{(l)}(\omega) - 1\right\}\left\{1 - \epsilon_l\theta_l^{(l)}(\omega)\right\}}{\tau_l} + 1\right)} \tag{96}$$
is equivalent to the following two equalities:
$$\tau_l\left(\frac{\theta_{l-1}^{(l)}(\omega)}{1 - \epsilon_{l-1}\theta_{l-1}^{(l)}(\omega)} - 1\right) - \theta_l^{(l)}(\omega) = \left\{(1+\epsilon_l)\theta_l^{(l)}(\omega) - 1\right\}\left\{1 - \epsilon_l\theta_l^{(l)}(\omega)\right\} \tag{97}$$
$$= \theta_l^{(l)}(\omega) - 1 + \epsilon_l\theta_l^{(l)}(\omega)\left\{2 - (1+\epsilon_l)\theta_l^{(l)}(\omega)\right\}. \tag{98}$$
From (97), we have
$$\tau_l\left(\frac{\theta_{l-1}^{(l)}(\omega)}{1 - \epsilon_{l-1}\theta_{l-1}^{(l)}(\omega)} - 1\right) - \theta_l^{(l)}(\omega) = \left\{(1+\epsilon_l)\theta_l^{(l)}(\omega) - 1\right\}\left\{1 - \epsilon_l\theta_l^{(l)}(\omega)\right\} \stackrel{\text{(a)}}{<} 0. \tag{99}$$
Step (a) follows from the original assumption $\theta_l^{(l)}(\omega) \in \left(0, (1+\epsilon_l)^{-1}\right)$. From (98), we have
$$\frac{\tau_l\theta_{l-1}^{(l)}(\omega)}{1 - \epsilon_{l-1}\theta_{l-1}^{(l)}(\omega)} - \theta_l^{(l)}(\omega) = \tau_l - 1 + \theta_l^{(l)}(\omega) + \epsilon_l\theta_l^{(l)}(\omega)\left\{2 - (1+\epsilon_l)\theta_l^{(l)}(\omega)\right\} \ge \tau_l - 1 \stackrel{\text{(b)}}{\ge} 0. \tag{100}$$
Step (b) follows from the assumption $\tau_l \ge 1$.
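The elementary inequality (93) states that the unit-step increments of the increasing concave map $a \mapsto a/(1 + \epsilon a)$ are largest at $a = 0$, where the increment equals $1/(1+\epsilon)$. A brief numerical confirmation over a grid of nonnegative $a$, the range in which (93) is applied above (grid and $\epsilon$ values are arbitrary):

```python
def g(a, eps):
    # g(a) = a / (1 + eps*a): increasing and concave for a >= 0
    return a / (1.0 + eps * a)

for eps in (0.1, 0.5, 1.0, 3.0):
    for k in range(500):
        a = k / 50.0  # a in [0, 10)
        # (93): g(1 + a) - g(a) <= 1/(1 + eps), with equality at a = 0
        assert g(1.0 + a, eps) - g(a, eps) <= 1.0 / (1.0 + eps) + 1e-12
```

For negative $a$ the bound can fail, which is why the proof applies (93) only to the nonnegative quantity $\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}$.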
From (99) and (100), we have
$$0 \le \theta_l^{(l)}(\omega) \le \frac{\tau_l\theta_{l-1}^{(l)}(\omega)}{1 - \epsilon_{l-1}\theta_{l-1}^{(l)}(\omega)}, \qquad \tau_l\left(\frac{\theta_{l-1}^{(l)}(\omega)}{1 - \epsilon_{l-1}\theta_{l-1}^{(l)}(\omega)} - 1\right) < \theta_l^{(l)}(\omega) < (1+\epsilon_l)^{-1}. \tag{101}$$
Thus, (56) holds for $i = l$. We next assume that (56) holds for some $i+1$ with $l \ge i+1$, that is,
$$0 \le \theta_{i+1}^{(l)}(\omega) \le \frac{\tau_{i+1}\theta_i^{(l)}(\omega)}{1 - \epsilon_i\theta_i^{(l)}(\omega)}, \qquad \tau_{i+1}\left(\frac{\theta_i^{(l)}(\omega)}{1 - \epsilon_i\theta_i^{(l)}(\omega)} - 1\right) < \theta_{i+1}^{(l)}(\omega) < \epsilon_{i+1}^{-1}. \tag{102}$$
Then, from (102), we obtain
$$\epsilon_i^{-1} > \theta_i^{(l)}(\omega) \ge \frac{\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}} > 0, \qquad \theta_i^{(l)}(\omega) < \frac{1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\left(1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}\right)}. \tag{103}$$
Using (94), we have
$$\frac{\tau_i\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} - \theta_i^{(l)}(\omega) = \theta_i^{(l)}(\omega) - \frac{1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\left(1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}\right)} + \tau_i \stackrel{\text{(a)}}{<} \tau_i.$$
Step (a) follows from the second inequality of (103). Using (95), we have
$$\frac{\tau_i\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} - \theta_i^{(l)}(\omega) \ge \theta_i^{(l)}(\omega) - \frac{\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\theta_{i+1}^{(l)}(\omega)/\tau_{i+1}} \stackrel{\text{(a)}}{\ge} 0.$$
Step (a) follows from the first inequality of (103). Thus, (56) holds for $i$, completing the proof.

Proof of Lemma 5 part b): We first observe that
$$(-2)\zeta^{(l)}(\alpha_2^l) = \sum_{i=1}^{l-1}\left\{\log\left(1 - \frac{\alpha_i}{1 - \epsilon_i\alpha_i} + \frac{\alpha_{i+1}}{\tau_{i+1}}\right) + \log(1 - \epsilon_i\alpha_i)\right\} + \log\left(1 - \frac{\alpha_l}{1 - \epsilon_l\alpha_l}\right)$$
$$= \sum_{i=2}^{l}\log\left(1 - \epsilon_{i-1}\alpha_{i-1} - \alpha_{i-1} + (1 - \epsilon_{i-1}\alpha_{i-1})\frac{\alpha_i}{\tau_i}\right) + \log\left(1 - \frac{\alpha_l}{1 - \epsilon_l\alpha_l}\right)$$
$$= \sum_{i=1}^{l-1}\log\left(1 + \frac{\alpha_{i+1}}{\tau_{i+1}} - \left\{1 + \epsilon_i\left(1 + \frac{\alpha_{i+1}}{\tau_{i+1}}\right)\right\}\alpha_i\right) + \log\left(1 - \frac{\alpha_l}{1 - \epsilon_l\alpha_l}\right). \tag{104}$$
Computing $(-2)\frac{\partial}{\partial\alpha_i}\zeta^{(l)}(\alpha_2^l)$, we obtain
$$(-2)\frac{\partial}{\partial\alpha_l}\zeta^{(l)}(\alpha_2^l) = \frac{1}{\alpha_l - \tau_l\left(\dfrac{\alpha_{l-1}}{1 - \epsilon_{l-1}\alpha_{l-1}} - 1\right)} - \frac{1}{\left\{1 - (1+\epsilon_l)\alpha_l\right\}(1 - \epsilon_l\alpha_l)},$$
$$(-2)\frac{\partial}{\partial\alpha_i}\zeta^{(l)}(\alpha_2^l) = \frac{1}{\alpha_i - \tau_i\left(\dfrac{\alpha_{i-1}}{1 - \epsilon_{i-1}\alpha_{i-1}} - 1\right)} - \frac{1}{\dfrac{1 + \alpha_{i+1}/\tau_{i+1}}{1 + \epsilon_i\left(1 + \alpha_{i+1}/\tau_{i+1}\right)} - \alpha_i} \quad \text{for } l-1 \ge i \ge 2. \tag{105}$$
From (105), when $\nabla\zeta^{(l)}(\alpha_2^l) = 0$, $\alpha_2^l$ must satisfy
$$\alpha_l - \tau_l\left(\frac{\alpha_{l-1}}{1 - \epsilon_{l-1}\alpha_{l-1}} - 1\right) = \left\{1 - (1+\epsilon_l)\alpha_l\right\}(1 - \epsilon_l\alpha_l), \qquad \frac{1 + \alpha_{i+1}/\tau_{i+1}}{1 + \epsilon_i\left(1 + \alpha_{i+1}/\tau_{i+1}\right)} - 2\alpha_i + \tau_i\left(\frac{\alpha_{i-1}}{1 - \epsilon_{i-1}\alpha_{i-1}} - 1\right) = 0, \quad \text{for } l-1 \ge i \ge 2. \tag{106}$$
From (106), we obtain
$$\alpha_{l-1} = \frac{\dfrac{\alpha_l + \left\{(1+\epsilon_l)\alpha_l - 1\right\}(1 - \epsilon_l\alpha_l)}{\tau_l} + 1}{1 + \epsilon_{l-1}\left(\dfrac{\alpha_l + \left\{(1+\epsilon_l)\alpha_l - 1\right\}(1 - \epsilon_l\alpha_l)}{\tau_l} + 1\right)}, \qquad \alpha_{i-1} = \frac{\dfrac{1}{\tau_i}\left[2\alpha_i - \dfrac{1 + \alpha_{i+1}/\tau_{i+1}}{1 + \epsilon_i\left(1 + \alpha_{i+1}/\tau_{i+1}\right)} + \tau_i\right]}{1 + \dfrac{\epsilon_{i-1}}{\tau_i}\left[2\alpha_i - \dfrac{1 + \alpha_{i+1}/\tau_{i+1}}{1 + \epsilon_i\left(1 + \alpha_{i+1}/\tau_{i+1}\right)} + \tau_i\right]} \quad \text{for } l-1 \ge i \ge 2. \tag{107}$$
The relation (107) implies that
$$\nabla\zeta^{(l)}(\alpha_2^l)\Big|_{\alpha_2^l = \left(\theta_i^{(l)}(\omega)\right)_{i=2}^{l}} = 0.$$
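The inequality (120) established in part c) below feeds a purely combinatorial telescoping step: once the consecutive differences $\Phi_{i-1} - \Phi_i$ are nondecreasing as $i$ decreases and the last difference $\Phi_{l-1} - \Phi_l$ dominates $\Phi_l$, one obtains $\Phi_i \ge (l-i+1)\Phi_l$. A small Python sketch of this step on random sequences built to satisfy the hypotheses (not the paper's actual $\Phi_i^{(l)}$):

```python
import random

def check_telescoping(l, trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        phi_l = rng.uniform(0.0, 1.0)
        # d plays the role of Phi_{i-1} - Phi_i; seed with d >= Phi_l,
        # mirroring the hypothesis Phi_{l-1} - Phi_l >= Phi_l
        d = phi_l + rng.uniform(0.0, 1.0)
        phi = {l: phi_l}
        for i in range(l, 1, -1):           # build Phi_{i-1} from Phi_i
            phi[i - 1] = phi[i] + d
            d += rng.uniform(0.0, 1.0)      # keep differences nondecreasing
        # conclusion of the telescoping step: Phi_i >= (l - i + 1) * Phi_l
        assert all(phi[i] >= (l - i + 1) * phi_l - 1e-9 for i in phi)

check_telescoping(6)
```

This is exactly the summation $\Phi_i = \Phi_l + \sum_{j=i+1}^{l}(\Phi_{j-1} - \Phi_j) \ge \Phi_l + (l-i)\Phi_l$ used to close the proof of part c).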
Proof of Lemma 5 part c): For the proof we use the following recursion for $l \ge i \ge 2$:
$$\frac{\tau_i\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} = 2\theta_i^{(l)}(\omega) - \frac{1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}}{1 + \epsilon_i\left(1 + \theta_{i+1}^{(l)}(\omega)/\tau_{i+1}\right)} + \tau_i. \tag{115}$$
Taking the derivative of both sides of (115) with respect to $\omega$, we obtain
$$\frac{1}{\left\{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)\right\}^2}\frac{d\theta_{i-1}^{(l)}}{d\omega}\cdot\tau_i = 2\frac{d\theta_i^{(l)}}{d\omega} - \frac{1}{\left\{1 + \epsilon_i\left(\theta_{i+1}^{(l)}(\omega)/\tau_{i+1} + 1\right)\right\}^2}\frac{d\theta_{i+1}^{(l)}}{d\omega}\cdot\tau_{i+1}^{-1}. \tag{116}$$
Since $\theta_2^l(\omega) \in \mathcal{A}_l\left(\theta_1^{(l)}(\omega)\right)$, we have
$$\tau_i\left(\frac{\theta_{i-1}^{(l)}(\omega)}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)} - 1\right) < \theta_i^{(l)}(\omega).$$
The above inequality is equivalent to
$$1 + \epsilon_{i-1}\left(\frac{\theta_i^{(l)}(\omega)}{\tau_i} + 1\right) > \frac{1}{1 - \epsilon_{i-1}\theta_{i-1}^{(l)}(\omega)}. \tag{117}$$
From (116) and (117) we have
$$\left\{1 + \epsilon_{i-1}\left(\frac{\theta_i^{(l)}(\omega)}{\tau_i} + 1\right)\right\}^2\frac{d\theta_{i-1}^{(l)}}{d\omega}\cdot\tau_i \ge 2\frac{d\theta_i^{(l)}}{d\omega} - \frac{1}{\left\{1 + \epsilon_i\left(\theta_{i+1}^{(l)}(\omega)/\tau_{i+1} + 1\right)\right\}^2}\frac{d\theta_{i+1}^{(l)}}{d\omega}\cdot\tau_{i+1}^{-1}. \tag{118}$$
The above inequality is equivalent to
$$\left\{1 + \epsilon_{i-1}\left(\frac{\theta_i^{(l)}(\omega)}{\tau_i} + 1\right)\right\}^2\left(\frac{1}{\sigma_{i-1}^2}\frac{d\theta_{i-1}^{(l)}}{d\omega}\right) \ge 2\left(\frac{1}{\sigma_i^2}\frac{d\theta_i^{(l)}}{d\omega}\right) - \frac{1}{\left\{1 + \epsilon_i\left(\theta_{i+1}^{(l)}(\omega)/\tau_{i+1} + 1\right)\right\}^2}\left(\frac{1}{\sigma_{i+1}^2}\frac{d\theta_{i+1}^{(l)}}{d\omega}\right). \tag{119}$$
For $l \ge i \ge 1$, set
$$\Phi_i^{(l)}(\omega) \triangleq \begin{cases}\left(\dfrac{1}{\sigma_i^2}\dfrac{d\theta_i^{(l)}}{d\omega}\right)\displaystyle\prod_{j=2}^{i}\frac{1}{\left\{1 + \epsilon_{j-1}\left(\theta_j^{(l)}(\omega)/\tau_j + 1\right)\right\}^2}, & l \ge i \ge 2,\\[3mm] \dfrac{1}{\sigma_1^2}\dfrac{d\theta_1^{(l)}}{d\omega}, & i = 1.\end{cases}$$
Then, by (119), we have
$$\Phi_{i-1}^{(l)}(\omega) \ge 2\Phi_i^{(l)}(\omega) - \Phi_{i+1}^{(l)}(\omega) \quad \text{for } l-1 \ge i \ge 2. \tag{120}$$
From (120) we have
$$\Phi_{i-1}^{(l)}(\omega) - \Phi_i^{(l)}(\omega) \ge \Phi_i^{(l)}(\omega) - \Phi_{i+1}^{(l)}(\omega) \ge \cdots \ge \Phi_{l-1}^{(l)}(\omega) - \Phi_l^{(l)}(\omega) = \left[\tau_l\cdot\frac{d\theta_{l-1}^{(l)}}{d\omega} - \frac{1}{\left\{1 + \epsilon_{l-1}\left(\omega/\tau_l + 1\right)\right\}^2}\right]\frac{1}{\sigma_l^2}\prod_{j=2}^{l-1}\frac{1}{\left\{1 + \epsilon_{j-1}\left(\theta_j(\omega)/\tau_j + 1\right)\right\}^2}$$
$$\stackrel{\text{(a)}}{=} \left[\frac{2(1+\epsilon_l)(1 - \epsilon_l\omega)}{\left\{1 + \epsilon_{l-1}\left(\dfrac{\omega + \left\{(1+\epsilon_l)\omega - 1\right\}(1 - \epsilon_l\omega)}{\tau_l} + 1\right)\right\}^2} - \frac{1}{\left\{1 + \epsilon_{l-1}\left(\omega/\tau_l + 1\right)\right\}^2}\right]\frac{1}{\sigma_l^2}\prod_{j=2}^{l-1}\frac{1}{\left\{1 + \epsilon_{j-1}\left(\theta_j(\omega)/\tau_j + 1\right)\right\}^2}$$
$$\stackrel{\text{(b)}}{\ge} \frac{1}{\left\{1 + \epsilon_{l-1}\left(\omega/\tau_l + 1\right)\right\}^2}\cdot\frac{1}{\sigma_l^2}\prod_{j=2}^{l-1}\frac{1}{\left\{1 + \epsilon_{j-1}\left(\theta_j(\omega)/\tau_j + 1\right)\right\}^2} = \Phi_l^{(l)}(\omega). \tag{121}$$
Step (a) follows from $\theta_l^{(l)}(\omega) = \omega$ and
$$\theta_{l-1}^{(l)}(\omega) = \frac{\dfrac{\omega + \left\{(1+\epsilon_l)\omega - 1\right\}(1 - \epsilon_l\omega)}{\tau_l} + 1}{1 + \epsilon_{l-1}\left(\dfrac{\omega + \left\{(1+\epsilon_l)\omega - 1\right\}(1 - \epsilon_l\omega)}{\tau_l} + 1\right)}.$$
Step (b) follows from the fact that for $\omega \in \left[0, (1+\epsilon_l)^{-1}\right)$, we have $2(1+\epsilon_l)(1 - \epsilon_l\omega) > 2$ and $\omega + \left\{(1+\epsilon_l)\omega - 1\right\}(1 - \epsilon_l\omega) < \omega$. By (121), we have
$$\Phi_i^{(l)}(\omega) \ge \Phi_l^{(l)}(\omega) + (l-i)\Phi_l^{(l)}(\omega) = (l-i+1)\Phi_l^{(l)}(\omega),$$
from which we obtain
$$\frac{d\theta_i^{(l)}}{d\omega} \ge (l-i+1)\frac{\sigma_i^2}{\sigma_l^2}\prod_{j=i+1}^{l}\frac{1}{\left\{1 + \epsilon_{j-1}\left(\theta_j(\omega)/\tau_j + 1\right)\right\}^2},$$
completing the proof.

REFERENCES

[1] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inform. Theory, vol. IT-19, pp. 471-480, July 1973.
[2] A. D. Wyner, "On source coding with side information at the decoder," IEEE Trans. Inform. Theory, vol. IT-21, pp. 294-300, May 1975.
[3] R. F. Ahlswede and J. Körner, "Source coding with side information and a converse for degraded broadcast channels," IEEE Trans. Inform. Theory, vol. IT-21, pp. 629-637, Nov. 1975.
[4] J. Körner and K. Marton, "How to encode the modulo-two sum of binary sources," IEEE Trans. Inform. Theory, vol. IT-25, pp. 219-221, Mar. 1979.
[5] S. I. Gelfand and M. S. Pinsker, "Source coding with incomplete side information" (in Russian), Probl. Pered. Inform., vol. 15, no. 2, pp. 45-57, 1979.
[6] Y. Oohama, "Gaussian multiterminal source coding," IEEE Trans. Inform. Theory, vol. 43, pp. 1912-1923, Nov. 1997.
[7] Y. Oohama, "The rate-distortion function for the quadratic Gaussian CEO problem," IEEE Trans. Inform. Theory, vol. 44, pp. 1057-1070, May 1998.
[8] Y. Oohama, "Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder," IEEE Trans. Inform. Theory, vol. 51, pp. 2577-2593, July 2005.
[9] A. Pandya, A. Kansal, G. Pottie, and M. Srivastava, "Lossy source coding of multiple Gaussian sources: m-helper problem," Proceedings of IEEE Information Theory Workshop, San Antonio, TX, pp. 34-38, Oct. 2004.
[10] Y. Oohama, "Gaussian multiterminal source coding with several side informations at the decoder," Proceedings of IEEE International Symposium on Information Theory, Seattle, USA, pp. 1409-1413, July 2006.
[11] S. Tavildar, P. Viswanath, and A. B. Wagner, "The Gaussian many-help-one distributed source coding problem," Proceedings of IEEE Information Theory Workshop, pp. 596-600, Oct. 2006; preprint available at http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1857v1.pdf.
[12] Y. Oohama, "Sum rate characterization for the Gaussian many-help-one problem," Proceedings of IEEE Information Theory Workshop, pp. 323-327, Oct. 2009.
[13] T. Berger, "Multiterminal source coding," in The Information Theory Approach to Communications (CISM Courses and Lectures, no. 229), G. Longo, Ed. Vienna and New York: Springer-Verlag, 1978, pp. 171-231.
[14] S. Y. Tung, "Multiterminal source coding," Ph.D. dissertation, School of Electrical Engineering, Cornell University, Ithaca, NY, May 1978.
[15] H. Viswanathan and T. Berger, "The quadratic Gaussian CEO problem," IEEE Trans. Inform. Theory, vol. 43, pp. 1549-1559, Sept. 1997.
[16] H. Yamamoto and K. Itoh, "Source coding theory for multiterminal communication systems with a remote source," Trans. of the IECE of Japan, vol. E63, no. 10, pp. 700-706, Oct. 1980.
[17] T. J. Flynn and R. M. Gray, "Encoding of correlated observations," IEEE Trans. Inform. Theory, vol. IT-33, pp. 773-787, Nov. 1987.
