Variations on Information Embedding in Multiple Access and Broadcast Channels
Authors: Shivaprasad Kotagiri, J. Nicholas Laneman
Shivaprasad Kotagiri, Student Member, IEEE, and J. Nicholas Laneman, Senior Member, IEEE

Abstract—Information embedding (IE) is the transmission of information within a host signal subject to a distortion constraint. There are two types of embedding methods, namely irreversible IE and reversible IE, depending upon whether or not the host, as well as the message, is recovered at the decoder. In irreversible IE, only the embedded message is recovered at the decoder, and in reversible IE, both the message and the host are recovered at the decoder. This paper considers combinations of irreversible and reversible IE in multiple access channels (MAC) and physically degraded broadcast channels (BC). The paper first considers MAC IE, in which separate encoders embed their messages into their host signals subject to distortion constraints. The embedded signals from the two encoders are transmitted to a single decoder across a MAC. We study the capacity region in three cases: A) no host recovery at the decoder, B) lossless recovery of one host at the decoder, and C) lossless recovery of both hosts at the decoder. For Cases A and B, inner bounds on the respective capacity regions are developed. For Case C, inner and outer bounds on the capacity region are developed, and the capacity region is obtained if the hosts are independent. The paper also considers BC IE, in which two messages intended for separate decoders are embedded into a given host sequence by a single encoder subject to a distortion constraint. We study the capacity region for degraded BC in four cases: A′) lossless recovery of the host sequence at neither of the decoders, B′) lossless recovery of the host sequence at only the better decoder, C′) lossless recovery of the host sequence at both decoders, and D′) lossless recovery of the host sequence at only the worse decoder. For Cases A′ and B′, inner and outer bounds on the respective capacity regions are developed. For Cases C′ and D′, the respective capacity regions are obtained.

Manuscript received October 28, 2018. This work has been supported in part by an NSF CAREER Grant and by the State of Indiana through the Twenty-First Century Research and Technology Fund. Parts of this work were presented at Allerton 2005 and IEEE ISIT 2006. Shivaprasad Kotagiri and J. Nicholas Laneman are with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556. Email: {skotagir, jnl}@nd.edu.

Index Terms—Information Embedding, Reversible Information Embedding, Multiple Access Channels, Broadcast Channels

I. INTRODUCTION

Information embedding (IE) is the reliable transmission of information within a host signal subject to a distortion constraint. IE is a recent area of digital media research with many applications, including active and passive copyright protection (digital watermarking); steganography; embedding important control or descriptive reference information into a given signal; digital upgrades of communication infrastructure; and covert communications [1], [2], [3], [4]. The main idea of IE is that the host signal can carry different messages at the same time by allowing a small amount of distortion that can be tolerated at the intended receiver for the host signal. It has been observed that IE is closely related to state-dependent channel models with state known non-causally at the encoder [5], [6], [1], [2], [7].

A. Forms of IE

In IE, a message $W$ is embedded into a host signal $S^n$ such that the embedded signal $X^n$ is close to $S^n$ under some prescribed distortion measure $d(\cdot,\cdot)$, i.e., $E\, d(X^n, S^n) \le \Delta$.
The decoder receives $Y^n$, which is drawn according to a probability law $p(y^n|x^n,s^n)$ for given $X^n$ and $S^n$. Throughout the paper, we focus on the discrete memoryless case without feedback and denote the channel law by $p(y|x,s)$. Based upon whether or not the decoder recovers the host signal, in the sense of probability of error going to zero, there are two important types of IE, namely irreversible and reversible IE.

In irreversible IE, the decoder is only concerned with reliable decoding of the message embedded in the host from the received sequence $Y^n$ [1], [2], [7], [8]. The irreversible IE capacity of a single-user model is given by

$$C(\Delta) = \max_{p(u,x|s):\; E\, d(X,S) \le \Delta} \left[ I(U;Y) - I(U;S) \right],$$

where $U$ is an auxiliary random variable with $|\mathcal{U}| \le |\mathcal{X}||\mathcal{S}|$. To achieve the capacity, Gel'fand-Pinsker coding [5] is used at the encoder such that the distortion between $X^n$ and $S^n$ satisfies the constraint $\Delta$.

In reversible IE, the decoder is concerned with lossless recovery of the host as well as reliable decoding of the message embedded in the host from the received sequence $Y^n$ [9], [10]. Reversible IE is useful for cases in which little or no degradation of the host signal is allowed, with applications in military and medical imagery, and multimedia archives of valuable original works. The reversible IE capacity is given by

$$C(\Delta) = \max_{p(x|s):\; E\, d(X,S) \le \Delta} \left[ I(X,S;Y) - H(S) \right].$$

To achieve the above capacity expression, superposition coding is used at the encoder such that the distortion constraint is satisfied, i.e., $E[d(X,S)] \le \Delta$. This paper focuses on IE in multi-user channels such as multiple access channels (MAC) and broadcast channels (BC).
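As a concrete, hypothetical instance of the reversible IE capacity expression above, the sketch below grid-searches over embedding distributions $p(x|s)$ for a Bernoulli($p_s$) host, Hamming distortion, and a binary symmetric channel $Y = X \oplus N$ with crossover $q$, for which $I(X,S;Y) = H(Y) - h_2(q)$. The source parameters and grid resolution are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

def reversible_ie_capacity(ps, q, delta, grid=101):
    """C(delta) = max over p(x|s) of I(X,S;Y) - H(S), for a Bern(ps) host,
    Hamming distortion E d(S,X) <= delta, and Y = X xor Bern(q) noise."""
    best = -np.inf
    for a0 in np.linspace(0, 1, grid):          # p(X=1 | S=0)
        for a1 in np.linspace(0, 1, grid):      # p(X=1 | S=1)
            dist = (1 - ps) * a0 + ps * (1 - a1)  # E d(S,X), Hamming
            if dist > delta + 1e-9:
                continue
            px1 = (1 - ps) * a0 + ps * a1         # P(X = 1)
            py1 = px1 * (1 - q) + (1 - px1) * q   # P(Y = 1)
            # I(X,S;Y) = H(Y) - H(Y|X,S) = H(Y) - h2(q) for this channel
            best = max(best, h2(py1) - h2(q) - h2(ps))
    return best

# Relaxing the distortion constraint can only enlarge the feasible set
loose = reversible_ie_capacity(0.5, 0.1, 0.5)
tight = reversible_ie_capacity(0.5, 0.1, 0.0)
print(loose, tight)
```

With $\Delta = 0$ the only feasible choice is $X = S$, giving $I(S;Y) - H(S)$; note the value can be negative, in which case reversible embedding at any positive rate is infeasible.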
We focus on MAC IE with lossless recovery of some host sequences at the decoder and BC IE with lossless host recovery at some decoders, but the techniques can also be applied to other multi-user scenarios. In single-user IE, substantial results have been developed, but multi-user IE scenarios have not been as extensively studied. Information-theoretic study of single-user public and private watermarking systems appears in [11], [12], [13]. Joint IE and lossy compression is studied in [14], [15], and joint watermarking and encryption is studied in [16]. Multi-user models with state available at the encoders are studied in [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27]. As in the single-user case, there is a close relationship between multi-user models with non-causal state at the encoders and multi-user IE.

B. Summary of Results

1) MAC IE: In Section II, we consider the two-user MAC IE model shown in Figure 1, but the results can be extended to any number of users. Encoder $i$ embeds its information $W_i$ into a host signal $S_i^n$, generated by host source $i$, such that the per-letter distortion between $S_i^n$ and $X_i^n$ is less than $\Delta_i$, $i = 1, 2$. For this model, we consider the following three cases in recovering, in the sense of probability of error going to zero, the messages and the host sequences at the decoder from the received sequence $Y^n$:

Fig. 1. Block diagram of the multiple access channel information embedding model.

• Case A, Recovery of Neither Host: The decoder recovers $(W_1, W_2)$ from $Y^n$.
• Case B, Recovery of One Host: The decoder recovers $(W_1, W_2)$ along with one host from $Y^n$.
Without loss of generality, we can assume that the host sequence $S_2^n$ of Encoder 2 is recovered at the decoder.
• Case C, Recovery of Both Hosts: The decoder recovers $(W_1, W_2)$ and $(S_1^n, S_2^n)$ from $Y^n$.

Our general MAC IE model considers scenarios in which the MAC output potentially depends on both the embedded signals and the host signals. For Cases A and B, we develop inner bounds on the respective capacity regions in Sections II-A and II-B, respectively. For Case C, we derive inner and outer bounds on the capacity region if the hosts are correlated in Section II-C, and we show that there is no gap between the inner and the outer bounds if the hosts are independent.

2) BC IE: In Section III, we consider IE in the broadcast scenario shown in Figure 2, which illustrates only two decoders; in principle, the model and results can be extended to any number of decoders. In this model, the encoder embeds two independent messages $(W_1, W_2)$ into a single host sequence $S^n$ such that the distortion between the embedded signal $X^n$ and $S^n$ satisfies a given distortion constraint $\Delta$. In this paper, we focus on the case of a degraded broadcast channel, i.e., $p(y,z|x,s) = p(y|x,s)\, p(z|y)$. Decoder 1, or the better decoder, receives the channel output $Y^n$, which is drawn according to a memoryless probability law $p(y|x,s)$ for given $X^n$ and $S^n$. Decoder 2, or the worse decoder, receives the sequence $Z^n$, which is a corrupted version of $Y^n$.

Fig. 2. Block diagram of the broadcast information embedding model.

For this model, we consider the following four cases in recovering, in the sense of probability
of error going to zero, the messages and the host sequences at the decoders:

• Case A′, No Host Recovery: Decoder 1 recovers $(W_1, W_2)$ from $Y^n$; Decoder 2 recovers $W_2$ from $Z^n$.
• Case B′, Host Recovery at the Better Decoder: Decoder 1 recovers $(W_1, W_2)$ and $S^n$ from $Y^n$; Decoder 2 recovers $W_2$ from $Z^n$.
• Case C′, Host Recovery at Both Decoders: Decoder 1 recovers $(W_1, W_2)$ and $S^n$ from $Y^n$; Decoder 2 recovers $W_2$ and $S^n$ from $Z^n$.
• Case D′, Host Recovery at the Worse Decoder: Decoder 1 recovers $(W_1, W_2)$ from $Y^n$; Decoder 2 recovers $W_2$ and $S^n$ from $Z^n$.

Inner and outer bounds for the BC IE capacity region in Case A′ without an encoder distortion constraint are derived in [21]; in this paper, we extend the results to incorporate an encoder distortion constraint in Section III-A. For Case B′, we develop inner and outer bounds for the BC IE capacity region in Section III-B, and for Cases C′ and D′ we derive the BC IE capacity region in Section III-C and Section III-D, respectively. It turns out that the capacity regions in Cases C′ and D′ are identical because the channel output $Z^n$ is a degraded version of $Y^n$. The capacity region for the model considered in Case C′ when compressed hosts are available at the decoders is obtained in [28].

C. Notation

Throughout the paper, random variables and sample values are denoted in a special font, e.g., random variable $X$ and sample value $x$. Alphabets are denoted in calligraphic font, e.g., $\mathcal{X}$, and are all discrete. The shorthand $X_1^n$ represents the sequence $X_{1,1}, X_{1,2}, \ldots, X_{1,n}$, and $X_{1,i}^n$ represents the sequence $X_{1,i}, X_{1,i+1}, \ldots, X_{1,n}$. Finally, $H(\cdot)$ and $I(\cdot\,;\cdot)$ denote the standard information-theoretic quantities of (ensemble average) entropy and mutual information, respectively.

II. MAC IE

In this section, let us formally discuss the model shown in Figure 1.
Host source $i$ generates a sequence $S_i^n = S_{i1} S_{i2} \ldots S_{in}$ of symbols from the discrete alphabet $\mathcal{S}_i$, $i = 1, 2$. We assume that the host sequence pair $(S_1^n, S_2^n)$ is generated by repeated independent drawings of a pair of discrete random variables $(S_1, S_2)$ from a given joint distribution $p(s_1, s_2)$. The host sequence $S_i^n$ is non-causally known at Encoder $i$ for $i = 1, 2$. The message source at Encoder $i$ produces the message index $W_i \in \mathcal{W}_i = \{1, 2, \ldots, M_i\}$ with equal probability $1/M_i$, for $i = 1, 2$. The message index at any encoder is independent of all host sequences and also independent of the messages at all other encoders. The rate at Encoder $i$, in bits per channel use, is defined as $R_i = (1/n) \log_2(M_i)$.

Definition 1: An $(M_1, M_2, D_1^{(n)}, D_2^{(n)}, n)$ MAC IE code consists of sequences of encoding functions at Encoder 1 and Encoder 2, $f_1^n: \mathcal{W}_1 \times \mathcal{S}_1^n \to \mathcal{X}_1^n$ and $f_2^n: \mathcal{W}_2 \times \mathcal{S}_2^n \to \mathcal{X}_2^n$, respectively, and a sequence of decoding functions:

• Recovery of Neither Host: $g_A^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{W}_2)$
• Recovery of One Host: $g_B^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{W}_2, \mathcal{S}_2^n)$
• Recovery of Both Hosts: $g_C^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{S}_1^n, \mathcal{W}_2, \mathcal{S}_2^n)$

The distortions associated with a MAC IE code are defined as $D_i^{(n)} = E\, d_i(S_i^n, X_i^n)$ for the additive distortion function

$$d_i(S_i^n, X_i^n) = \frac{1}{n} \sum_{j=1}^{n} d_i(S_{ij}, X_{ij})$$

for some non-negative bounded distortion functions $d_i(S_{ij}, X_{ij})$, where $i = 1, 2$.

The embedded signals $X_1^n$ and $X_2^n$ from Encoder 1 and Encoder 2, respectively, are transmitted across a MAC $p(y|x_1, s_1, x_2, s_2)$ without feedback, modeled as a memoryless conditional probability distribution

$$\Pr(y^n | x_1^n, s_1^n, x_2^n, s_2^n) = \prod_{j=1}^{n} p(y_j | x_{1j}, s_{1j}, x_{2j}, s_{2j}). \quad (1)$$
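The per-letter additive distortion in Definition 1 is straightforward to compute; a minimal sketch follows, in which the Hamming distortion function and the example sequences are illustrative assumptions:

```python
def avg_distortion(s, x, d):
    """D = (1/n) * sum_j d(s_j, x_j): the additive per-letter distortion
    between a host sequence s and an embedded sequence x."""
    assert len(s) == len(x)
    return sum(d(a, b) for a, b in zip(s, x)) / len(s)

# Hamming distortion: cost 1 per position where the embedding changed the host
hamming = lambda a, b: 0 if a == b else 1

print(avg_distortion([0, 1, 1, 0], [0, 1, 0, 0], hamming))  # -> 0.25
```

A code satisfies the distortion constraint when the expectation of this quantity over the host and message distributions is at most $\Delta_i$ in the limit.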
Definition 2: A rate pair $(R_1, R_2)$ for a given distortion pair $(\Delta_1, \Delta_2)$ is said to be MAC IE achievable if there exists a sequence of $(\lceil 2^{nR_1} \rceil, \lceil 2^{nR_2} \rceil, D_1^{(n)}, D_2^{(n)}, n)$ MAC IE codes with $\lim_{n\to\infty} D_i^{(n)} \le \Delta_i$ for $i = 1, 2$, and $\lim_{n\to\infty} P_e^n = 0$, where $P_e^n$ is the probability of error defined appropriately for each case in the sequel of this section.

Definition 3: For given $p(s_1, s_2)$ and $p(y|x_1, s_1, x_2, s_2)$, let $\mathcal{P}^i_{\mathrm{MAC}}(\Delta_1, \Delta_2)$ be the set of all random variable tuples $(Q, S_1, S_2, (U_1, X_1), (U_2, X_2), Y)$ taking values in finite alphabets $\mathcal{Q}$, $\mathcal{S}$, $\mathcal{U}_1 \times \mathcal{X}_1$, $\mathcal{U}_2 \times \mathcal{X}_2$, and $\mathcal{Y}$, respectively, with joint distribution satisfying the conditions
a) $\sum_{q, (u_1,x_1), (u_2,x_2), y} p(q, s_1, s_2, (u_1,x_1), (u_2,x_2), y) = p(s_1, s_2)$,
b) $p(q, s_1, s_2, (u_1,x_1), (u_2,x_2), y) = p(q)\, p(s_1, s_2)\, p(u_1, x_1 | s_1, q)\, p(u_2, x_2 | s_2, q)\, p(y | x_1, s_1, x_2, s_2)$,
c) $E\, d_i(S_i, X_i) \le \Delta_i$ for $i = 1, 2$.

Definition 4: For given $p(s_1, s_2)$ and $p(y|x_1, s_1, x_2, s_2)$, let $\mathcal{P}^o_{\mathrm{MAC}}(\Delta_1, \Delta_2)$ be the set of all random variable tuples $(Q, S_1, S_2, X_1, X_2, Y)$ taking values in finite alphabets $\mathcal{Q}$, $\mathcal{S}$, $\mathcal{X}_1$, $\mathcal{X}_2$, and $\mathcal{Y}$, respectively, with joint distribution satisfying the conditions
a) $\sum_{q, x_1, x_2, y} p(q, s_1, s_2, x_1, x_2, y) = p(s_1, s_2)$,
b) $p(q, s_1, s_2, x_1, x_2, y) = p(q)\, p(s_1, s_2)\, p(x_1, x_2 | s_1, s_2, q)\, p(y | x_1, s_1, x_2, s_2)$,
c) $E\, d_i(S_i, X_i) \le \Delta_i$ for $i = 1, 2$.

A. Recovery of Neither Host

In this section, we derive an inner bound on the MAC IE capacity region for Case A, in which the decoder recovers only $(W_1, W_2)$ from $Y^n$.
We define the MAC IE capacity region $C_{\mathrm{MAC},A}(\Delta_1, \Delta_2)$ as the closure of the set of all MAC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_A^n(Y^n) \ne (W_1, W_2)] \to 0$ as $n \to \infty$. The following proposition provides an inner bound on the capacity region.

Proposition 1: Let $\mathcal{R}^i_{\mathrm{MAC},A}(\Delta_1, \Delta_2)$ be the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(U_1; U_2, Y | Q) - I(U_1; S_1 | Q), \quad (2a)$$
$$R_2 \le I(U_2; U_1, Y | Q) - I(U_2; S_2 | Q), \quad (2b)$$
$$R_1 + R_2 \le I(U_1, U_2; Y | Q) - I(U_1, U_2; S_1, S_2 | Q) \quad (2c)$$

for some $(Q, S_1, S_2, (U_1, X_1), (U_2, X_2), Y) \in \mathcal{P}^i_{\mathrm{MAC}}(\Delta_1, \Delta_2)$, where $U_1$ and $U_2$ are auxiliary random variables. Then, $\mathcal{R}^i_{\mathrm{MAC},A}(\Delta_1, \Delta_2) \subseteq C_{\mathrm{MAC},A}(\Delta_1, \Delta_2)$.

Remarks

• The inner bound in Proposition 1 is similar to that in [29], which considers a Gaussian MAC with no host recovery, but the result here is for the discrete memoryless case. Because the coding procedures and error events in [29] apply, we do not provide a proof here.
• To achieve the inner bound, distortion-constrained Gel'fand-Pinsker codes can be used to embed $W_1$ and $W_2$ into the host sequences $S_1^n$ and $S_2^n$ such that the distortion constraints $\Delta_1$ and $\Delta_2$ are met, respectively.

B. Recovery of One Host

In this section, we derive inner and outer bounds on the MAC IE capacity region for Case B, in which the decoder recovers $(W_1, W_2, S_2^n)$ from $Y^n$. We define the MAC IE capacity region $C_{\mathrm{MAC},B}(\Delta_1, \Delta_2)$ as the closure of the set of all MAC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_B^n(Y^n) \ne (W_1, W_2, S_2^n)] \to 0$ as $n \to \infty$. The following proposition provides an inner bound on the capacity region.
Proposition 2: Let $\mathcal{R}^i_{\mathrm{MAC},B}(\Delta_1, \Delta_2)$ be the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(U_1; Y | X_2, S_2, Q) - I(U_1; S_1 | X_2, S_2, Q), \quad (3a)$$
$$R_2 \le I(X_2, S_2; Y | U_1, Q) - H(S_2 | U_1, Q), \quad (3b)$$
$$R_1 + R_2 \le I(U_1, X_2, S_2; Y | Q) - H(S_2) - I(U_1; S_1 | X_2, S_2, Q) \quad (3c)$$

for some $(Q, S_1, S_2, (U_1, X_1), (X_2, X_2), Y) \in \mathcal{P}^i_{\mathrm{MAC}}(\Delta_1, \Delta_2)$, where $U_1$ and $Q$ are auxiliary random variables. Then, $\mathcal{R}^i_{\mathrm{MAC},B}(\Delta_1, \Delta_2) \subseteq C_{\mathrm{MAC},B}(\Delta_1, \Delta_2)$.

Remarks

• The inner bound in Proposition 2 is a special case of an inner bound in [24], which considers the state-dependent MAC with state known at one encoder and recovery of only the messages at the decoder. To obtain the inner bound in Proposition 2, substitute $(X_2, S_2)$ in place of $X_2$ into the inner bound in [24].
• To achieve the inner bound, distortion-constrained Gel'fand-Pinsker coding is used to embed $W_1$ into the host sequence $S_1^n$, and distortion-constrained superposition coding is used to embed $W_2$ into the host sequence $S_2^n$.
• If we choose $U_2 = (X_2, S_2)$ in Proposition 1, we obtain the inner bound in Proposition 2. Thus, $\mathcal{R}^i_{\mathrm{MAC},B}(\Delta_1, \Delta_2) \subseteq \mathcal{R}^i_{\mathrm{MAC},A}(\Delta_1, \Delta_2)$.

C. Recovery of Both Hosts

In this section, we derive inner and outer bounds on the MAC IE capacity region for Case C, in which the decoder recovers $(W_1, S_1^n, W_2, S_2^n)$ from $Y^n$. We define the MAC IE capacity region $C_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$ as the closure of all MAC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_C^n(Y^n) \ne (W_1, S_1^n, W_2, S_2^n)] \to 0$ as $n \to \infty$. The following theorem gives an inner bound on the capacity region.
Theorem 1: Let $\mathcal{R}^i_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$ be the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 < I(X_1, S_1; Y | X_2, S_2, Q) - H(S_1 | S_2), \quad (4a)$$
$$R_2 < I(X_2, S_2; Y | X_1, S_1, Q) - H(S_2 | S_1), \quad (4b)$$
$$R_1 + R_2 < I(X_1, S_1, X_2, S_2; Y | Q) - H(S_1, S_2), \quad (4c)$$

for some $(Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) \in \mathcal{P}^i_{\mathrm{MAC}}(\Delta_1, \Delta_2)$. Then, $\mathcal{R}^i_{\mathrm{MAC},C}(\Delta_1, \Delta_2) \subseteq C_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$.

Proof: See Appendix A.

The following theorem gives an outer bound on the capacity region if $S_1$ and $S_2$ are correlated.

Theorem 2: Let $\mathcal{R}^o_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$ be the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 < I(X_1, S_1; Y | X_2, S_2, Q) - H(S_1 | S_2), \quad (5a)$$
$$R_2 < I(X_2, S_2; Y | X_1, S_1, Q) - H(S_2 | S_1), \quad (5b)$$
$$R_1 + R_2 < I(X_1, S_1, X_2, S_2; Y | Q) - H(S_1, S_2), \quad (5c)$$

for some $(Q, S_1, S_2, X_1, X_2, Y) \in \mathcal{P}^o_{\mathrm{MAC}}(\Delta_1, \Delta_2)$. If the host random variables $S_1$ and $S_2$ are correlated, then $C_{\mathrm{MAC},C}(\Delta_1, \Delta_2) \subseteq \mathcal{R}^o_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$. If the host random variables $S_1$ and $S_2$ are independent, then $C_{\mathrm{MAC},C}(\Delta_1, \Delta_2) \subseteq \mathcal{R}^i_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$.

Proof: See Appendix B.

The following corollary of Theorem 1 and Theorem 2 states the MAC IE capacity region for a given pair of distortion constraints $(\Delta_1, \Delta_2)$ if the host random variables $S_1$ and $S_2$ are independent.

Corollary 1: If the host random variables $S_1$ and $S_2$ are independent, then the capacity region $C_{\mathrm{MAC},C}(\Delta_1, \Delta_2)$ is the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 < I(X_1, S_1; Y | X_2, S_2, Q) - H(S_1 | S_2), \quad (6a)$$
$$R_2 < I(X_2, S_2; Y | X_1, S_1, Q) - H(S_2 | S_1), \quad (6b)$$
$$R_1 + R_2 < I(X_1, S_1, X_2, S_2; Y | Q) - H(S_1, S_2), \quad (6c)$$

for some $(Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) \in \mathcal{P}^i_{\mathrm{MAC}}(\Delta_1, \Delta_2)$.
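As an illustrative numerical check on the region in Corollary 1 (the parameters below are assumptions made for the sake of example, not from the paper): take uniform independent binary hosts, $\Delta_1 = \Delta_2 = 0$ so that $X_i = S_i$ is forced, and the channel $Y = X_1 \oplus X_2 \oplus N$ with $N \sim \mathrm{Bern}(q)$. The sum-rate bound (6c) then evaluates to $I(S_1, S_2; Y) - H(S_1, S_2) = -H(S_1, S_2 | Y)$, which is strictly negative for this configuration.

```python
import itertools, math

def entropy(pmf):
    """Entropy in bits of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

q = 0.1  # assumed channel noise parameter
# Joint pmf p(s1, s2, y) with X_i = S_i (zero distortion) and Y = S1 ^ S2 ^ N
joint = {}
for s1, s2, n in itertools.product((0, 1), repeat=3):
    y = s1 ^ s2 ^ n
    pr = 0.25 * (q if n else 1 - q)   # uniform independent hosts
    joint[(s1, s2, y)] = joint.get((s1, s2, y), 0.0) + pr

p_y = {}
for (s1, s2, y), pr in joint.items():
    p_y[y] = p_y.get(y, 0.0) + pr

# Bound (6c) with X_i = S_i:
#   I(S1,S2;Y) - H(S1,S2) = H(Y) - H(S1,S2,Y) = -H(S1,S2 | Y)
sum_rate_bound = entropy(p_y) - entropy(joint)
print(sum_rate_bound)
```

This is consistent with the observation that, because lossless host recovery is demanded at the decoder, even rate pairs arbitrarily close to zero can be unachievable for some host and channel pairs.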
Remarks

• To compute either (4) or (5), it is sufficient to consider a time-sharing random variable $Q$ with $|\mathcal{Q}| \le 4$ by Caratheodory's theorem [30].
• In most communication scenarios, message transmission rates of zero are achievable. However, in this model, message transmission rates of zero can be unachievable if the host source pair $p(s_1, s_2)$ is such that the upper bounds on $R_1$, $R_2$, and $R_1 + R_2$ in (6) are negative. This is because we require host recovery at the decoder as well.

III. DEGRADED BC IE

In this section, let us formally define the BC IE model shown in Figure 2. A host sequence $S^n = (S_1, S_2, \ldots, S_n)$ is an independent and identically distributed (i.i.d.) discrete random sequence whose elements are drawn with probability mass function $p(s)$, $s \in \mathcal{S}$. All alphabets are discrete. We assume that the host sequence $S^n$ is non-causally known at the encoder. The encoder embeds a message pair $(W_1, W_2)$ into the host sequence $S^n$ such that the average distortion between $S^n$ and the embedded sequence $X^n$ satisfies a given distortion constraint $\Delta$. The messages $W_1 \in \{1, 2, \ldots, M_1\}$ and $W_2 \in \{1, 2, \ldots, M_2\}$ are drawn equally likely with probabilities $1/M_1$ and $1/M_2$, respectively. The rate of message $W_i$ is then given by $R_i = (1/n) \log_2 M_i$ bits per channel use, for $i = 1, 2$. It is also assumed that the message $W_i$ is independent of the other message and the host sequence for $i = 1, 2$.
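The physically degraded structure $p(y,z|x,s) = p(y|x,s)\,p(z|y)$ assumed for the BC model can be illustrated with a toy cascade of binary symmetric channels, suppressing the state $s$ for simplicity (the crossover probabilities are illustrative assumptions): composing the two kernels gives the effective channel seen by the worse decoder, and the data-processing inequality guarantees $I(X;Z) \le I(X;Y)$.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

q1, q2 = 0.1, 0.15                               # assumed crossover probabilities
P_y_x = np.array([[1 - q1, q1], [q1, 1 - q1]])   # p(y|x): BSC(q1)
P_z_y = np.array([[1 - q2, q2], [q2, 1 - q2]])   # p(z|y): BSC(q2)

# Physical degradation: p(z|x) = sum_y p(y|x) p(z|y)
P_z_x = P_y_x @ P_z_y
q_eff = float(P_z_x[0, 1])                       # = q1 + q2 - 2*q1*q2

# Mutual informations for a uniform binary input
I_XY = 1 - h2(q1)
I_XZ = 1 - h2(q_eff)
print(q_eff, I_XY, I_XZ)
```

The gap between $I(X;Y)$ and $I(X;Z)$ is what separates the better decoder's rates from the worse decoder's in the regions that follow.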
Definition 5: An $(M_1, M_2, D^{(n)}, n)$ BC IE code consists of a sequence of encoding functions at the encoder, $f^n: \mathcal{W}_1 \times \mathcal{W}_2 \times \mathcal{S}^n \to \mathcal{X}^n$, and a sequence of decoding functions at Decoder 1 and Decoder 2, respectively:

• No Host Recovery: $g_{1,A'}^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{W}_2)$ and $g_{2,A'}^n: \mathcal{Z}^n \to \mathcal{W}_2$
• Host Recovery at the Better Decoder: $g_{1,B'}^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{W}_2, \mathcal{S}^n)$ and $g_{2,B'}^n: \mathcal{Z}^n \to \mathcal{W}_2$
• Host Recovery at Both Decoders: $g_{1,C'}^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{W}_2, \mathcal{S}^n)$ and $g_{2,C'}^n: \mathcal{Z}^n \to (\mathcal{W}_2, \mathcal{S}^n)$
• Host Recovery at the Worse Decoder: $g_{1,D'}^n: \mathcal{Y}^n \to (\mathcal{W}_1, \mathcal{W}_2)$ and $g_{2,D'}^n: \mathcal{Z}^n \to (\mathcal{W}_2, \mathcal{S}^n)$

The associated distortion is defined as $D^{(n)} = E\, d(S^n, X^n)$, where $d(S^n, X^n) = (1/n) \sum_{j=1}^n d(S_j, X_j)$ for a given non-negative bounded distortion measure $d(\cdot,\cdot)$. The embedded signal $X^n$ is transmitted across a discrete memoryless degraded broadcast channel (DMDBC) with state, $p(y|x,s)\, p(z|y)$, modeled as a memoryless conditional probability distribution

$$\Pr(Y^n = y^n, Z^n = z^n | x^n, s^n) = \prod_{j=1}^{n} p(y_j | x_j, s_j)\, p(z_j | y_j). \quad (7)$$

Definition 6: A rate pair $(R_1, R_2)$ for a given distortion $\Delta$ is said to be BC IE achievable if there exists a sequence of $(\lceil 2^{nR_1} \rceil, \lceil 2^{nR_2} \rceil, D^{(n)}, n)$ BC IE codes with $\lim_{n\to\infty} D^{(n)} \le \Delta$ and $\lim_{n\to\infty} P_e^n = 0$, where $P_e^n$ is the probability of error defined appropriately for each case in the sequel of the paper.

Definition 7: For a given $p(s)$ and $p(y|x,s)\, p(z|y)$, let $\mathcal{P}(\Delta)$ be the collection of random variables $(T, S, X, Y, Z)$ with joint probability mass function satisfying the following conditions
a) $p(t, s, x, y, z) = p(t, s, x)\, p(y|x,s)\, p(z|y)$,
b) $\sum_{t \in \mathcal{T}, x \in \mathcal{X}} p(t, x, s) = p(s)$,
c) $E\, d(S, X) \le \Delta$,
where $T$ is an auxiliary random variable.

A. No Host Recovery

In this section, we state inner and outer bounds for the BC IE capacity region in Case A′, in which Decoder 1 recovers $(W_1, W_2)$ from $Y^n$ and Decoder 2 recovers $W_2$ from $Z^n$. The BC IE capacity region $C_{A'}(\Delta)$ is the closure of all BC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_{1,A'}^n(Y^n) \ne (W_1, W_2) \text{ or } g_{2,A'}^n(Z^n) \ne W_2] \to 0$ as $n \to \infty$.

Proposition 3: Let $\mathcal{R}^i_{A'}(\Delta)$ be the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(V; Y | U) - I(V; S | U), \quad (8a)$$
$$R_2 \le I(U; Z) - I(U; S), \quad (8b)$$

for some $((U, V), S, X, Y, Z) \in \mathcal{P}(\Delta)$, where $U$ and $V$ are auxiliary random variables with alphabet sizes satisfying $|\mathcal{U}| \le |\mathcal{X}||\mathcal{S}| + 1$ and $|\mathcal{V}| \le |\mathcal{X}||\mathcal{S}|(|\mathcal{X}||\mathcal{S}| + 1)$, respectively. Let $\mathcal{R}^o_{A'}(\Delta)$ be the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(V; Y | U, W) - I(V; S | U, W), \quad (9a)$$
$$R_2 \le I(U; Z) - I(U; S), \quad (9b)$$
$$R_1 + R_2 \le I(U, V, W; Y) - I(U, V, W; S), \quad (9c)$$

for some $((U, V, W), S, X, Y, Z) \in \mathcal{P}(\Delta)$, where $U$, $V$, and $W$ are auxiliary random variables with alphabet sizes satisfying $|\mathcal{U}| \le |\mathcal{X}||\mathcal{S}| + 2$, $|\mathcal{V}| \le |\mathcal{X}||\mathcal{S}|(|\mathcal{X}||\mathcal{S}| + 2) + 1$, and $|\mathcal{W}| \le (|\mathcal{X}||\mathcal{S}|(|\mathcal{X}||\mathcal{S}| + 2) + 1)(|\mathcal{X}||\mathcal{S}| + 2)|\mathcal{X}||\mathcal{S}| + 1$, respectively. Then, $\mathcal{R}^i_{A'}(\Delta) \subseteq C_{A'}(\Delta) \subseteq \mathcal{R}^o_{A'}(\Delta)$.

Remarks

The inner and outer bounds in Proposition 3 are slightly different from those in [21], which does not consider an encoder distortion constraint. Although essentially the same proofs as in [21] apply, here there is an additional constraint on the joint probability mass functions $\mathcal{P}(\Delta)$ to limit the average distortion between the host $S$ and the channel input $X$ to at most $\Delta$. To achieve the inner bound, Gel'fand-Pinsker codes can be used to embed the messages $(W_1, W_2)$ into the host sequence $S^n$.

B. Host Recovery at the Better Decoder

In this section, we derive inner and outer bounds on the BC IE capacity region in Case B′, in which Decoder 1 recovers $(W_1, W_2)$ and $S^n$ from $Y^n$ and Decoder 2 recovers only $W_2$ from $Z^n$. We define the BC IE capacity region $C_{B'}(\Delta)$ as the closure of all BC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_{1,B'}^n(Y^n) \ne (W_1, W_2, S^n) \text{ or } g_{2,B'}^n(Z^n) \ne W_2] \to 0$ as $n \to \infty$. The following two theorems give inner and outer bounds for the capacity region in this case.

Theorem 3: Let $\mathcal{R}^i_{B'}(\Delta)$ be the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(X, S; Y | U) - H(S | U), \quad (10a)$$
$$R_2 \le I(U; Z) - I(U; S), \quad (10b)$$

for some $(U, S, X, Y, Z) \in \mathcal{P}(\Delta)$, where $U$ is an auxiliary random variable with alphabet size satisfying $|\mathcal{U}| \le |\mathcal{X}||\mathcal{S}| + 1$. Then $\mathcal{R}^i_{B'}(\Delta) \subseteq C_{B'}(\Delta)$.

Proof: See Appendix C.

Theorem 4: Let $\mathcal{R}^o_{B'}(\Delta)$ be the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(X, S; Y | U) - H(S | U), \quad (11a)$$
$$R_2 \le I(U, V; Z) - I(U, V; S), \quad (11b)$$

for some $((U, V), S, X, Y, Z) \in \mathcal{P}(\Delta)$, where $U$ and $V$ are auxiliary random variables with alphabet sizes satisfying $|\mathcal{U}| \le |\mathcal{X}||\mathcal{S}| + 1$ and $|\mathcal{V}| \le |\mathcal{X}||\mathcal{S}|(|\mathcal{X}||\mathcal{S}| + 1)$, respectively. Then $C_{B'}(\Delta) \subseteq \mathcal{R}^o_{B'}(\Delta)$.

Proof: See Appendix D.

Remarks

To obtain the above inner bound, the message $W_2$ is embedded into the host sequence $S^n$ using Gel'fand-Pinsker coding, and the message $W_1$ is embedded into the host sequence using superposition coding such that the distortion constraint is satisfied. The above inner and outer bounds are already convex regions, so there is no need to introduce time-sharing auxiliary random variables.
Let us write the constraint on $R_2$ in the outer bound given in (11) as follows:

$$I(U, V; Z) - I(U, V; S) = I(U; Z) - I(U; S) + \{ I(V; Z | U) - I(V; S | U) \}.$$

The term $I(V; Z | U) - I(V; S | U)$ is the difference between the inner and outer bounds. If $V$ is a deterministic function of $U$, the inner and outer bounds coincide. This clearly shows that $\mathcal{R}^i_{B'}(\Delta) \subseteq \mathcal{R}^o_{B'}(\Delta)$.

C. Host Recovery at Both Decoders

This section derives the BC IE capacity region in Case C′, in which Decoder 1 recovers $(W_1, W_2)$ and $S^n$ from $Y^n$ and Decoder 2 recovers $W_2$ and $S^n$ from $Z^n$. We define the BC IE capacity region $C_{C'}(\Delta)$ as the closure of all BC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_{1,C'}^n(Y^n) \ne (W_1, W_2, S^n) \text{ or } g_{2,C'}^n(Z^n) \ne (W_2, S^n)] \to 0$ as $n \to \infty$.

Theorem 5: $C_{C'}(\Delta)$ is the closure of the set of all rate pairs $(R_1, R_2)$ such that

$$R_1 \le I(X; Y | U, S), \quad (12a)$$
$$R_2 \le I(X, S; Z) - H(S), \quad (12b)$$

for some $(U, S, X, Y, Z) \in \mathcal{P}(\Delta)$, where $U$ is an auxiliary random variable with $|\mathcal{U}| \le |\mathcal{X}||\mathcal{S}|$.

Proof: See Appendix E.

Remarks

To achieve the BC IE capacity region, the messages $(W_1, W_2)$ are embedded into the host sequence using distortion-constrained superposition coding, as in the previous cases, because lossless recovery, i.e., reversible embedding, of the host sequence $S^n$ is required in Case C′.

D. Host Recovery at the Worse Decoder

This section derives the BC IE capacity region in Case D′, in which Decoder 1 recovers $(W_1, W_2)$ from $Y^n$ and Decoder 2 recovers $W_2$ and $S^n$ from $Z^n$. We define the broadcast IE capacity region $C_{D'}(\Delta)$ as the closure of all BC IE achievable rates $(R_1, R_2)$ with $P_e^{(n)} := \Pr[g_{1,D'}^n(Y^n) \ne (W_1, W_2) \text{ or } g_{2,D'}^n(Z^n) \ne (W_2, S^n)] \to 0$ as $n \to \infty$.

Corollary 2: $C_{D'}(\Delta) = C_{C'}(\Delta)$.
Proof: Since $Z^n$ is a degraded version of $Y^n$, and $(W_2, S^n)$ must be reliably decoded from $Z^n$, $(W_2, S^n)$ can also be decoded from $Y^n$. This implies that the BC IE capacity region in Case D′ is the same as in Case C′.

APPENDIX

We present definitions related to strong typicality [30], [31], [32] and important theorems based on strong typicality which will be used throughout this section.

Definition 8: A sequence $x^n \in \mathcal{X}^n$ is said to be $\epsilon$-strongly typical with respect to a distribution $p(x)$ on $\mathcal{X}$, written $x^n \in T_\epsilon^n(X)$, if

$$\left| \frac{1}{n} N(a | x^n) - p(a) \right| < \frac{\epsilon}{|\mathcal{X}|}$$

for all $a \in \mathcal{X}$ with $p(a) > 0$, and $N(a | x^n) = 0$ for all $a \in \mathcal{X}$ with $p(a) = 0$, where $N(a | x^n)$ is the number of occurrences of the symbol $a$ in the sequence $x^n$.

Definition 9: A pair of sequences $(x^n, y^n) \in \mathcal{X}^n \times \mathcal{Y}^n$ is said to be jointly $\epsilon$-strongly typical with respect to a distribution $p(x, y)$ on $\mathcal{X} \times \mathcal{Y}$, written $(x^n, y^n) \in T_\epsilon^n(X, Y)$, if

$$\left| \frac{1}{n} N(a, b | x^n, y^n) - p(a, b) \right| < \frac{\epsilon}{|\mathcal{X}||\mathcal{Y}|}$$

for all $(a, b) \in \mathcal{X} \times \mathcal{Y}$ with $p(a, b) > 0$, and $N(a, b | x^n, y^n) = 0$ for all $(a, b) \in \mathcal{X} \times \mathcal{Y}$ with $p(a, b) = 0$, where $N(a, b | x^n, y^n)$ is the number of occurrences of the symbol pair $(a, b)$ in the pair of sequences $(x^n, y^n)$.

For completeness, we recall theorems on strong typicality [30], [31], [32] which will be used throughout this section.

Lemma 1: Suppose $X^n$ is generated from a discrete memoryless source (DMS) $p(x)$ and $x^n \in T_\epsilon^n(X)$. Then, we have the following:

$$2^{-n[H(X)+\epsilon_1]} < P^n(x^n) < 2^{-n[H(X)-\epsilon_1]}, \quad (13)$$
$$(1-\epsilon_2)\, 2^{n[H(X)-\epsilon_1]} < |T_\epsilon^n(X)| < 2^{n[H(X)+\epsilon_1]}, \quad (14)$$
$$1-\epsilon_2 \le \Pr[X^n \in T_\epsilon^n(X)] \le 1, \quad (15)$$

where $\epsilon_1 \to 0$ as $\epsilon \to 0$, and $\epsilon_2 \to 0$ as $n \to \infty$ for fixed $\epsilon$.
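Definition 8 and the probability bound (13) are simple to check computationally. The sketch below (whose alphabet, pmf, and tolerance are illustrative assumptions) tests $\epsilon$-strong typicality by counting symbol occurrences, and computes the empirical exponent $-(1/n)\log_2 P^n(x^n)$, which Lemma 1 pins to within $\epsilon_1$ of $H(X)$ for typical sequences.

```python
import math
from collections import Counter

def is_strongly_typical(x, p, eps):
    """Definition 8: |N(a|x^n)/n - p(a)| < eps/|alphabet| for p(a) > 0,
    and N(a|x^n) = 0 whenever p(a) = 0. The dict p defines the alphabet."""
    n = len(x)
    counts = Counter(x)
    if any(a not in p for a in counts):   # symbol outside the alphabet
        return False
    for a, pa in p.items():
        if pa == 0 and counts[a] > 0:
            return False
        if abs(counts[a] / n - pa) >= eps / len(p):
            return False
    return True

def prob_exponent(x, p):
    """-(1/n) log2 P^n(x^n) for an i.i.d. source p; near H(X) when x is typical."""
    return -sum(math.log2(p[a]) for a in x) / len(x)

x_typ = [0] * 50 + [1] * 50          # empirical type matches p exactly
p = {0: 0.5, 1: 0.5}
print(is_strongly_typical(x_typ, p, 0.1))                # True
print(is_strongly_typical([0] * 90 + [1] * 10, p, 0.1))  # False
print(prob_exponent(x_typ, p))                           # 1.0 = H(X)
```

When the empirical type of $x^n$ equals $p$ exactly, the exponent equals $H(X)$ with $\epsilon_1 = 0$, which is the center of the interval in (13).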
Lemma 2: Suppose (X^n, Y^n) is generated by a discrete memoryless source (DMS) p(x, y) and (x^n, y^n) ∈ T^n_ε(X, Y). Then

2^{−n[H(X,Y)+ε′_1]} < P^n(x^n, y^n) < 2^{−n[H(X,Y)−ε′_1]},           (16)
(1 − ε′_2) 2^{n[H(X,Y)−ε′_1]} < |T^n_ε(X, Y)| < 2^{n[H(X,Y)+ε′_1]},  (17)
1 − ε′_2 ≤ Pr[(X^n, Y^n) ∈ T^n_ε(X, Y)] ≤ 1,                         (18)

where ε′_1 → 0 as ε → 0, and ε′_2 → 0 as n → ∞ for fixed ε.

Lemma 3: Suppose (X^n, Y^n) is generated by a discrete memoryless source (DMS) p(x, y) and (X^n, Y^n) ∈ T^n_ε(X, Y). Then

2^{−n[H(Y|X)+ε″_1]} < P^n(y^n | x^n) < 2^{−n[H(Y|X)−ε″_1]},                (19)
(1 − ε″_2) 2^{n[H(Y|X)−ε″_1]} < |T^n_ε(X, Y | x^n)| < 2^{n[H(Y|X)+ε″_1]},  (20)
1 − ε″_2 ≤ Pr[(x^n, Y^n) ∈ T^n_ε(X, Y)] ≤ 1,                               (21)

where ε″_1 → 0 as ε → 0, ε″_2 → 0 as n → ∞ for fixed ε, and T^n_ε(X, Y | x^n) = {y^n : (x^n, y^n) ∈ T^n_ε(X, Y)}.

A. Proof of Theorem 1

In this section, we demonstrate the existence of a sequence of MAC IE codes (⌈2^{nR_1}⌉, ⌈2^{nR_2}⌉, D^{(n)}_1, D^{(n)}_2, n) with lim_{n→∞} P^n_e = 0 and lim_{n→∞} D^{(n)}_i ≤ ∆_i for i = 1, 2 if the rate pair (R_1, R_2) satisfies (4). Fix (Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) ∈ P^i_{MAC}(∆_1, ∆_2) and n. We construct a MAC IE code (⌈2^{nR_1}⌉, ⌈2^{nR_2}⌉, D^{(n)}_1, D^{(n)}_2, n) as follows.

• Code construction: Throughout the achievability proof, let i ∈ I = {1, 2}. Generate a time-sharing sequence Q^n = (Q_1, Q_2, ..., Q_n) whose elements are i.i.d. with distribution p(q). At Encoder i, for each s^n_i ∈ S^n_i, generate ⌈2^{nR_i}⌉ sequences X^n_i drawn according to ∏_{j=1}^n p(x_{ij} | s_{ij}, q_j). Call these sequences X^n_i(Q^n, S^n_i, m_i), where m_i ∈ {1, 2, ..., ⌈2^{nR_i}⌉}, i = 1, 2.
In this way, the codebooks are generated at each encoder and revealed to the decoder. Since the sequence Q^n serves as a time-sharing sequence, it can be assumed without loss of generality that Q^n is known at both encoders and at the decoder.

• Encoding: Encoder i, upon observing S^n_i at the output of host source i and the time-sharing sequence Q^n, sends message W_i ∈ {1, 2, ..., ⌈2^{nR_i}⌉} by transmitting the codeword X^n_i(Q^n, S^n_i, W_i). In this way, the codeword X^n_i is chosen and transmitted from Encoder i for a given time-sharing sequence Q^n, a given host sequence S^n_i, and a message W_i.

• Decoding: Fix 0 < ε_1 < ε. Since the decoder knows the time-sharing sequence Q^n = q^n, the decoder, upon receiving the channel output Y^n, searches over all pairs (s^n_1, s^n_2) ∈ T^n_{ε_1}[S_1, S_2] for a tuple (X^n_1(q^n, s^n_1, m_1), X^n_2(q^n, s^n_2, m_2)) such that

(X^n_1(q^n, s^n_1, m_1), X^n_2(q^n, s^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s^n_1, s^n_2].

If a unique such tuple of sequences exists, the decoder declares that (Ŵ_1, Ŵ_2, Ŝ^n_1, Ŝ^n_2) = (m_1, m_2, s^n_1, s^n_2). Otherwise, the decoder declares an error. In this way, the messages and the host sequences are decoded at the decoder.

• Probability of error: The average probability of error is given by

P^n_e = Σ_{(s^n_1, s^n_2, q^n) ∈ S^n_1 × S^n_2 × Q^n} p(q^n) p(s^n_1, s^n_2) Pr[error | (s^n_1, s^n_2, q^n)]
      ≤ Σ_{(q^n, s^n_1, s^n_2) ∉ T^n_{ε_1}[Q, S_1, S_2]} p(q^n) p(s^n_1, s^n_2)
        + Σ_{(q^n, s^n_1, s^n_2) ∈ T^n_{ε_1}[Q, S_1, S_2]} p(q^n) p(s^n_1, s^n_2) Pr[error | (s^n_1, s^n_2, q^n)].   (22)

The first term on the right-hand side of (22), Pr[(q^n, s^n_1, s^n_2) ∉ T^n_{ε_1}[Q, S_1, S_2]], goes to zero as n → ∞ by Lemma 2.
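The random codebook construction above, one independent codebook per host sequence and time-sharing sequence, can be sketched as follows. This is an illustrative toy, not the paper's construction verbatim: `sample_pmf`, the binary alphabet, and the parameter values are our assumptions.

```python
import math
import random

def sample_pmf(pmf, rng):
    """Draw one symbol from a pmf given as {symbol: probability}."""
    r, acc = rng.random(), 0.0
    for x, px in pmf.items():
        acc += px
        if r < acc:
            return x
    return x  # guard against floating-point round-off

def generate_codebook(s_seq, q_seq, rate, p_x_given_sq, rng):
    """For one host sequence s^n and time-sharing sequence q^n, draw
    ceil(2^{nR}) codewords X^n(q^n, s^n, m), each component sampled
    i.i.d. from p(x | s_j, q_j), mirroring the code construction."""
    n = len(s_seq)
    return {
        m: tuple(sample_pmf(p_x_given_sq[(s, q)], rng)
                 for s, q in zip(s_seq, q_seq))
        for m in range(1, math.ceil(2 ** (n * rate)) + 1)
    }

# toy binary example: x tends to copy the host symbol s (low distortion)
rng = random.Random(0)
p_x_given_sq = {(s, q): {s: 0.9, 1 - s: 0.1} for s in (0, 1) for q in (0,)}
book = generate_codebook((0, 1, 1, 0), (0, 0, 0, 0), 0.5, p_x_given_sq, rng)
assert len(book) == 4 and all(len(cw) == 4 for cw in book.values())
```

With n = 4 and R = 0.5 the codebook holds ⌈2^{nR}⌉ = 4 codewords; the decoder's joint-typicality search then runs over all message indices and all typical host pairs, as described above.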
Without loss of generality, assume that the time-sharing sequence is q^n, the output of host source i is s̃^n_i, and W_i = 1 is transmitted from Encoder i. Hence, the codeword X^n_i(q^n, s̃^n_i, 1) is transmitted from Encoder i. It is also assumed that the time-sharing sequence Q^n = q^n is known at both encoders and at the decoder. Let F be the event that (s̃^n_1, s̃^n_2) and q^n are the outputs of the host source pair and of the time-sharing sequence, respectively, and that (q^n, s̃^n_1, s̃^n_2) ∈ T^n_{ε_1}[Q, S_1, S_2]. The following error events are considered to compute Pr[error | F]; each can be made to approach zero as n → ∞.

1) E_1: (X^n_1(q^n, s̃^n_1, 1), X^n_2(q^n, s̃^n_2, 1), Y^n) ∉ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s̃^n_1, s̃^n_2] under the event F. By Lemma 2, Pr[E_1 | F] → 0 as n → ∞.

2) E_2: (X^n_1(q^n, s̃^n_1, m_1), X^n_2(q^n, s̃^n_2, 1), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s̃^n_1, s̃^n_2] under the event F for some m_1 ≠ 1. By Lemmas 2 and 3, Pr[E_2 | F] → 0 as n → ∞ if 0 ≤ R_1 < I(X_1; Y | S_1, S_2, X_2, Q).

3) E_3: (X^n_1(q^n, s^n_1, m_1), X^n_2(q^n, s̃^n_2, 1), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s^n_1, s̃^n_2] under the event F for some m_1 ∈ M_1 and some s^n_1 ≠ s̃^n_1 with s^n_1 ∈ T^n_{ε_1}[S_1, S_2 | s̃^n_2]. By Lemmas 2 and 3, Pr[E_3 | F] → 0 as n → ∞ if 0 ≤ R_1 < I(S_1, X_1; Y | S_2, X_2, Q) − H(S_1 | S_2).

4) E_4: (X^n_1(q^n, s̃^n_1, 1), X^n_2(q^n, s̃^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s̃^n_1, s̃^n_2] under the event F for some m_2 ≠ 1. By Lemmas 2 and 3, Pr[E_4 | F] → 0 as n → ∞ if 0 ≤ R_2 < I(X_2; Y | S_1, X_1, S_2, Q).
5) E_5: (X^n_1(q^n, s̃^n_1, 1), X^n_2(q^n, s^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s̃^n_1, s^n_2] under the event F for some m_2 ∈ M_2 and some s^n_2 ≠ s̃^n_2 with s^n_2 ∈ T^n_{ε_1}[S_1, S_2 | s̃^n_1]. By Lemmas 2 and 3, Pr[E_5 | F] → 0 as n → ∞ if 0 ≤ R_2 < I(S_2, X_2; Y | S_1, X_1, Q) − H(S_2 | S_1).

6) E_6: (X^n_1(q^n, s̃^n_1, m_1), X^n_2(q^n, s^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s̃^n_1, s^n_2] under the event F for some m_1 ∈ M_1, m_2 ∈ M_2, and s^n_2 ≠ s̃^n_2 with s^n_2 ∈ T^n_{ε_1}[S_1, S_2 | s̃^n_1]. By Lemmas 2 and 3, Pr[E_6 | F] → 0 as n → ∞ if R_1 + R_2 < I(X_1, S_2, X_2; Y | S_1, Q) − H(S_2 | S_1).

7) E_7: (X^n_1(q^n, s^n_1, m_1), X^n_2(q^n, s^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s^n_1, s^n_2] under the event F for some m_1 ∈ M_1, m_2 ∈ M_2, and (s^n_1, s^n_2) ≠ (s̃^n_1, s̃^n_2) with (s^n_1, s^n_2) ∈ T^n_{ε_1}[S_1, S_2]. By Lemmas 2 and 3, Pr[E_7 | F] → 0 as n → ∞ if 0 ≤ R_1 + R_2 < I(S_1, X_1, S_2, X_2; Y | Q) − H(S_1, S_2).

8) E_8: (X^n_1(q^n, s^n_1, m_1), X^n_2(q^n, s̃^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, X_1, S_2, X_2, Y | q^n, s^n_1, s̃^n_2] under the event F for some m_1 ≠ 1, m_2 ∈ M_2, and s^n_1 ≠ s̃^n_1 with s^n_1 ∈ T^n_{ε_1}[S_1, S_2 | s̃^n_2]. By Lemmas 2 and 3, Pr[E_8 | F] → 0 as n → ∞ if 0 ≤ R_1 + R_2 < I(S_1, X_1, X_2; Y | S_2, Q) − H(S_1 | S_2).

9) E_9: (X^n_1(q^n, s̃^n_1, m_1), X^n_2(q^n, s̃^n_2, m_2), Y^n) ∈ T^n_ε[Q, S_1, S_2, X_1, X_2, Y | q^n, s̃^n_1, s̃^n_2] under the event F for some m_1 ≠ 1 and m_2 ≠ 1.
By Lemmas 2 and 3, Pr[E_9 | F] → 0 as n → ∞ if 0 ≤ R_1 + R_2 < I(X_1, X_2; Y | S_1, S_2, Q). Then, by the union bound, Pr[error | F] ≤ Σ_{j=1}^9 Pr[E_j | F], which goes to zero as n → ∞ since each Pr[E_j | F] → 0 if the rate pair (R_1, R_2) satisfies (4). We conclude that P^n_e → 0 as n → ∞ if the rate pair (R_1, R_2) satisfies (4).

• Average distortions: We consider two cases in calculating the average distortion between the host sequence S^n_i and the codeword X^n_i for any given message m_i and q^n ∈ T^n_ε[Q]. If X^n_i(q^n, S^n_i, m_i) ∈ T^n_ε(X_i | q^n, S^n_i) for (q^n, S^n_1, S^n_2) ∈ T^n_{ε_1}[Q, S_1, S_2], then the distortion between S^n_i and X^n_i is bounded as

d_i(S^n_i, X^n_i) = (1/n) Σ_{x_i, s_i} N(x_i, s_i | S^n_i, X^n_i) d_i(s_i, x_i) ≤ Σ_{x_i, s_i} p(s_i, x_i) d_i(s_i, x_i) + ε d_{i,max} ≤ ∆_i + ε d_{i,max},   (23)

where d_{i,max} is the maximum distortion over the set S_i × X_i. If instead X^n_i(q^n, S^n_i, m_i) ∉ T^n_ε(X_i | q^n, S^n_i) for (q^n, S^n_1, S^n_2) ∈ T^n_{ε_1}[Q, S_1, S_2], the distortion d_i(S^n_i, X^n_i) can be upper bounded by d_{i,max}. From error event E_1 given F, Pr[X^n_i(q^n, S^n_i, m_i) ∉ T^n_ε(X_i | q^n, S^n_i)] goes to zero as n → ∞. We can then conclude that lim_{n→∞} E d_i(S^n_i, f^n_i(S^n_i, W_i)) ≤ ∆_i by letting ε → 0 and n → ∞. This shows that R^i_{MAC,C}(∆_1, ∆_2) ⊆ C_{MAC,C}(∆_1, ∆_2).

B. Proof of Theorem 2

We first prove the following lemmas, which will be used in the proof of Theorem 2.

Lemma 4: Let (Q_j, S_1, S_2, (X_{1j}, X_{1j}), (X_{2j}, X_{2j}), Y_j) ∈ P^i_{MAC}(∆_{1j}, ∆_{2j}), let Σ_{j=1}^n λ_j = 1 with λ_j > 0 for j ∈ {1, 2, ..., n}, and let ∆_i = Σ_{j=1}^n λ_j ∆_{ij} for i ∈ {1, 2}.
Then there exists (Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) ∈ P^i_{MAC}(∆_1, ∆_2) such that

Σ_{j=1}^n λ_j I(S_1, X_{1j}; Y_j | X_{2j}, S_2, Q_j) = I(S_1, X_1; Y | X_2, S_2, Q),   (24a)
Σ_{j=1}^n λ_j I(S_2, X_{2j}; Y_j | S_1, X_{1j}, Q_j) = I(S_2, X_2; Y | X_1, S_1, Q),   (24b)
Σ_{j=1}^n λ_j I(S_1, X_{1j}, S_2, X_{2j}; Y_j | Q_j) = I(S_1, X_1, S_2, X_2; Y | Q).   (24c)

Proof: If we prove the lemma for n = 2, it extends easily to any n. Let n = 2 and let λ_1 + λ_2 = 1 with λ_j > 0 for j = 1, 2. Let Z be a binary random variable with Pr(Z = j) = λ_j for j = 1, 2, and set

(Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) = ((Z, Q_Z), S_1, S_2, (X_{1Z}, X_{1Z}), (X_{2Z}, X_{2Z}), Y_Z),

that is,

(Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) = ((Q_1, 1), S_1, S_2, (X_{11}, X_{11}), (X_{21}, X_{21}), Y_1) if Z = 1, and ((Q_2, 2), S_1, S_2, (X_{12}, X_{12}), (X_{22}, X_{22}), Y_2) if Z = 2.

To show that (Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) ∈ P^i_{MAC}(∆_1, ∆_2), we check the conditions in Definition 3. The first condition is easily verified. For the second condition, the Markov chain X_1 ↔ (S_1, S_2, Q) ↔ X_2 follows from

I(X_1; X_2 | S_1, S_2, Q) = λ_1 I(X_{11}; X_{21} | S_1, S_2, Q_1) + λ_2 I(X_{12}; X_{22} | S_1, S_2, Q_2) = 0.

Similarly, X_1 ↔ (S_1, Q) ↔ S_2 and S_1 ↔ (S_2, Q) ↔ X_2. Using the distribution on (Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y), we can easily verify that E d_i(S_i, X_i) ≤ λ_1 ∆_{i1} + λ_2 ∆_{i2} for i = 1, 2. Since this distribution satisfies the conditions in Definition 3, we conclude that (Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y) ∈ P^i_{MAC}(∆_1, ∆_2).
Equations (24) then follow directly from the distribution on (Q, S_1, S_2, (X_1, X_1), (X_2, X_2), Y). This completes the proof of the lemma.

Lemma 5: Let (Q_j, S_1, S_2, X_{1j}, X_{2j}, Y_j) ∈ P^o_{MAC}(∆_{1j}, ∆_{2j}), let Σ_{j=1}^n λ_j = 1 with λ_j > 0 for j ∈ {1, 2, ..., n}, and let ∆_i = Σ_{j=1}^n λ_j ∆_{ij} for i ∈ {1, 2}. Then there exists (Q, S_1, S_2, X_1, X_2, Y) ∈ P^o_{MAC}(∆_1, ∆_2) such that

Σ_{j=1}^n λ_j I(X_{1j}, S_1; Y_j | X_{2j}, S_2, Q_j) = I(X_1, S_1; Y | X_2, S_2, Q),   (25a)
Σ_{j=1}^n λ_j I(X_{2j}, S_2; Y_j | X_{1j}, S_1, Q_j) = I(X_2, S_2; Y | X_1, S_1, Q),   (25b)
Σ_{j=1}^n λ_j I(X_{1j}, S_1, X_{2j}, S_2; Y_j | Q_j) = I(X_1, S_1, X_2, S_2; Y | Q).   (25c)

Proof: The proof is omitted because it is similar to that of Lemma 4.

Lemma 6: R^i_{MAC,C}(∆_1, ∆_2) ⊆ R^i_{MAC,C}(∆′_1, ∆′_2) and R^o_{MAC,C}(∆_1, ∆_2) ⊆ R^o_{MAC,C}(∆′_1, ∆′_2) for any ∆_1 ≤ ∆′_1 and ∆_2 ≤ ∆′_2.

Proof: This follows directly from the fact that P^i_{MAC}(∆_1, ∆_2) ⊆ P^i_{MAC}(∆′_1, ∆′_2) and P^o_{MAC}(∆_1, ∆_2) ⊆ P^o_{MAC}(∆′_1, ∆′_2).

We are now ready to prove Theorem 2, i.e., to show that for any sequence of MAC IE codes (⌈2^{nR_1}⌉, ⌈2^{nR_2}⌉, D^{(n)}_1, D^{(n)}_2, n) with lim_{n→∞} P^n_e = 0 and lim_{n→∞} D^{(n)}_i ≤ ∆_i for i = 1, 2, the rates must satisfy (6). Consider a given code of block length n. The joint distribution on W_1 × W_2 × S^n_1 × S^n_2 × X^n_1 × X^n_2 × Y^n is given by

p(w_1, w_2, s^n_1, s^n_2, x^n_1, x^n_2, y^n) = (1/2^{nR_1}) (1/2^{nR_2}) (∏_{j=1}^n p(s_{1j}, s_{2j})) p(x^n_1 | w_1, s^n_1) p(x^n_2 | w_2, s^n_2) ∏_{j=1}^n p(y_j | x_{1j}, x_{2j}, s_{1j}, s_{2j}),

where p(x^n_i | w_i, s^n_i) is 1 if x^n_i = f^n_i(w_i, s^n_i) and 0 otherwise, for i = 1, 2.
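The converse argument that follows repeatedly invokes Fano's inequality. In its scalar form, H(X | X̂) ≤ h(P_e) + P_e log₂(|X| − 1), the bound vanishes as P_e → 0, which is precisely what forces the ε_n terms in the rate bounds to zero. A small numerical sketch (the function names are ours, chosen for illustration):

```python
import math

def binary_entropy(p):
    """h(p) = -p log2 p - (1-p) log2 (1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_bound(pe, alphabet_size):
    """Fano's inequality: H(X | Xhat) <= h(Pe) + Pe * log2(|X| - 1)."""
    return binary_entropy(pe) + pe * math.log2(alphabet_size - 1)

# the bound shrinks to zero with the error probability, even for a
# large alphabet such as the message-and-host tuple in the proof
assert fano_bound(0.0, 1024) == 0.0
assert fano_bound(0.001, 1024) < 0.03
assert fano_bound(0.001, 1024) < fano_bound(0.01, 1024)
```

In the proof, the alphabet is the tuple (W_1, W_2, S^n_1, S^n_2), whose log-cardinality grows linearly in n; dividing the Fano bound by n is what yields the vanishing per-symbol term ε_n.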
By Fano's inequality [30], the conditional entropy of (W_1, W_2, S^n_1, S^n_2) given Y^n is bounded as

H(W_1, W_2, S^n_1, S^n_2 | Y^n) ≤ n(R_1 + R_2 + log_2(|S_1||S_2|)) P^n_e + 1 =: nε_n,   (26)

where ε_n → 0 as P^n_e → 0. We can now bound the rate R_1 as

nR_1 ≤ H(W_1) = H(W_1 | W_2)
(a) = H(W_1, S^n_1 | W_2, S^n_2) − H(S^n_1 | S^n_2)
= H(W_1, S^n_1 | W_2, S^n_2) − H(W_1, S^n_1 | W_2, S^n_2, Y^n) + H(W_1, S^n_1 | W_2, S^n_2, Y^n) − H(S^n_1 | S^n_2)
(b) ≤ H(W_1, S^n_1 | W_2, S^n_2) − H(W_1, S^n_1 | W_2, S^n_2, Y^n) − H(S^n_1 | S^n_2) + nε_n
(c) = H(W_1, S^n_1 | W_2, X^n_2, S^n_2) − H(W_1, S^n_1 | Y^n, W_2, X^n_2, S^n_2) − H(S^n_1 | S^n_2) + nε_n
= I(W_1, S^n_1; Y^n | W_2, X^n_2, S^n_2) − H(S^n_1 | S^n_2) + nε_n
= H(Y^n | W_2, X^n_2, S^n_2) − H(Y^n | W_2, X^n_2, S^n_2, W_1, S^n_1) − H(S^n_1 | S^n_2) + nε_n
(d) = H(Y^n | W_2, X^n_2, S^n_2) − H(Y^n | W_2, X^n_2, S^n_2, W_1, S^n_1, X^n_1) − H(S^n_1 | S^n_2) + nε_n
(e) = Σ_{j=1}^n [H(Y_j | W_2, X^n_2, S^n_2, Y^{j−1}) − H(Y_j | W_2, X^n_2, S^n_2, W_1, S^n_1, X^n_1, Y^{j−1}) − H(S_{1j} | S^n_2, S^{j−1}_1)] + nε_n
(f) = Σ_{j=1}^n [H(Y_j | W_2, X^n_2, S^n_2, Y^{j−1}) − H(Y_j | X_{1j}, S_{1j}, X_{2j}, S_{2j}) − H(S_{1j} | S_{2j})] + nε_n
(g) ≤ Σ_{j=1}^n [H(Y_j | X_{2j}, S_{2j}) − H(Y_j | X_{1j}, S_{1j}, X_{2j}, S_{2j}) − H(S_{1j} | S_{2j})] + nε_n
= Σ_{j=1}^n [I(X_{1j}, S_{1j}; Y_j | X_{2j}, S_{2j}) − H(S_{1j} | S_{2j})] + nε_n,

where (a) follows from the facts that W_1 and W_2 are independent of each other and that (W_1, W_2) is independent of (S^n_1, S^n_2);
(b) follows from Fano's inequality; (c) follows from the fact that X^n_2 is a function of (W_2, S^n_2); (d) follows from the fact that X^n_1 is a function of (W_1, S^n_1); (e) follows from the chain rules for entropy and mutual information; (f) follows from the fact that Y_j depends only on X_{1j}, X_{2j}, S_{1j}, and S_{2j} by the memoryless property of the channel, together with S_{1j} ↔ S_{2j} ↔ (S^{j−1}_1, S^{j−1}_2, S^n_{2,j+1}); and (g) follows from removing conditioning. Hence,

R_1 ≤ (1/n) Σ_{j=1}^n [I(X_{1j}, S_{1j}; Y_j | X_{2j}, S_{2j}) − H(S_{1j} | S_{2j})] + ε_n.

Similarly, we can bound R_2 and R_1 + R_2 as

R_2 ≤ (1/n) Σ_{j=1}^n [I(X_{2j}, S_{2j}; Y_j | X_{1j}, S_{1j}) − H(S_{2j} | S_{1j})] + ε_n,
R_1 + R_2 ≤ (1/n) Σ_{j=1}^n [I(X_{1j}, S_{1j}, X_{2j}, S_{2j}; Y_j) − H(S_{1j}, S_{2j})] + ε_n.

If the host random variables S_1 and S_2 are correlated, the random vector (Q_j, S_1, S_2, X_{1j}, X_{2j}, Y_j) with p(q_j = j) = 1 belongs to the set P^o_{MAC}(E[d_1(S_{1j}, X_{1j})], E[d_2(S_{2j}, X_{2j})]) for j ∈ {1, 2, ..., n}. By Lemma 5, there exists a random vector (Q, S_1, S_2, X̃_1, X̃_2, Ỹ) ∈ P^o_{MAC}((1/n) Σ_{j=1}^n E[d_1(S_{1j}, X_{1j})], (1/n) Σ_{j=1}^n E[d_2(S_{2j}, X_{2j})]) such that

(1/n) Σ_{j=1}^n I(X_{1j}, S_{1j}; Y_j | X_{2j}, S_{2j}) = I(X̃_1, S_1; Ỹ | X̃_2, S_2, Q),
(1/n) Σ_{j=1}^n I(X_{2j}, S_{2j}; Y_j | X_{1j}, S_{1j}) = I(X̃_2, S_2; Ỹ | X̃_1, S_1, Q),
(1/n) Σ_{j=1}^n I(X_{1j}, S_{1j}, X_{2j}, S_{2j}; Y_j) = I(X̃_1, S_1, X̃_2, S_2; Ỹ | Q).   (28)

As n → ∞, we can conclude that

C_{MAC,C}(∆_1, ∆_2) ⊆ R^o_{MAC,C}(lim_{n→∞} (1/n) Σ_{j=1}^n E[d_1(S_{1j}, X_{1j})], lim_{n→∞} (1/n) Σ_{j=1}^n E[d_2(S_{2j}, X_{2j})]) ⊆ R^o_{MAC,C}(∆_1, ∆_2),   (29)

where the last inclusion follows from Lemma 6.
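The single-letterization step just used (Lemma 5, and Lemma 4 for the independent-host case) rests on the identity I(X; Y | Q) = Σ_j λ_j I(X_j; Y_j) when a time-sharing variable Q equals j with probability λ_j and selects the j-th component pair. A quick numerical check, with two toy joint pmfs of our own choosing:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def time_shared_mi(lams, joints):
    """I(X;Y|Q) when Q = j with probability lams[j] selects joints[j]."""
    return sum(lam * mutual_information(j) for lam, j in zip(lams, joints))

j1 = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}      # correlated
j2 = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}  # independent
assert abs(mutual_information(j2)) < 1e-12
assert abs(time_shared_mi([0.5, 0.5], [j1, j2])
           - 0.5 * mutual_information(j1)) < 1e-12
```

The conditional mutual information given Q is, by definition, exactly the λ-weighted average of the per-component mutual informations, which is why the n per-letter terms above collapse to a single-letter expression with the auxiliary Q.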
If the host random variables S_1 and S_2 are independent, the independence of the messages W_1 and W_2 gives

p(x_{1j}, x_{2j} | s_{1j}, s_{2j}) = p(x_{1j} | s_{1j}) p(x_{2j} | s_{2j}).

Then the random variable tuple (Q_j, S_1, S_2, (X_{1j}, X_{1j}), (X_{2j}, X_{2j}), Y_j) with p(q_j = j) = 1 belongs to the set P^i_{MAC}(E[d_1(S_{1j}, X_{1j})], E[d_2(S_{2j}, X_{2j})]) for j ∈ {1, 2, ..., n}. By Lemma 4, there exists a random vector

(Q, S_1, S_2, (X̃_1, X̃_1), (X̃_2, X̃_2), Ỹ) ∈ P^i_{MAC}((1/n) Σ_{j=1}^n E[d_1(S_{1j}, X_{1j})], (1/n) Σ_{j=1}^n E[d_2(S_{2j}, X_{2j})])

such that (28) holds. As n → ∞, we can conclude that

C_{MAC,C}(∆_1, ∆_2) ⊆ R^i_{MAC,C}(lim_{n→∞} (1/n) Σ_{j=1}^n E[d_1(S_{1j}, X_{1j})], lim_{n→∞} (1/n) Σ_{j=1}^n E[d_2(S_{2j}, X_{2j})]) ⊆ R^i_{MAC,C}(∆_1, ∆_2),   (30)

where the last inclusion follows from Lemma 6. This completes the proof of Theorem 2.

C. Proof of Theorem 3

In this section, we show that R^i_{B′}(∆) ⊆ C_{B′}(∆). Fix the random vector (U, S, X, Y, Z) ∈ P(∆). For each n, we construct a (⌈2^{nR_1}⌉, ⌈2^{nR_2}⌉, D^{(n)}, n) BC IE code as follows.

• Code construction: Generate ⌈2^{nR_2}⌉ · 2^{n(I(U;S)+ε)} sequences U^n drawn according to ∏_{j=1}^n p(u_j). Distribute these sequences randomly into ⌈2^{nR_2}⌉ bins such that each bin has 2^{n(I(U;S)+ε)} sequences, and label the sequences in bin m_2 ∈ {1, 2, ..., ⌈2^{nR_2}⌉} as U^n(m_2). For each (s^n, U^n) ∈ T^n_ε[S, U], generate ⌈2^{nR_1}⌉ sequences X^n according to ∏_{j=1}^n p(x_j | u_j, s_j), and label them X^n(s^n, U^n, m_1), where m_1 ∈ {1, 2, ..., ⌈2^{nR_1}⌉}. These codebooks are revealed to the encoder and both decoders.
• Encoding: The encoder, upon observing S^n ∈ T^n_ε[S] at the output of the host source, embeds message W_2 ∈ {1, 2, ..., ⌈2^{nR_2}⌉} into the host sequence by looking for a sequence U^n in bin W_2 such that U^n(W_2) ∈ T^n_ε[S, U | S^n]. If no such sequence U^n(W_2) exists, the encoder declares an error; otherwise, the encoder embeds message W_1 ∈ {1, 2, ..., ⌈2^{nR_1}⌉} into the host sequence S^n by choosing the codeword X^n(S^n, U^n(W_2), W_1).

• Decoder 1: Decoder 1, upon receiving Y^n, which is a distorted or attacked version of the embedded sequence X^n, looks for a sequence U^n(m_2), m_2 ∈ {1, 2, ..., ⌈2^{nR_2}⌉}, such that (U^n(m_2), Y^n) ∈ T^n_ε[U, Y]. If no unique codeword U^n(m_2) exists, Decoder 1 declares an error; otherwise, Decoder 1 declares that Ŵ_2 = m_2. Having decoded the sequence U^n(Ŵ_2), Decoder 1 then looks for X^n(s^n, U^n(Ŵ_2), m_1) such that (X^n(s^n, U^n(Ŵ_2), m_1), Y^n) ∈ T^n_ε[S, U, X, Y | s^n, U^n(Ŵ_2)] for some s^n ∈ T^n_ε[U, S | U^n(Ŵ_2)] and m_1 ∈ {1, 2, ..., ⌈2^{nR_1}⌉}. If a unique codeword X^n(s^n, U^n(Ŵ_2), m_1) exists, Decoder 1 declares that (Ŵ_1, Ŝ^n) = (m_1, s^n); otherwise, it declares an error.

• Decoder 2: Decoder 2, upon receiving Z^n, which is a degraded version of Y^n, looks for a sequence U^n(m_2), m_2 ∈ {1, 2, ..., ⌈2^{nR_2}⌉}, such that (U^n(m_2), Z^n) ∈ T^n_ε[U, Z]. If a unique codeword U^n(m_2) exists, Decoder 2 declares that Ŵ_2 = m_2; otherwise, Decoder 2 declares an error.

• Probability of error: The average probability of error is given by

P^n_e = Σ_{s^n ∈ S^n} p(s^n) Pr[error | s^n] ≤ Σ_{s^n ∉ T^n_ε[S]} p(s^n) + Σ_{s^n ∈ T^n_ε[S]} p(s^n) Pr[error | s^n],   (31)

where the first term, Pr[s^n ∉ T^n_ε[S]], goes to zero as n → ∞ by the strong asymptotic equipartition property (AEP).
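The bin-and-search encoder above (find a U^n in bin W_2 jointly typical with the host) can be sketched as follows. The toy "typicality" tests and the sizes are illustrative stand-ins of ours, not the paper's; real joint typicality would compare empirical joint types against p(u, s) as in Definition 9.

```python
import random

def binning_encoder(bins, w2, s_seq, jointly_typical):
    """Search bin w2 for a U^n jointly typical with the host s^n; return
    None on failure (the encoding-error event E_0 in the proof)."""
    for u_seq in bins[w2]:
        if jointly_typical(u_seq, s_seq):
            return u_seq
    return None

# deterministic toy check: "typical" = exact agreement
assert binning_encoder({0: [(0, 0), (1, 1)]}, 0, (1, 1),
                       lambda u, s: u == s) == (1, 1)
assert binning_encoder({0: [(0, 0)]}, 0, (1, 1),
                       lambda u, s: u == s) is None

# with enough sequences per bin (2^{n(I(U;S)+eps)} in the proof), a match
# exists with high probability; here "typical" = small Hamming distance
rng = random.Random(1)
n, bin_size = 8, 64
bins = {0: [tuple(rng.randint(0, 1) for _ in range(n))
            for _ in range(bin_size)]}
host = tuple(rng.randint(0, 1) for _ in range(n))
u = binning_encoder(bins, 0, host,
                    lambda u, s: sum(a != b for a, b in zip(u, s)) <= 2)
```

The rate-distortion argument for E_0 in the proof is exactly this covering phenomenon: once a bin holds slightly more than 2^{nI(U;S)} independently drawn sequences, the probability that none of them is jointly typical with the host vanishes.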
Without loss of generality, it can be assumed that the output of the host source is s̃^n, and that the message pair (W_1, W_2) = (1, 1) is to be embedded into the host sequence s̃^n. Let F be the event that the host source output is s̃^n. To compute Pr[error | F], write the error event as E_0 ∪ E_1 ∪ E_2 ∪ E_3, where:

1) E_0 is the event that there is no U^n(1) such that U^n(1) ∈ T^n_ε[U, S | s̃^n]. Using well-known rate-distortion arguments, the probability of this event approaches zero as n goes to infinity, since each bin has 2^{n(I(U;S)+ε)} sequences U^n. Conditioned on the event F ∩ E^c_0, it can thus be assumed that Ũ^n(1) is jointly strongly typical with the host sequence s̃^n. Hence, the embedded sequence X^n(s̃^n, Ũ^n(1), 1) is generated and transmitted from the encoder.

2) E_1 is the event that (Ũ^n(1), X^n(s̃^n, Ũ^n(1), 1), Y^n, Z^n) ∉ T^n_ε[S, U, X, Y, Z | s̃^n]. By the strong AEP, Pr[E_1 | F ∩ E^c_0] → 0 as n → ∞.

3) E_2 := E_{2,1} ∪ (E^c_{2,1} ∩ E_{2,2}), where E_{2,1} is the event that (U^n, Y^n) ∈ T^n_ε[U, Y] for some U^n ≠ Ũ^n(1), and E_{2,2} is the event that (X^n(s^n, Ũ^n(1), m_1), Y^n) ∈ T^n_ε[S, U, X, Y | s^n, Ũ^n(1)] for some m_1 ≠ 1 or some s^n ∈ {s^n : s^n ≠ s̃^n, s^n ∈ T^n_ε[U, S | Ũ^n(1)]}. It can be shown that Pr[E_{2,1} | F ∩ E^c_0] → 0 as n → ∞ if R_2 ≤ I(U; Y) − I(U; S), and that Pr[E_{2,2} | F ∩ E^c_0 ∩ E^c_{2,1}] → 0 as n → ∞ if R_1 ≤ I(S, X; Y | U) − H(S | U).

4) E_3 is the event that (U^n, Z^n) ∈ T^n_ε[U, Z] for some U^n ≠ Ũ^n(1). Using Gel'fand–Pinsker arguments, it can be shown that Pr[E_3 | F ∩ E^c_0] → 0 as n → ∞ if R_2 ≤ I(U; Z) − I(U; S). Because the broadcast channel is degraded, this constraint on R_2 is more restrictive than the previous one.
Thus, by the union bound, P^n_e goes to zero as n → ∞ if (R_1, R_2) ∈ R^i_{B′}(∆).

• Average distortion: Since (X^n, s̃^n) is jointly strongly typical with high probability and the distribution belongs to P(∆), it can be shown that the average distortion D^{(n)} associated with the generated code satisfies the distortion constraint ∆ as n → ∞, as in the proof of Theorem 1.

D. Proof of Theorem 4

In this section, we show that C_{B′}(∆) ⊆ R^o_{B′}(∆). Given a sequence of (⌈2^{nR_1}⌉, ⌈2^{nR_2}⌉, D^{(n)}, n) BC IE codes, i.e., X^n = f^n(W_1, W_2, S^n), g^n_{1,B′}(Y^n) = (Ŵ_1, Ŵ_2, Ŝ^n), and g^n_{2,B′}(Z^n) = Ŵ_2, with lim_{n→∞} P^n_e = 0 and lim_{n→∞} D^{(n)} ≤ ∆, we show that the rate pair (R_1, R_2) must satisfy (11) for some ((U, V), S, X, Y, Z) ∈ P(∆). Consider a given code of block length n. The joint distribution on W_1 × W_2 × S^n × X^n × Y^n × Z^n induced by the code is given by

p(w_1, w_2, s^n, x^n, y^n, z^n) = (1/(⌈2^{nR_1}⌉⌈2^{nR_2}⌉)) p(s^n) p(x^n | w_1, w_2, s^n) ∏_{j=1}^n p(y_j | x_j, s_j) p(z_j | y_j),

where p(x^n | w_1, w_2, s^n) is 1 if x^n = f^n(w_1, w_2, s^n) and 0 otherwise.
We can bound the rate R_1 as follows:

nR_1 ≤ H(W_1)
(a) = H(W_1, S^n | W_2) − H(S^n | W_2)
= H(W_1, S^n | W_2) − H(W_1, S^n | W_2, Y^n) + H(W_1, S^n | W_2, Y^n) − H(S^n | W_2)
(b) ≤ I(W_1, S^n; Y^n | W_2) − H(S^n | W_2) + nε_n
(c) = Σ_{j=1}^n [I(W_1, S^n; Y_j | W_2, Y^{j−1}) − H(S_j | W_2)] + nε_n
(d) = Σ_{j=1}^n [H(Y_j | W_2, Y^{j−1}) − H(Y_j | W_2, Y^{j−1}, W_1, S^n, X^n) − H(S_j | W_2)] + nε_n
(e) = Σ_{j=1}^n [H(Y_j | W_2, Y^{j−1}, Z^{j−1}) − H(Y_j | S_j, X_j) − H(S_j | W_2)] + nε_n
(f) ≤ Σ_{j=1}^n [H(Y_j | W_2, Z^{j−1}) − H(Y_j | S_j, X_j, W_2, Z^{j−1}) − H(S_j | W_2, Z^{j−1})] + nε_n
= Σ_{j=1}^n [I(S_j, X_j; Y_j | W_2, Z^{j−1}) − H(S_j | W_2, Z^{j−1})] + nε_n,   (32)

where ε_n → 0 as n → ∞, and (a) follows from the fact that W_1, W_2, and S^n are mutually independent; (b) follows from Fano's inequality; (c) follows from the chain rule and the fact that S^n is i.i.d. and independent of W_2; (d) follows from the fact that X^n is a deterministic function of (W_1, W_2, S^n); (e) follows from the degraded and memoryless properties of the broadcast channel; and (f) follows from removing conditioning in the positive term and introducing conditioning in the negative terms.
We can also bound the rate R_2 as follows:

nR_2 ≤ H(W_2)
(a) ≤ I(W_2; Z^n) + nε_n
= Σ_{j=1}^n [I(W_2, S^n_{j+1}; Z^j) − I(W_2, S^n_j; Z^{j−1})] + nε_n
(b) = Σ_{j=1}^n [I(W_2, S^n_{j+1}; Z^{j−1}) + I(W_2, S^n_{j+1}; Z_j | Z^{j−1}) − I(W_2, S^n_{j+1}; Z^{j−1}) − I(S_j; Z^{j−1} | W_2, S^n_{j+1})] + nε_n
= Σ_{j=1}^n [I(W_2, S^n_{j+1}; Z_j | Z^{j−1}) − I(S_j; Z^{j−1} | W_2, S^n_{j+1})] + nε_n
= Σ_{j=1}^n [H(Z_j | Z^{j−1}) − H(Z_j | W_2, Z^{j−1}, S^n_{j+1}) − H(S_j | W_2, S^n_{j+1}) + H(S_j | W_2, Z^{j−1}, S^n_{j+1})] + nε_n
(c) ≤ Σ_{j=1}^n [H(Z_j) − H(Z_j | W_2, Z^{j−1}, S^n_{j+1}) − H(S_j) + H(S_j | W_2, Z^{j−1}, S^n_{j+1})] + nε_n
= Σ_{j=1}^n [I(W_2, Z^{j−1}, S^n_{j+1}; Z_j) − I(W_2, Z^{j−1}, S^n_{j+1}; S_j)] + nε_n,   (33)

where ε_n → 0 as n → ∞, and (a) follows from Fano's inequality; (b) follows from applying the chain rule to (Z^{j−1}, Z_j) and (S^n_{j+1}, S_j) in the first and second mutual information terms, respectively; and (c) follows from removing conditioning and the fact that S^n is i.i.d. and independent of W_2.

Let Ũ_j := (W_2, Z^{j−1}) and V_j := S^n_{j+1} for j = 1, 2, ..., n. We can then write (32) and (33) as

R_1 ≤ I(S, X; Y | Q, Ũ) − H(S | Q, Ũ) + ε_n,   (34a)
R_2 ≤ I(Ũ, V; Z | Q) − I(Ũ, V; S | Q) + ε_n,   (34b)

where Q is uniformly distributed on the set {1, 2, ..., n} and the joint probability distribution on (S, Q, Ũ, V, X, Y, Z) is p(S = s, Q = q, Ũ = ũ, V = v, X = x) p(y | x, s) p(z | y), with

p(S = s, Q = q, Ũ = ũ, V = v, X = x) = p(s) p(q) p(Ũ_q = ũ, V_q = v | s, q) p(X_q = x | s, q, ũ, v).
Finally, we can write (34) as

R_1 ≤ I(S, X; Y | U) − H(S | U) + ε_n,
R_2 ≤ I(U, V; Z) − I(U, V; S) + ε_n,

where U := (Q, Ũ), since I(Ũ, V; Z | Q) ≤ I(Q, Ũ, V; Z) and I(Q; S) = 0. Given any δ > 0 and sufficiently large n, the associated distortion D^{(n)} satisfies

∆ + δ ≥ D^{(n)} = E d(X^n, S^n) = (1/n) Σ_{j=1}^n Σ_{x,s} p(X_j = x, S_j = s) d(x, s) = Σ_{x,s} p(X = x, S = s) d(x, s) = E d(X, S).

As n → ∞ and δ → 0, we have ((U, V), S, X, Y, Z) ∈ P(∆) and (R_1, R_2) ∈ R^o_{B′}(∆). Thus, C_{B′}(∆) ⊆ R^o_{B′}(∆).

E. Proof of Theorem 5

1) Achievability: In this section, we show that R^i_{C′}(∆) ⊆ C_{C′}(∆). Fix the random vector (U, S, X, Y, Z) ∈ P(∆). For each n, we construct a (⌈2^{nR_1}⌉, ⌈2^{nR_2}⌉, D^{(n)}, n) BC IE code as follows.

• Code construction: At the encoder, for each s^n ∈ S^n, generate ⌈2^{nR_2}⌉ sequences U^n drawn according to ∏_{j=1}^n p(u_j | s_j), and denote these sequences by U^n(s^n, m_2), where m_2 ∈ {1, 2, ..., ⌈2^{nR_2}⌉}. For each pair (s^n, U^n), generate ⌈2^{nR_1}⌉ sequences X^n drawn according to ∏_{j=1}^n p(x_j | u_j, s_j), and call these sequences X^n(s^n, m_1, m_2), where m_1 ∈ {1, 2, ..., ⌈2^{nR_1}⌉}. In this way, the codebook is generated at the encoder and revealed to both decoders.

• Encoding: The encoder, upon observing s^n at the output of the host source, sends messages W_1 ∈ {1, 2, ..., ⌈2^{nR_1}⌉} and W_2 ∈ {1, 2, ..., ⌈2^{nR_2}⌉} by transmitting the codeword X^n(s^n, W_1, W_2). In this way, the codeword X^n is chosen and transmitted from the encoder for a given host sequence s^n and a given message pair (W_1, W_2).

• Decoder 1: Decoder 1, upon receiving the channel output Y^n, looks for a sequence U^n(s^n, m_2) such that (U^n(s^n, m_2), Y^n) ∈ T^n_ε[U, Y | s^n] for some s^n ∈ T^n_{ε_1}[S].
If a unique codeword U^n(s^n, m_2) exists, Decoder 1 then looks for X^n(s^n, m_1, m_2) such that (X^n(s^n, m_1, m_2), Y^n) ∈ T^n_ε[X, Y | s^n, U^n(s^n, m_2)]. If a unique codeword X^n(s^n, m_1, m_2) exists, Decoder 1 declares that (Ŵ_1, Ŵ_2, Ŝ^n) = (m_1, m_2, s^n). In this way, the messages intended for Decoder 1 and the host sequence are decoded at Decoder 1.

• Decoder 2: Decoder 2, upon receiving the channel output Z^n, looks for a sequence U^n(s^n, m_2) such that (U^n(s^n, m_2), Z^n) ∈ T^n_ε[U, Z | s^n] for some s^n ∈ T^n_{ε_1}[S]. If a unique codeword U^n(s^n, m_2) exists, Decoder 2 declares that (Ŵ_2, Ŝ^n) = (m_2, s^n). Otherwise, Decoder 2 declares an error. In this way, the message intended for Decoder 2 and the host sequence are decoded at Decoder 2.

• Probability of error: The average probability of error is given by

P^n_e = Σ_{s^n ∈ S^n} p(s^n) Pr[error | s^n] ≤ Σ_{s^n ∉ T^n_{ε_1}[S]} p(s^n) + Σ_{s^n ∈ T^n_{ε_1}[S]} p(s^n) Pr[E(1) ∪ E(2) | s^n],   (35)

where E(i) is the event that an error is made at Decoder i, for i = 1, 2. The first term on the right-hand side of (35), Pr[s^n ∉ T^n_{ε_1}[S]], goes to zero as n → ∞ by Lemma 2.

Without loss of generality, it can be assumed that the output of the host source is s̃^n, and that (W_1, W_2) = (1, 1) is transmitted from the encoder. Hence, the codeword X^n(s̃^n, 1, 1) is transmitted from the encoder. Let F be the event that s̃^n ∈ T^n_{ε_1}[S] is the output of the host source. The following error events are considered to compute Pr[E(2) | F]; each can be made to approach zero as n → ∞.

1) E_1: (U^n(s̃^n, 1), X^n(s̃^n, 1, 1), Y^n, Z^n) ∉ T^n_ε[S, U, X, Y, Z | s̃^n] under the event F.
By Lemma 2, Pr[E_1 | F] → 0 as n → ∞.

2) E_2: (U^n(s̃^n, m_2), Z^n) ∈ T^n_ε[S, U, Z | s̃^n] under the event F ∩ E^c_1 for some m_2 ≠ 1. It can be shown that Pr[E_2 | F] → 0 as n → ∞ by using Lemmas 2 and 3 if 0 ≤ R_2 < I(U; Z | S).

3) E_3: (U^n(s^n, m_2), Z^n) ∈ T^n_ε[S, U, Z | s^n] under the event F ∩ E^c_1 for some m_2 and some s^n ≠ s̃^n. It can be shown that Pr[E_3 | F] → 0 as n → ∞ by using Lemmas 2 and 3 if 0 ≤ R_2 < I(U, S; Z) − H(S).

From the above error events, it can be concluded that Pr[E(2) | F] → 0 as n → ∞ if 0 ≤ R_2 < I(U, S; Z) − H(S). The following error events are considered to compute Pr[E(1) | F]; each can be made to approach zero as n → ∞.

1) E_4: (U^n(s^n, m_2), Y^n) ∈ T^n_ε[S, U, Y | s^n] for some m_2 ≠ 1 or some s^n ≠ s̃^n. By considering error events similar to E_2 and E_3, it can be shown that Pr[E_4 | F, E^c_1] → 0 as n → ∞ if 0 ≤ R_2 < I(U, S; Y) − H(S).

2) E_5: (X^n(s̃^n, m_1, 1), Y^n) ∈ T^n_ε[S, U, X, Y | s̃^n, U^n(s̃^n, 1)] for some m_1 ≠ 1. It can be shown that Pr[E_5 | F, E^c_1, E^c_4] → 0 as n → ∞ if 0 ≤ R_1 < I(X; Y | S, U).

Then, by the union bound, Pr[E(1) ∪ E(2) | F] goes to zero as n → ∞ if the rate pair (R_1, R_2) satisfies (12). It can be concluded that P^n_e → 0 as n → ∞ if the rate pair (R_1, R_2) satisfies (12).

• Average distortion: Since (X^n, s̃^n) is jointly strongly typical with high probability and the distribution belongs to P(∆), it can be shown that the average distortion D^{(n)} associated with the generated code satisfies the distortion constraint ∆ as n → ∞, as in the proof of Theorem 1.
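The average-distortion calculation used in both achievability proofs (equation (23) and the step just above) is the empirical joint type of (S^n, X^n) weighted by the per-letter distortion measure. A minimal sketch, with Hamming distortion as our illustrative choice of d:

```python
from collections import Counter

def average_distortion(s_seq, x_seq, d):
    """d(s^n, x^n) = (1/n) sum_j d(s_j, x_j), computed through the joint
    type N(s, x | s^n, x^n) as in (23)."""
    n = len(s_seq)
    counts = Counter(zip(s_seq, x_seq))
    return sum(cnt * d[(s, x)] for (s, x), cnt in counts.items()) / n

# Hamming distortion on a toy pair: the sequences differ in 1 of 4 places
hamming = {(s, x): 0 if s == x else 1 for s in (0, 1) for x in (0, 1)}
assert average_distortion((0, 1, 1, 0), (0, 1, 0, 0), hamming) == 0.25
```

When (S^n, X^n) is jointly strongly typical, each count N(s, x)/n is within ε/(|S||X|) of p(s, x), so this average is within ε·d_max of E d(S, X) ≤ ∆, which is the content of the bound (23).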
2) Converse: We show that for any sequence of $(\lceil 2^{nR_1} \rceil, \lceil 2^{nR_2} \rceil, D^{(n)}, n)$ codes, i.e., $X^n = f^n(W_1, W_2, S^n)$, $g^n_{1,C'}(Y^n) = (\hat{W}_1, \hat{W}_2, \hat{S}^n)$, and $g^n_{2,C'}(Z^n) = (\hat{W}_2, \hat{S}^n)$, with $\lim_{n \to \infty} P^n_e = 0$ and $\lim_{n \to \infty} D^{(n)} \le \Delta$, the rate pair $(R_1, R_2)$ must satisfy (12) for some $(U, S, X, Y, Z) \in \mathcal{P}(\Delta)$.

Consider a given code of block length $n$. The joint distribution on $\mathcal{W}_1 \times \mathcal{W}_2 \times \mathcal{S}^n \times \mathcal{X}^n \times \mathcal{Y}^n \times \mathcal{Z}^n$ induced by the code is given by
$$p(w_1, w_2, s^n, x^n, y^n, z^n) = \frac{1}{\lceil 2^{nR_1} \rceil \lceil 2^{nR_2} \rceil}\, p(s^n)\, p(x^n \mid w_1, w_2, s^n) \prod_{j=1}^{n} p(y_j \mid x_j, s_j)\, p(z_j \mid y_j),$$
where $p(x^n \mid w_1, w_2, s^n)$ is $1$ if $x^n = f^n(w_1, w_2, s^n)$ and $0$ otherwise. We can bound the rate $R_1$ as follows:
$$\begin{aligned}
nR_1 &\le H(W_1) \stackrel{(a)}{=} H(W_1 \mid W_2, S^n)\\
&= H(W_1 \mid W_2, S^n) - H(W_1 \mid W_2, S^n, Y^n) + H(W_1 \mid W_2, S^n, Y^n)\\
&\stackrel{(b)}{\le} I(W_1; Y^n \mid W_2, S^n) + n\epsilon_n\\
&= \sum_{j=1}^{n} I(W_1; Y_j \mid W_2, S^n, Y^{j-1}) + n\epsilon_n\\
&= \sum_{j=1}^{n} \left[ H(Y_j \mid W_2, S^n, Y^{j-1}) - H(Y_j \mid W_1, W_2, S^n, Y^{j-1}) \right] + n\epsilon_n\\
&\stackrel{(c)}{=} \sum_{j=1}^{n} \left[ H(Y_j \mid W_2, S^n, Y^{j-1}, Z^{j-1}) - H(Y_j \mid W_1, W_2, S^n, Y^{j-1}, Z^{j-1}) \right] + n\epsilon_n\\
&\stackrel{(d)}{\le} \sum_{j=1}^{n} \left[ H(Y_j \mid W_2, S^n, Z^{j-1}) - H(Y_j \mid W_1, W_2, S^n, Y^{j-1}, Z^{j-1}, X^n) \right] + n\epsilon_n\\
&\stackrel{(e)}{=} \sum_{j=1}^{n} \left[ H(Y_j \mid W_2, S^n, Z^{j-1}) - H(Y_j \mid X_j, S_j) \right] + n\epsilon_n\\
&\stackrel{(f)}{=} \sum_{j=1}^{n} \left[ H(Y_j \mid S_j, \tilde{U}_j) - H(Y_j \mid X_j, S_j) \right] + n\epsilon_n\\
&= \sum_{j=1}^{n} I(X_j; Y_j \mid S_j, \tilde{U}_j) + n\epsilon_n, \qquad (36)
\end{aligned}$$
where (a) follows from the fact that $W_1$, $W_2$, and $S^n$ are mutually independent; (b) follows from Fano's inequality, with $\epsilon_n \to 0$ as $n \to \infty$; (c) follows from the Markov chains $Y_j \leftrightarrow (W_2, S^n, Y^{j-1}) \leftrightarrow Z^{j-1}$ and $Y_j \leftrightarrow (W_1, W_2, S^n, Y^{j-1}) \leftrightarrow Z^{j-1}$; (d) follows from $H(Y_j \mid W_2, S^n, Y^{j-1}, Z^{j-1}) \le H(Y_j \mid W_2, S^n, Z^{j-1})$ and the fact that $X^n$ is a deterministic function of $(W_1, W_2, S^n)$;
(e) follows from the memoryless property of the broadcast channel; and (f) follows from the definition $\tilde{U}_j := (W_2, Z^{j-1}, S^{j-1}, S^n_{j+1})$.

We can also bound the rate $R_2$ as follows:
$$\begin{aligned}
nR_2 &\le H(W_2) \stackrel{(a)}{=} H(W_2, S^n) - H(S^n)\\
&\stackrel{(b)}{\le} I(W_2, S^n; Z^n) - H(S^n) + n\epsilon_n\\
&= \sum_{j=1}^{n} \left[ I(W_2, S^n; Z_j \mid Z^{j-1}) - H(S_j \mid S^{j-1}) \right] + n\epsilon_n\\
&\stackrel{(c)}{=} \sum_{j=1}^{n} \left[ H(Z_j \mid Z^{j-1}) - H(Z_j \mid W_2, S^n, Z^{j-1}) - H(S_j) \right] + n\epsilon_n\\
&\stackrel{(d)}{\le} \sum_{j=1}^{n} \left[ H(Z_j) - H(Z_j \mid \tilde{U}_j, S_j) - H(S_j) \right] + n\epsilon_n\\
&= \sum_{j=1}^{n} \left[ I(\tilde{U}_j, S_j; Z_j) - H(S_j) \right] + n\epsilon_n, \qquad (37)
\end{aligned}$$
where (a) follows from the fact that $W_1$, $W_2$, and $S^n$ are mutually independent; (b) follows from Fano's inequality, with $\epsilon_n \to 0$ as $n \to \infty$; (c) follows from the fact that $S^n$ is an i.i.d. random vector; and (d) follows from $H(Z_j \mid Z^{j-1}) \le H(Z_j)$ and the definition $\tilde{U}_j := (W_2, Z^{j-1}, S^{j-1}, S^n_{j+1})$.

We can then write (36) and (37) as
$$R_1 \le I(X; Y \mid Q, S, \tilde{U}) + \epsilon_n, \qquad (38a)$$
$$R_2 \le I(\tilde{U}, S; Z \mid Q) - H(S) + \epsilon_n, \qquad (38b)$$
where $Q$ takes values in the set $\{1, 2, \ldots, n\}$ with equal probability, and the joint probability distribution on $(S, Q, \tilde{U}, X, Y, Z)$ is $p(S = s, Q = q, \tilde{U} = \tilde{u}, X = x)\, p(y \mid x, s)\, p(z \mid y)$, with
$$p(S = s, Q = q, \tilde{U} = \tilde{u}, X = x) = p(s)\, p(q)\, p(\tilde{U}_q = \tilde{u} \mid s, q)\, p(X_q = x \mid s, q, \tilde{u}).$$
Finally, we can write (38) as
$$R_1 \le I(X; Y \mid U, S) + \epsilon_n, \qquad R_2 \le I(U, S; Z) - H(S) + \epsilon_n,$$
where $U := (Q, \tilde{U})$, since $I(\tilde{U}, S; Z \mid Q) \le I(Q, \tilde{U}, S; Z)$.

Given any $\delta > 0$, the associated distortion $D^{(n)}$, for sufficiently large $n$, satisfies
$$\Delta + \delta \ge D^{(n)} = \mathrm{E}\, d(X^n, S^n) = \frac{1}{n} \sum_{j=1}^{n} \sum_{x, s} p(X_j = x, S_j = s)\, d(x, s) = \sum_{x, s} p(X = x, S = s)\, d(x, s) = \mathrm{E}\, d(X, S).$$
As $n \to \infty$ and $\delta \to 0$, $(U, S, X, Y, Z) \in \mathcal{P}(\Delta)$ and $(R_1, R_2) \in \mathcal{C}_{C'}$.
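Steps (c) and (d) of the converse rest on the physical degradedness of the broadcast channel: $Z$ is generated from $Y$ alone through $p(z \mid y)$, so for any $U$ jointly distributed with $Y$ the Markov chain $U \leftrightarrow Y \leftrightarrow Z$ holds and the data-processing inequality gives $I(U; Z) \le I(U; Y)$. A quick numerical sanity check of this inequality, with arbitrary distributions of my own choosing (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    """Entropy in bits of a pmf given as an array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mi(p_ab):
    """Mutual information I(A; B) from a joint pmf matrix p(a, b)."""
    return entropy(p_ab.sum(1)) + entropy(p_ab.sum(0)) - entropy(p_ab.ravel())

# Arbitrary joint p(u, y) and a degradation channel p(z | y) that sees only Y.
p_uy = rng.random((3, 4)); p_uy /= p_uy.sum()
p_z_given_y = rng.random((4, 5)); p_z_given_y /= p_z_given_y.sum(axis=1, keepdims=True)

p_uz = p_uy @ p_z_given_y  # p(u, z) = sum_y p(u, y) p(z | y)
print(mi(p_uz) <= mi(p_uy) + 1e-12)  # data processing: I(U; Z) <= I(U; Y)
```

This is exactly why the better decoder (observing $Y$) can recover everything the worse decoder (observing $Z$) can.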