A New Achievable Rate Region for the Discrete Memoryless Multiple-Access Channel with Noiseless Feedback

Ramji Venkataramanan, Member, IEEE, and S. Sandeep Pradhan, Member, IEEE

Abstract—A new single-letter achievable rate region is proposed for the two-user discrete memoryless multiple-access channel (MAC) with noiseless feedback. The proposed region includes the Cover-Leung rate region [1], and it is shown that the inclusion is strict. The proof uses a block-Markov superposition strategy based on the observation that the messages of the two users are correlated given the feedback. The rates of transmission are too high for each encoder to decode the other's message directly using the feedback, so they transmit correlated information in the next block to learn the message of one another. They then cooperate in the following block to resolve the residual uncertainty of the decoder. The coding scheme may be viewed as a natural generalization of the Cover-Leung scheme with a delay of one extra block and a pair of additional auxiliary random variables. We compute the proposed rate region for two different MACs and compare the results with other known rate regions for the MAC with feedback. Finally, we show how the coding scheme can be extended to obtain larger rate regions with more auxiliary random variables.

Index Terms—Capacity region, feedback, multiple-access channel.

Manuscript received; revised. This work was supported by NSF grants CCF-0448115 (CAREER) and CCF-0915619. The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Seoul, South Korea, June 2009. R. Venkataramanan was with the Department of Electrical Engineering and Computer Science, University of Michigan. He is now with the Department of Electrical Engineering, Yale University, New Haven, CT 06511, USA (e-mail: rvenkata@umich.edu). S. Sandeep Pradhan is with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA (e-mail: pradhanv@eecs.umich.edu). Communicated by M. Gastpar, Associate Editor for Shannon Theory.

I. INTRODUCTION

The two-user discrete memoryless multiple-access channel (MAC) is shown in Figure 1. The channel has two inputs $X_1, X_2$, one output $Y$, and is characterized by a conditional probability law $P_{Y|X_1X_2}$. A pair of transmitters wish to reliably communicate independent information to a receiver by using the channel simultaneously. The transmitters each have access to one channel input, and the receiver has access to the channel output. The transmitters do not communicate with each other. The capacity region for this channel without feedback ($S_1$ and $S_2$ open in Figure 1) was determined by Ahlswede [2] and Liao [3].

[Fig. 1. The multiple-access channel. When $S_1, S_2$ are closed there is feedback to both encoders.]

In a MAC with noiseless feedback, the encoders have access to all previous channel outputs before transmitting the present channel input. Gaarder and Wolf [4] demonstrated that feedback can enlarge the MAC capacity region using the example of a binary erasure MAC. Cover and Leung [1] then established a single-letter achievable rate region for discrete memoryless MACs with feedback. The Cover-Leung (C-L) region was shown to be the feedback capacity region for a class of discrete memoryless MACs [5].
However, the C-L region is smaller than the feedback capacity in general, the white Gaussian MAC being a notable example [6], [7]. The feedback capacity region of the additive white Gaussian MAC was determined in [6] using a Gaussian-specific scheme; this scheme is an extension of the Schalkwijk-Kailath scheme [8] for the point-to-point white Gaussian channel with feedback. The capacity region of the MAC with feedback was characterized by Kramer [9], [10] in terms of directed information. However, this is a 'multi-letter' characterization and is not computable. The existence of a single-letter capacity characterization for the discrete memoryless MAC with feedback remains an open question. A single-letter extension of the C-L region was proposed by Bross and Lapidoth in [11]. Outer bounds to the capacity region of the MAC with noiseless feedback were established in [12] and [13]. In [14], it was shown that the optimal transmission scheme for the MAC with noiseless feedback can be realized as a state machine, with the state at any time being the a posteriori probability distribution of the messages of the two transmitters.

MACs with partial/noisy feedback have also been considered in several papers. Willems [15] showed that the C-L rate region can be achieved even with partial feedback, i.e., feedback to just one encoder. Achievable regions for memoryless MACs with noisy feedback were obtained by Carleial [16] and Willems [17]; outer bounds for this setting were obtained in [18]. Recently, improved achievable rates for the Gaussian MAC with partial or noisy feedback were derived in [19].

The basic idea behind reliable communication over a MAC with feedback is the following. Before communication begins, the two transmitters have independent messages to transmit. Suppose the transmitters use the channel once by sending a pair of channel inputs which are functions of the corresponding messages. Then, conditioned on the channel output, the messages of the two transmitters become statistically correlated. Since the channel output is available at all terminals before the second transmission, the problem now becomes one of transmitting correlated messages over the MAC. As more channel uses are expended, the posterior correlation between the messages increases. This correlation can be exploited to combat interference and channel noise more effectively in subsequent channel uses. The objective is to capture this idea quantitatively using a single-letter information-theoretic characterization.

The Gaarder-Wolf and the C-L schemes exploit feedback in two stages. Each message pair is conveyed to the decoder over two successive blocks of transmission. In the first block, the two encoders transmit messages at rates outside the no-feedback capacity region. At the end of this block, the decoder cannot decode the message pair; however, the rates are low enough for each encoder to decode the message of the other using the feedback. This is possible because each encoder has more information than the decoder. The decoder now forms a list of highly likely pairs of messages. The two encoders can then cooperate and send a common message to resolve the decoder's list in the next block. In the C-L scheme, this procedure is repeated over several blocks, with fresh information superimposed over resolution information in every block. This block-Markov superposition scheme yields a single-letter achievable rate region for the MAC with feedback.
In this scheme, two kinds of communication take place: (i) fresh independent information exchanged between the encoders, and (ii) common resolution information communicated to the receiver. This scheme provides a strict improvement over the no-feedback capacity region. Bross and Lapidoth [11] obtained a single-letter inner bound to the capacity region by constructing a novel coding scheme which uses the C-L scheme as the starting point. In their scheme, the two encoders spend additional time at the end of each block to engage in a two-way exchange, after which they are able to perfectly reconstruct the messages of one another. In the next block, the encoders cooperate to send the common resolution information to the decoder. This coding scheme reduces to the C-L scheme when there is no two-way exchange.

In this paper, we propose a new achievable rate region for the MAC with feedback by taking a different path, while still using the C-L region as the starting point. To get some insight into the proposed approach, consider a pair of transmission rates significantly larger than any rate pair in the no-feedback capacity region, i.e., a rate pair that lies outside even the C-L rate region. Below we describe a three-phase scheme to communicate at these rates.

First Phase: The encoders transmit independent information at the chosen rates over the channel in the first phase, and receive the corresponding block of channel outputs via feedback. The rates are too high for each encoder to correctly decode the message of the other. At the end of this phase, encoder 1 has its own message, and a list of highly likely messages of encoder 2. This list is created by collecting all the $X_2$ sequences that are compatible (jointly typical) with its own channel input and the channel output, i.e., the $(\mathbf{X}_1, \mathbf{Y})$ sequence pair. In other words, the list is a high conditional probability subset of the set of messages of encoder 2; this set is clearly smaller than the original message set of encoder 2. Similarly, encoder 2 can form a list of highly likely messages of encoder 1. Thus at the end of the first phase, the encoders have correlated information. They wish to transmit this information over the next block.

[Fig. 2. First phase: transmission of independent information on a common-output two-way channel with list decoding.]

Conditioned on the channel output sequence, the above lists of the two encoders together can be thought of as a high-probability subset of $\mathcal{M}_1 \times \mathcal{M}_2$, where $\mathcal{M}_1$ and $\mathcal{M}_2$ denote the message sets of the two encoders. A useful way to visualize this is in terms of a bipartite graph: the left vertices of the graph are the encoder 1 messages that are compatible with the $\mathbf{Y}$ sequence, and the right vertices are the encoder 2 messages that are compatible with the $\mathbf{Y}$ sequence. A left vertex and a right vertex are connected by an edge if and only if the corresponding messages are together compatible with the $\mathbf{Y}$ sequence, i.e., the corresponding $(\mathbf{X}_1, \mathbf{X}_2)$ sequence pair is jointly typical with the $\mathbf{Y}$ sequence. This bipartite graph (henceforth called a message graph) captures the decoder's uncertainty about the messages of the two encoders. In summary, the first phase of communication can be thought of as transmission of independent information by two terminals over a common-output two-way channel with list decoding, as shown in Figure 2.
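To make the message-graph construction concrete, here is a minimal Python sketch, entirely our own illustration and not from the paper: the noisy-XOR channel, the tiny codebook sizes, and the simplified typicality threshold are all assumptions chosen for the example. It collects every message pair whose codewords are jointly typical with a received block.

```python
import itertools
import numpy as np

# Toy illustration: build the effective posterior message graph by listing
# all message pairs (i, j) whose codewords are jointly typical with y.
# (The paper's regime has exponentially large codebooks; here M1, M2 are
# tiny, so the graph typically collapses to the single true edge.)
rng = np.random.default_rng(0)
N, M1, M2, p, eps = 2000, 8, 8, 0.1, 0.05

X1 = rng.integers(0, 2, size=(M1, N))   # encoder 1 codebook, i.i.d. Bern(1/2)
X2 = rng.integers(0, 2, size=(M2, N))   # encoder 2 codebook

w1, w2 = 3, 5                           # transmitted messages
noise = (rng.random(N) < p).astype(int)
y = (X1[w1] ^ X2[w2]) ^ noise           # toy channel: Y = X1 xor X2 xor noise

# Single-letter pmf P(x1, x2, y) for this toy channel.
P = {(a, b, c): 0.25 * ((1 - p) if c == (a ^ b) else p)
     for a, b, c in itertools.product((0, 1), repeat=3)}

def jointly_typical(x1, x2, y, eps):
    """Simplified strong-typicality test: every empirical frequency of a
    symbol triple must lie within eps of its true probability."""
    for (a, b, c), prob in P.items():
        freq = np.mean((x1 == a) & (x2 == b) & (y == c))
        if abs(freq - prob) > eps:
            return False
    return True

# Edges of the message graph: all (i, j) compatible with y.
edges = [(i, j) for i in range(M1) for j in range(M2)
         if jointly_typical(X1[i], X2[j], y, eps)]
print("message-graph edges:", edges)    # should contain the true pair (3, 5)
```

With larger message sets the same loop would return many edges per vertex, which is exactly the "thick" graph the later phases must thin out.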
Second Phase: The situation at the end of the first phase is as if a random edge is picked from the above message graph, with encoder 1 knowing just the left vertex of this edge, and encoder 2 knowing just the right vertex. The two encoders now have to communicate over the channel so that each of them can recover this edge. The channel output block of the previous phase can be thought of as common side information observed by all terminals. This second phase of communication can be thought of as two terminals transmitting correlated information over a common-output two-way channel with common side information, as shown in Figure 3. We note that the common side information is 'source state' rather than 'channel state': the output block of the previous phase is correlated with the messages (the source of information) of the current phase. The channel behavior in the second phase does not depend on the common side information since the channel is assumed to be memoryless.

[Fig. 3. Second phase: transmission of correlated information with common side information Z on a common-output two-way channel. Z is the channel output of phase one.]

[Fig. 4. Third phase: transmission of information with common side information V on a point-to-point channel. V is the channel output of phases one and two.]

One approach to this communication problem is a strategy based on separate source-channel coding: first perform distributed compression of the correlated messages (conditioned on the common side information) to produce two nearly independent indices, then transmit this pair of indices using a two-way channel code. This strategy of separate source and channel coding is not optimal in general. A more efficient way is to transmit jointly: each encoder maps its message and the side information directly to the channel input. By doing this, the two encoders recover the messages of each other at the end of the second phase. In other words, conditioned on the channel output blocks of the two phases, the messages of the two encoders become perfectly correlated with high probability. The decoder, however, still cannot recover these messages and has a list of highly likely message pairs.

Third Phase: In the final phase of communication, the encoders wish to send a common message over the channel to the decoder so that its list of highly likely message pairs is disambiguated. This is shown in Figure 4. This phase can be thought of as transmission of a message over a point-to-point channel by an encoder to a decoder, with both terminals having common side information (the channel output blocks of the previous two phases) that is statistically correlated with the message. As before, the channel behavior in this phase is independent of this side information owing to the memoryless nature of the channel. For this phase, separate source and channel coding is optimal.

Having gone through the basic idea, let us consider some of the issues involved in obtaining a single-letter characterization of the performance of such a system. Suppose one uses a random coding procedure for the first phase based on single-letter product distributions on the channel inputs. Then the message graph obtained at the end of this phase is a random subset of the conditionally jointly typical set of channel inputs given the channel output.
Due to the law of large numbers, with high probability, this message graph is nearly semi-regular [20], i.e., the degrees of the left vertices are nearly equal, and the degrees of the right vertices are nearly equal.

Transmission of correlated sources and correlated message graphs over the MAC has been studied in [21] and [22], respectively. In the former, the correlated information is modeled as a pair of memoryless correlated sources with a single-letter joint probability distribution. Unlike the model in [21], the statistical correlation of the messages at the beginning of the second phase cannot be captured by a single-letter probability distribution; rather, the correlation is captured by a message graph that is a random subset of a conditionally typical set. In other words, the random edges in the message graph do not exhibit a memoryless-source-like behavior. In [22], the correlation of the messages is modeled as a sequence of random edges from a sequence of nearly semi-regular bipartite graphs of increasing size. Inspired by the approaches of both [21] and [22], for the two-way communication in the second phase, we will construct a joint source-channel coding scheme that takes advantage of the common side information.

At the beginning of the third phase, the uncertainty list of the decoder consists of the likely message pairs conditioned on the channel outputs of the previous two blocks. Due to the law of large numbers, each message pair in this list is nearly equally likely to be the one transmitted by the encoders in the first phase. This leads to a simple coding strategy for the third phase: a one-to-one mapping that maps the message pairs in the list to an index set, followed by channel coding to transmit the index over a point-to-point channel.

Finally, we superimpose the three phases to obtain a new block-Markov superposition coding scheme. Fresh information enters in each block and is resolved over the next two blocks. This scheme dictates the joint distributions we may choose for coding. It turns out that there is one more hurdle to cross before we obtain a single-letter characterization: we need to ensure the stationarity of the coding scheme. Recall that in the second phase, each encoder generates its channel input based on its own message and the common side information. The channel inputs of the two encoders are correlated, and we need the joint distribution of these correlated inputs to be the same in each block. We ensure this by imposing a condition on the distributions used at the encoders to generate these correlated channel inputs. This leads to stationarity, resulting in a single-letter characterization. We show that this scheme yields a single-letter rate region involving three auxiliary random variables that includes the C-L region, and that the inclusion is strict using two examples.

Looking back, we make a couple of comments. At the beginning of the first phase, it is easy to see that the independent messages of the encoders can be thought of as a random edge in a fully connected bipartite graph. In other words, since each pair of messages is equally likely to be transmitted in the first phase, every left vertex in the message graph is connected to every right vertex. The message graph gets progressively thinner over the three phases, until (with high probability) it reduces to a single edge at the end of the third phase. We note that this thinning of the message graph could be accomplished in four phases or even more.
This results in improved rate regions involving a larger collection of auxiliary random variables.

In the rest of the paper, we consider a formal treatment of the problem. In Section II, we give the required definitions and state the main result of the paper. In Section III, we use bipartite message graphs to explain the main ideas behind the coding scheme quantitatively. In Section IV, we compare the proposed region with others in the literature using a couple of examples. The formal proof of the main theorem is given in Section V. In Section VI, we show how our coding scheme can be extended to obtain larger rate regions with additional auxiliary random variables. Section VII concludes the paper.

Notation: We use uppercase letters to denote random variables, lowercase for their realizations, and boldface notation for random vectors. Unless otherwise stated, all vectors have length $N$; thus $\mathbf{A} \triangleq A^N \triangleq (A_1, \ldots, A_N)$. For any $\alpha$ such that $0 < \alpha < 1$, $\bar{\alpha} \triangleq 1 - \alpha$. Unless otherwise mentioned, logarithms are with base 2, and entropy and mutual information are measured in bits.

II. PRELIMINARIES AND MAIN RESULT

A two-user discrete memoryless MAC is defined by a quadruple $(\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}, P_{Y|X_1X_2})$ of input alphabets $\mathcal{X}_1, \mathcal{X}_2$, output alphabet $\mathcal{Y}$, and a set of probability distributions $P_{Y|X_1X_2}(\cdot|x_1, x_2)$ on $\mathcal{Y}$ for all $x_1 \in \mathcal{X}_1$, $x_2 \in \mathcal{X}_2$. The channel law for $n$ channel uses satisfies, for all $n = 1, 2, \ldots$,
\[
\Pr\big(Y_n = y_n \mid \mathbf{X}_1^n = \mathbf{x}_1, \mathbf{X}_2^n = \mathbf{x}_2, \mathbf{Y}^{n-1} = \mathbf{y}\big) = P_{Y|X_1X_2}(y_n \mid x_{1n}, x_{2n})
\]
for all $y_n \in \mathcal{Y}$, $\mathbf{x}_1 \in \mathcal{X}_1^n$, $\mathbf{x}_2 \in \mathcal{X}_2^n$, and $\mathbf{y} \in \mathcal{Y}^{n-1}$. There is noiseless feedback to both encoders ($S_1$ and $S_2$ are both closed in Figure 1).

Definition 1. An $(N, M_1, M_2)$ transmission system for a given MAC with feedback consists of
1) a sequence of mappings for each encoder:
\[
e_{1n}: \{1, \ldots, M_1\} \times \mathcal{Y}^{n-1} \to \mathcal{X}_1, \quad n = 1, \ldots, N,
\]
\[
e_{2n}: \{1, \ldots, M_2\} \times \mathcal{Y}^{n-1} \to \mathcal{X}_2, \quad n = 1, \ldots, N;
\]
2) a decoder mapping
\[
g: \mathcal{Y}^N \to \{1, \ldots, M_1\} \times \{1, \ldots, M_2\}.
\]

We assume that the messages $(W_1, W_2)$ are drawn uniformly from the set $\{1, \ldots, M_1\} \times \{1, \ldots, M_2\}$. The channel input of encoder $i$ at time $n$ is given by $X_{in} = e_{in}(W_i, Y^{n-1})$ for $n = 1, \ldots, N$ and $i = 1, 2$. The average error probability of the above transmission system is
\[
\tau = \frac{1}{M_1 M_2} \sum_{w_1=1}^{M_1} \sum_{w_2=1}^{M_2} \Pr\big(g(\mathbf{Y}) \neq (w_1, w_2) \mid (W_1, W_2) = (w_1, w_2)\big).
\]

Definition 2. A rate pair $(R_1, R_2)$ is said to be achievable for a given discrete memoryless MAC with feedback if for all $\epsilon > 0$ there exists an $N(\epsilon)$ such that for all $N > N(\epsilon)$ there exists an $(N, M_1, M_2)$ transmission system that satisfies
\[
\frac{1}{N} \log M_1 \geq R_1 - \epsilon, \quad \frac{1}{N} \log M_2 \geq R_2 - \epsilon, \quad \tau \leq \epsilon.
\]
The set of all achievable rate pairs is the capacity region with feedback.

The following theorem is the main result of this paper.

Definition 3. For a given MAC $(\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}, P_{Y|X_1X_2})$, define $\mathcal{P}$ as the set of all distributions $P$ on $\mathcal{U} \times \mathcal{A} \times \mathcal{B} \times \mathcal{X}_1 \times \mathcal{X}_2 \times \mathcal{Y}$ of the form
\[
P_U\, P_{AB}\, P_{X_1|UA}\, P_{X_2|UB}\, P_{Y|X_1X_2}, \tag{1}
\]
where $\mathcal{U}$, $\mathcal{A}$ and $\mathcal{B}$ are arbitrary finite sets. Consider two sets of random variables $(U, A, B, X_1, X_2, Y)$ and $(\tilde{U}, \tilde{A}, \tilde{B}, \tilde{X}_1, \tilde{X}_2, \tilde{Y})$, each having the above distribution $P$. For conciseness, we often refer to the collection $(U, A, B, Y)$ as $S$, $(\tilde{U}, \tilde{A}, \tilde{B}, \tilde{Y})$ as $\tilde{S}$, and $\mathcal{U} \times \mathcal{A} \times \mathcal{B} \times \mathcal{Y}$ as $\mathcal{S}$.
Hence $P_{SX_1X_2} = P_{\tilde{S}\tilde{X}_1\tilde{X}_2} = P$. Define $\mathcal{Q}$ as the set of pairs of conditional distributions $(Q_{A|\tilde{S}\tilde{X}_1}, Q_{B|\tilde{S}\tilde{X}_2})$ that satisfy the following consistency condition:
\[
\sum_{(\tilde{s}, \tilde{x}_1, \tilde{x}_2) \in \mathcal{S} \times \mathcal{X}_1 \times \mathcal{X}_2} P_{\tilde{S}\tilde{X}_1\tilde{X}_2}(\tilde{s}, \tilde{x}_1, \tilde{x}_2)\, Q_{A|\tilde{S}\tilde{X}_1}(a \mid \tilde{s}, \tilde{x}_1)\, Q_{B|\tilde{S}\tilde{X}_2}(b \mid \tilde{s}, \tilde{x}_2) = P_{AB}(a, b), \quad \forall (a, b) \in \mathcal{A} \times \mathcal{B}. \tag{2}
\]
Then, for any $(Q_{A|\tilde{S}\tilde{X}_1}, Q_{B|\tilde{S}\tilde{X}_2}) \in \mathcal{Q}$, the joint distribution of the two sets of random variables, $(\tilde{S}, \tilde{X}_1, \tilde{X}_2)$ and $(S, X_1, X_2)$, is given by
\[
P_{\tilde{S}\tilde{X}_1\tilde{X}_2}\, Q_{A|\tilde{S}\tilde{X}_1}\, Q_{B|\tilde{S}\tilde{X}_2}\, P_{UX_1X_2Y|AB}. \tag{3}
\]

Theorem 1. For a MAC $(\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}, P_{Y|X_1X_2})$, for any distribution $P$ from $\mathcal{P}$ and a pair of conditional distributions $(Q_{A|\tilde{S}\tilde{X}_1}, Q_{B|\tilde{S}\tilde{X}_2})$ from $\mathcal{Q}$, the following rate region is achievable:
\[
R_1 \leq I(X_1; Y \mid X_2 B U \tilde{S}\tilde{X}_2) - \big[ I(A; X_2 \mid Y B U \tilde{S}\tilde{X}_2) - I(U; Y \mid \tilde{U}\tilde{Y}) \big]^+,
\]
\[
R_2 \leq I(X_2; Y \mid X_1 A U \tilde{S}\tilde{X}_1) - \big[ I(B; X_1 \mid Y A U \tilde{S}\tilde{X}_1) - I(U; Y \mid \tilde{U}\tilde{Y}) \big]^+,
\]
\[
R_1 + R_2 \leq I(X_1 X_2; Y \mid U \tilde{S}) + I(U; Y \mid \tilde{U}\tilde{Y}). \tag{4}
\]
In the above, we have used $x^+$ to denote $\max(0, x)$. If we set $A = B = \phi$, we obtain the Cover-Leung region, specified by (5)-(7) in the next section.

Remark: The rate region of Theorem 1 is convex.
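Since membership in $\mathcal{Q}$ is just the finite linear constraint (2), it is easy to check numerically. The following Python sketch is our own illustration: the alphabet sizes and all distributions are arbitrary placeholders, and it simply evaluates the left-hand side of (2) for a candidate pair of conditionals.

```python
import numpy as np

# Numerical check of the consistency condition (2). Alphabet sizes and all
# distributions below are arbitrary placeholders chosen for illustration.
rng = np.random.default_rng(1)
nS, nX1, nX2, nA, nB = 3, 2, 2, 2, 2

# Joint pmf P_{S~ X1~ X2~} (any pmf works for this check).
P_sx = rng.random((nS, nX1, nX2))
P_sx /= P_sx.sum()

# Candidate conditionals Q_{A|S~,X1~} and Q_{B|S~,X2~}, each normalized
# over the last axis (the realization of A or B).
Q_a = rng.random((nS, nX1, nA)); Q_a /= Q_a.sum(axis=-1, keepdims=True)
Q_b = rng.random((nS, nX2, nB)); Q_b /= Q_b.sum(axis=-1, keepdims=True)

# Left-hand side of (2): sum_{s,x1,x2} P(s,x1,x2) Q(a|s,x1) Q(b|s,x2).
lhs = np.einsum('sxz,sxa,szb->ab', P_sx, Q_a, Q_b)
assert np.isclose(lhs.sum(), 1.0)

# (Q_a, Q_b) lies in the set Q of Definition 3 iff lhs coincides with the
# marginal P_AB chosen in (1). Here we read off the induced marginal:
P_ab = lhs
print("induced P_AB =\n", P_ab)
print("consistent:", np.allclose(lhs, P_ab))   # trivially True for this P_AB
```

In practice one would fix $P_{AB}$ first and search for conditionals whose induced marginal matches it; the check itself is a single tensor contraction.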
III. THE CODING SCHEME

In this section, we give a sketch of the proof of the coding theorem. The discussion here is informal; the formal proof of the theorem is given in Section V. As we have seen in Section I, to visualize the ideas behind the coding scheme, it is useful to represent the messages of the two encoders in terms of a bipartite graph. Let us suppose that the two encoders wish to transmit independent information at rates $R_1$ and $R_2$, respectively, and use the channel $N$ times. Before transmission begins, the message graph is a fully connected bipartite graph with $2^{NR_1}$ left vertices and $2^{NR_2}$ right vertices. This graph is shown in Figure 5(a), where each left vertex denotes a message of encoder 1 and each right vertex represents a message of encoder 2. An edge connecting two vertices represents a message pair that has non-zero probability.

[Fig. 5. Decoder's message graph for the C-L scheme: (a) before transmission; (b) when each encoder can decode the other's message upon receiving the block output Y; (c) when the encoders cannot decode from the output Y.]

We shall first review the C-L scheme in the framework of message graphs, and then extend the ideas to develop our coding scheme.

A. The Cover-Leung Scheme

Fact 1 (Cover-Leung (C-L) region [1]). Consider a joint distribution of the form $P_{UX_1X_2Y} = P_U P_{X_1|U} P_{X_2|U} P_{Y|X_1X_2}$, where $P_{Y|X_1X_2}$ is fixed by the channel and $U$ is a discrete random variable with cardinality $\min\{|\mathcal{X}_1| \cdot |\mathcal{X}_2| + 1, |\mathcal{Y}| + 2\}$. Then the following rate pairs $(R_1, R_2)$ are achievable:
\[
R_1 < I(X_1; Y \mid X_2 U), \tag{5}
\]
\[
R_2 < I(X_2; Y \mid X_1 U), \tag{6}
\]
\[
R_1 + R_2 < I(X_1 X_2; Y). \tag{7}
\]

In this scheme, there are $L$ blocks of transmission, with a fresh pair of messages in each block. Let $(W_{1l}, W_{2l})$, $1 \leq l < L$, denote the message pair for block $l$, drawn from sets of size $2^{NR_1}$ and $2^{NR_2}$, respectively. The codebooks of the two encoders for each block are drawn i.i.d. according to distributions $P_{X_1|U}$ and $P_{X_2|U}$, respectively, where $U$ is an auxiliary random variable known to both transmitters. Let $(\mathbf{X}_{1l}, \mathbf{X}_{2l})$ denote the codewords corresponding to the message pair. $(W_{1l}, W_{2l})$ (or equivalently, $(\mathbf{X}_{1l}, \mathbf{X}_{2l})$) corresponds to a random edge in the graph of Figure 5(a). After the decoder receives the output $\mathbf{Y}_l$, the message graph conditioned on the channel output (the posterior message graph) for block $l$ is the set of all message pairs $(W_{1l}, W_{2l})$ that could have occurred given $\mathbf{Y}_l$. We can define a high-probability subset of the posterior message graph, which we call the effective posterior message graph, as follows. Let $\mathcal{L}_l$ be the set of all message pairs $(i, j)$ such that $(\mathbf{X}_{1l}(i), \mathbf{X}_{2l}(j), \mathbf{Y}_l)$ are jointly typical. The edges of the effective posterior message graph are the message pairs contained in $\mathcal{L}_l$.

If the rate pair $(R_1, R_2)$ lies outside the no-feedback capacity region, the decoder cannot decode $(W_{1l}, W_{2l})$ from the output $\mathbf{Y}_l$. Owing to feedback, both encoders know $\mathbf{Y}_l$ at the end of block $l$. If $R_1$ and $R_2$ satisfy (5) and (6), it can be shown that, using the feedback, each encoder can correctly decode the message of the other with high probability. In other words, each edge of the effective posterior message graph is uniquely determined by knowing either the left vertex or the right vertex. Thus, upon receiving $\mathbf{Y}_l$, the effective posterior message graph at the decoder has the structure shown in Figure 5(b). The number of edges in this graph is approximately $2^{N(R_1 + R_2 - I(X_1X_2;Y|U))}$. The two encoders cooperate to resolve this decoder uncertainty using a common codebook of $U$ sequences. This codebook has size $2^{NR_0}$, with each codeword symbol chosen i.i.d. according to $P_U$. Each codeword indexes an edge in the message graph of Figure 5(b). Since both encoders know the random edge $(W_{1l}, W_{2l})$, they pick the appropriate codeword from this codebook and set it as $\mathbf{U}_{l+1}$. $\mathbf{U}_{l+1}$ uniquely specifies the edge in the graph if the codebook size is greater than the number of edges in the graph of Figure 5(b). This happens if
\[
R_0 > R_1 + R_2 - I(X_1X_2; Y \mid U). \tag{8}
\]
The codewords $\mathbf{X}_{1(l+1)}, \mathbf{X}_{2(l+1)}$ carry fresh messages for block $(l+1)$, and are picked conditioned on $\mathbf{U}_{l+1}$ according to $P_{X_1|U}$ and $P_{X_2|U}$, respectively. Thus in each block, fresh information is superimposed on resolution information for the previous block. The decoder can decode $\mathbf{U}_{l+1}$ from $\mathbf{Y}_{l+1}$ if the rate $R_0$ of the $U$-codebook satisfies
\[
R_0 < I(U; Y). \tag{9}
\]
Combining (8) and (9), we obtain the final constraint (7) of the C-L rate region.

B. Proposed Coding Scheme

Suppose that the rate pair $(R_1, R_2)$ lies outside the C-L region. Then at the end of each block $l$, the encoders cannot decode the message of one another. The effective posterior message graph at the decoder on receiving $\mathbf{Y}_l$ now looks like Figure 5(c): with high probability, each vertex no longer has degree one. The degree of each left vertex $\mathbf{X}_{1l}$ is the number of codewords $\mathbf{X}_{2l}$ that are jointly typical with $(\mathbf{X}_{1l}, \mathbf{Y}_l)$. This number is approximately $2^{N(R_2 - I(X_2;Y|X_1U))}$. Similarly, the degree of each right vertex is approximately $2^{N(R_1 - I(X_1;Y|X_2U))}$.

[Fig. 6. Message graph for the pair $(W_{1l}, W_{2l})$, the transmitted message pair shown in boldface: (a) after receiving $\mathbf{Y}_l$; (b) after receiving $\mathbf{Y}_{l+1}$.]
The number of left vertices is approximately $2^{N(R_1 - I(X_1;Y))}$ and the number of right vertices is approximately $2^{N(R_2 - I(X_2;Y))}$. This graph is nearly semi-regular. Moreover, since the channel output is a random sequence, this graph is a random subset of the conditionally typical set of $(\mathbf{X}_1, \mathbf{X}_2)$ given $(\mathbf{Y}, \mathbf{U})$. Clearly, the uncertainty of the decoder about $(W_{1l}, W_{2l})$ now cannot be resolved with just a common message, since the two encoders cannot agree on the edge in the effective posterior message graph. Of course, conditioned on $\mathbf{Y}_l$, the messages are correlated rather than independent; in other words, the effective posterior message graph conditioned on $\mathbf{Y}_l$ in Figure 5(c) has left- and right-vertex degrees whose exponents are strictly smaller than $R_2$ and $R_1$, respectively. The objective now is to efficiently transmit the random edge $(W_{1l}, W_{2l})$ from the effective message graph of Figure 5(c).

Generate a sequence $\mathbf{A}$ for each jointly typical sequence pair $(\mathbf{X}_1, \mathbf{Y})$, with symbols generated i.i.d. from the distribution $P_{A|X_1Y}$. Similarly, generate a sequence $\mathbf{B}$ for each jointly typical pair $(\mathbf{X}_2, \mathbf{Y})$, according to the distribution $P_{B|X_2Y}$. Recall that $(\mathbf{X}_{1l}, \mathbf{X}_{2l})$ denotes the codeword pair transmitted in block $l$. Encoder 1 sets $\mathbf{A}_{l+1}$ equal to the $\mathbf{A}$-sequence corresponding to $(\mathbf{X}_{1l}, \mathbf{Y}_l)$, and encoder 2 sets $\mathbf{B}_{l+1}$ equal to the $\mathbf{B}$-sequence corresponding to $(\mathbf{X}_{2l}, \mathbf{Y}_l)$. This is shown in Figure 6(a). The codeword $\mathbf{X}_{1(l+1)}$, which carries a fresh message for block $(l+1)$, is chosen conditioned on $\mathbf{A}_{l+1}$. Similarly, $\mathbf{X}_{2(l+1)}$ is chosen conditioned on $\mathbf{B}_{l+1}$. We note that $\mathbf{A}_{l+1}$ and $\mathbf{B}_{l+1}$ are correlated since they are chosen conditioned on $(\mathbf{X}_{1l}, \mathbf{Y}_l)$ and $(\mathbf{X}_{2l}, \mathbf{Y}_l)$, respectively.

At the end of block $(l+1)$, the decoder and the two encoders receive $\mathbf{Y}_{l+1}$. Encoder 1 decodes $\mathbf{B}_{l+1}$ from $(\mathbf{Y}_{l+1}, \mathbf{A}_{l+1}, \mathbf{X}_{1l})$. Similarly, encoder 2 decodes $\mathbf{A}_{l+1}$ from $(\mathbf{Y}_{l+1}, \mathbf{B}_{l+1}, \mathbf{X}_{2l})$. Assuming this is done correctly, both encoders now know the message pair $(W_{1l}, W_{2l})$, but the decoder does not, since it may not be able to decode $(\mathbf{A}_{l+1}, \mathbf{B}_{l+1})$ from $\mathbf{Y}_{l+1}$. The effective posterior message graph at the decoder on receiving $\mathbf{Y}_{l+1}$ then has the form shown in Figure 6(b). Since both encoders now know the edge in the effective posterior message graph conditioned on $(\mathbf{Y}_l, \mathbf{Y}_{l+1})$ corresponding to $(W_{1l}, W_{2l})$, they can cooperate to resolve the decoder's uncertainty using a common sequence $\mathbf{U}_{l+2}$ in block $(l+2)$.

To summarize, the codewords $(\mathbf{X}_{1l}, \mathbf{X}_{2l})$, which carry the fresh messages for block $l$, can be decoded by neither the encoders nor the decoder upon receiving $\mathbf{Y}_l$. So the encoders send correlated information $(\mathbf{A}_{l+1}, \mathbf{B}_{l+1})$ in block $(l+1)$ to help each other decode $(W_{1l}, W_{2l})$. They then cooperate to send $\mathbf{U}_{l+2}$, so that the decoder can decode $(W_{1l}, W_{2l})$ at the end of block $(l+2)$.

In the 'one-step' C-L coding scheme, the rates $(R_1, R_2)$ are low enough that each encoder can decode the message of the other at the end of the same block. In other words, the fully connected graph of Figure 5(a) is thinned to the degree-1 graph of Figure 5(b) in one block. In our 'two-step' strategy, the thinning of the fully connected graph to the degree-1 graph takes place over two blocks, as shown in Figure 6.
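To put rough numbers on this thinning, the sketch below, which is our own illustration with a made-up C-L-type joint pmf and illustrative rates, evaluates the conditional mutual informations that control the vertex degrees of the graph in Figure 5(c).

```python
import numpy as np

# Degree exponents of the effective posterior message graph, computed from
# a made-up joint pmf of the C-L form P_U P_{X1|U} P_{X2|U} P_{Y|X1X2}.
# All distributions and rates here are illustrative assumptions.
rng = np.random.default_rng(2)
nU = nX1 = nX2 = nY = 2

def cond_pmf(*shape):
    p = rng.random(shape)
    return p / p.sum(axis=-1, keepdims=True)

P_u = cond_pmf(nU)                 # P_U
P_x1_u = cond_pmf(nU, nX1)         # P_{X1|U}
P_x2_u = cond_pmf(nU, nX2)         # P_{X2|U}
W = cond_pmf(nX1, nX2, nY)         # channel P_{Y|X1X2}

# Joint pmf P(u, x1, x2, y); axes: u=0, x1=1, x2=2, y=3.
P = np.einsum('u,ua,ub,aby->uaby', P_u, P_x1_u, P_x2_u, W)

def H(p):
    """Entropy (bits) of a pmf given as an array of any shape."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# I(X2;Y|X1,U) = H(U,X1,X2) + H(U,X1,Y) - H(U,X1) - H(U,X1,X2,Y), etc.
I_x2_y = H(P.sum(axis=3)) + H(P.sum(axis=2)) - H(P.sum(axis=(2, 3))) - H(P)
I_x1_y = H(P.sum(axis=3)) + H(P.sum(axis=1)) - H(P.sum(axis=(1, 3))) - H(P)

R1, R2 = 0.8, 0.8   # illustrative rates (bits per channel use)
print(f"left-vertex degree exponent  R2 - I(X2;Y|X1,U) = {R2 - I_x2_y:.3f}")
print(f"right-vertex degree exponent R1 - I(X1;Y|X2,U) = {R1 - I_x1_y:.3f}")
```

A positive exponent means each vertex retains exponentially many candidate neighbors after one block, which is exactly the situation the second step of the scheme must resolve.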
1) Stationarity of the coding scheme: The scheme proposed above has a shortcoming: it is not stationary and hence does not yield a single-letter rate region. Recall that for any block $l$, $\mathbf{A}_l$ and $\mathbf{B}_l$ are produced conditioned on $\mathbf{Y}_{l-1}$. $\mathbf{Y}_{l-1}$ is produced by the channel based on inputs $(\mathbf{X}_{1(l-1)}, \mathbf{X}_{2(l-1)})$, which in turn depend on $\mathbf{A}_{l-1}$ and $\mathbf{B}_{l-1}$, respectively. Thus we have correlation that propagates across blocks, as shown in Figure 7.

[Fig. 7. Correlation propagates across blocks.]

This implies that the resulting rate region will be a multi-letter characterization that depends on the joint distribution of the variables in all $L$ blocks: $\{(\mathbf{U}_l, \mathbf{A}_l, \mathbf{B}_l, \mathbf{X}_{1l}, \mathbf{X}_{2l}, \mathbf{Y}_l)\}_{l=1}^L$. To obtain a single-letter rate region, we require a stationary distribution of sequences in each block. In other words, we need the random sequences $(\mathbf{U}, \mathbf{A}, \mathbf{B}, \mathbf{X}_1, \mathbf{X}_2, \mathbf{Y})$ to be characterized by the same single-letter product distribution in each block. This will happen if we can ensure that the $\mathbf{A}, \mathbf{B}$ sequences in each block have the same single-letter distribution $P_{AB}$.

The correlation between $\mathbf{A}_{l+1}$ and $\mathbf{B}_{l+1}$ cannot be arbitrary: it is generated using the information available at each encoder at the end of block $l$. At this time, both encoders know $\mathbf{s}_l \triangleq (\mathbf{u}, \mathbf{a}, \mathbf{b}, \mathbf{y})_l$. In addition, encoder 1 also knows $\mathbf{x}_{1l}$, and hence we make it generate $\mathbf{A}_{l+1}$ according to the product distribution $Q^N_{A|\tilde{S}\tilde{X}_1}(\cdot \mid \mathbf{s}_l, \mathbf{x}_{1l})$. Similarly, we make encoder 2 generate $\mathbf{B}_{l+1}$ according to $Q^N_{B|\tilde{S}\tilde{X}_2}(\cdot \mid \mathbf{s}_l, \mathbf{x}_{2l})$. If the pair $(Q_{A|\tilde{S}\tilde{X}_1}, Q_{B|\tilde{S}\tilde{X}_2}) \in \mathcal{Q}$, then equation (2) ensures that the pair $(\mathbf{A}_{l+1}, \mathbf{B}_{l+1})$ corresponding to $(W_{1l}, W_{2l})$ belongs to the typical set $T(P_{AB})$ with high probability. This ensures stationarity of the coding scheme. Our block-Markov coding scheme, with conditions imposed to ensure stationarity, is similar in spirit to that of Han for two-way channels [23].

Finally, a couple of comments on the chosen input distribution in (1). In block $(l+1)$, the encoders generate $\mathbf{A}_{l+1}$ and $\mathbf{B}_{l+1}$ independently based on their own messages for block $l$ and the common side information $\mathbf{S}_l = (\mathbf{U}, \mathbf{A}, \mathbf{B}, \mathbf{Y})_l$. Why do they not use $\mathbf{S}_{l-1}, \mathbf{S}_{l-2}, \ldots$ (the side information accumulated from earlier blocks) as well? This is because $(W_{1(l-2)}, W_{2(l-2)})$ is decoded at the decoder at the end of block $l$, and $(W_{1(l-2)}, W_{2(l-2)})$ determines $(\mathbf{A}, \mathbf{B})_{l-1}$. Hence, for block $(l+1)$, $\mathbf{S}_{l-1}, \mathbf{S}_{l-2}, \ldots$ is known at all terminals and is just shared common randomness. Also note that $U$, which carries common information sent by both encoders, is independent of the random variables $(A, B)$. It is sufficient to choose a distribution of the form $P_U P_{AB}$ (rather than $P_{UAB}$). This is because separate source and channel coding is optimal when the encoders send common information over the MAC; joint source-channel coding is needed only for sending correlated information. Hence $\mathbf{A}, \mathbf{B}$ are generated conditioned on the information available at each encoder, but $\mathbf{U}$ is generated independently.

We remark that our scheme can be extended as follows. The above coding scheme thins the fully connected graph to the degree-one graph over two blocks. Instead, we could do it over three blocks, going through two intermediate stages of progressively thinner (more correlated) graphs before obtaining the degree-one graph. This would yield a potentially larger rate region, albeit with extra auxiliary random variables. This is discussed in Section VI.
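As a numerical sanity check of this stationarity mechanism, one can simulate the per-symbol generation directly: draw $(\tilde{s}, \tilde{x}_1, \tilde{x}_2)$ from the joint pmf, generate $A$ and $B$ independently through the two conditionals, and confirm that the empirical distribution of $(A, B)$ matches the single-letter marginal given by the left-hand side of (2). The sketch below is our own, with placeholder distributions.

```python
import numpy as np

# Monte Carlo check that generating A ~ Q_a(.|s,x1) and B ~ Q_b(.|s,x2)
# per symbol reproduces the single-letter marginal P_AB from (2).
# All distributions are placeholder choices for illustration.
rng = np.random.default_rng(3)
nS, nX1, nX2, nA, nB = 3, 2, 2, 2, 2

P_sx = rng.random((nS, nX1, nX2)); P_sx /= P_sx.sum()
Q_a = rng.random((nS, nX1, nA)); Q_a /= Q_a.sum(axis=-1, keepdims=True)
Q_b = rng.random((nS, nX2, nB)); Q_b /= Q_b.sum(axis=-1, keepdims=True)

# Exact marginal of (A, B) induced through the left-hand side of (2).
P_ab = np.einsum('sxz,sxa,szb->ab', P_sx, Q_a, Q_b)

# Simulate: draw (s, x1, x2) ~ P, then A ~ Q_a(.|s,x1), B ~ Q_b(.|s,x2).
T = 200_000
flat = rng.choice(nS * nX1 * nX2, size=T, p=P_sx.ravel())
s, x1, x2 = np.unravel_index(flat, (nS, nX1, nX2))
u = rng.random((T, 2))
a = (u[:, 0][:, None] > np.cumsum(Q_a[s, x1], axis=1)).sum(axis=1)
b = (u[:, 1][:, None] > np.cumsum(Q_b[s, x2], axis=1)).sum(axis=1)

emp = np.zeros((nA, nB))
np.add.at(emp, (a, b), 1.0 / T)
print("max |empirical - P_AB| =", np.abs(emp - P_ab).max())  # O(1/sqrt(T))
```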
IV. COMPARISONS

In this section, the rate region of Theorem 1 is compared with the other known regions for the memoryless MAC with noiseless feedback. We first consider the white Gaussian MAC. Since its feedback capacity is known [6], this channel provides a benchmark against which to compare the rate region of Theorem 1. We see that our rate region yields rates strictly better than the C-L region, but smaller than the feedback capacity. Ozarow's capacity-achieving scheme in [6] is specific to the Gaussian case and does not extend to other MACs. The rate regions of Kramer [9] and Bross and Lapidoth (B-L) [11] extend the C-L region for a discrete memoryless MAC with feedback. We compare our scheme with these in Sections IV-B and IV-C. We mention that all the calculations in this section are done using the rate constraints in (45), an equivalent representation of the rate constraints in Theorem 1. This equivalence is established by equations (46)-(48) in Section V.

A. Additive White Gaussian MAC

Consider the AWGN MAC with power constraint $P$ on each of the inputs. This channel, with $\mathcal{X}_1 = \mathcal{X}_2 = \mathcal{Y} = \mathbb{R}$, is defined by
\[
Y = X_1 + X_2 + N, \tag{10}
\]
where $N$ is a Gaussian noise random variable with mean $0$ and variance $\sigma^2$ that is independent of $X_1$ and $X_2$. The inputs $\mathbf{x}_1$ and $\mathbf{x}_2$ for each block satisfy $\frac{1}{N}\sum_{n=1}^N x_{1n}^2 \leq P$ and $\frac{1}{N}\sum_{n=1}^N x_{2n}^2 \leq P$. For this channel, the equal-rate point on the boundary of the C-L region [1] is $(R_{CL}, R_{CL})$, where
\[
R_{CL} = \frac{1}{2} \log_2\left( 2\sqrt{1 + \frac{P}{\sigma^2}} - 1 \right). \tag{11}
\]
The achievable rate region of Theorem 1 for the discrete memoryless case can be extended to the AWGN MAC using a similar proof. For the joint distribution $P_{UABX_1X_2Y}$ in (1), define $U \sim \mathcal{N}(0, 1)$ and $(A, B)$ jointly Gaussian with mean zero and covariance matrix
\[
K_{AB} = \begin{pmatrix} 1 & \lambda \\ \lambda & 1 \end{pmatrix}. \tag{12}
\]
The input distributions $P_{X_1|UA}$ and $P_{X_2|UB}$ are defined by
\[
X_1 = \sqrt{\alpha P}\, I_{X_1} + \sqrt{\beta P}\, A + \sqrt{\overline{\alpha+\beta}\, P}\, U, \qquad X_2 = \sqrt{\alpha P}\, I_{X_2} + \sqrt{\beta P}\, B + \sqrt{\overline{\alpha+\beta}\, P}\, U, \tag{13}
\]
where $I_{X_1}, I_{X_2}$ are independent $\mathcal{N}(0,1)$ random variables, $\alpha, \beta > 0$, and $\alpha + \beta \leq 1$. $I_{X_1}$ and $I_{X_2}$ represent the fresh information, and $U$ is the resolution information for the decoder, sent cooperatively by the encoders. $A$ and $B$ represent the information to be decoded using feedback by encoders 2 and 1, respectively. Recall that $\tilde{S} \triangleq (\tilde{U}, \tilde{A}, \tilde{B}, \tilde{Y})$. The distributions $Q_{A|\tilde{S}\tilde{X}_1}$ and $Q_{B|\tilde{S}\tilde{X}_2}$ used to generate $A$ and $B$ at the encoders are defined as
\[
Q_{A|\tilde{S}\tilde{X}_1}: \quad A = k_1 \frac{\tilde{X}_1 - \sqrt{\overline{\alpha+\beta}\,P}\,\tilde{U} - \sqrt{\beta P}\,\tilde{A}}{\sqrt{\alpha P}} + k_2 f(\tilde{U}, \tilde{A}, \tilde{B}, \tilde{Y}),
\]
\[
Q_{B|\tilde{S}\tilde{X}_2}: \quad B = -k_1 \frac{\tilde{X}_2 - \sqrt{\overline{\alpha+\beta}\,P}\,\tilde{U} - \sqrt{\beta P}\,\tilde{B}}{\sqrt{\alpha P}} - k_2 f(\tilde{U}, \tilde{A}, \tilde{B}, \tilde{Y}), \tag{14}
\]
where $k_1, k_2 \in \mathbb{R}$ and
\[
f(Y, A, B, U) \triangleq \frac{Y - \sqrt{\beta P}\,A - \sqrt{\beta P}\,B - 2\sqrt{\overline{\alpha+\beta}\,P}\,U}{\sqrt{2\alpha P + \sigma^2}}. \tag{15}
\]
It can be verified that this choice of $(Q_{A|\tilde{S}\tilde{X}_1}, Q_{B|\tilde{S}\tilde{X}_2})$ satisfies the consistency condition (2) (required for Theorem 1) if the following equations are satisfied:
\[
E[A^2] = E[B^2] = 1, \quad E[AB] = \lambda. \tag{16}
\]
Using (14) and (15), the conditions in (16) become
\[
1 = E[A^2] = k_1^2 + k_2^2 + 2 k_1 k_2 \sqrt{\frac{\alpha P}{2\alpha P + \sigma^2}}, \tag{17}
\]
\[
\lambda = E[AB] = -k_2^2 - 2 k_1 k_2 \sqrt{\frac{\alpha P}{2\alpha P + \sigma^2}}. \tag{18}
\]
Adding (17) and (18), we get $k_1^2 = 1 + \lambda$. Substituting $k_1 = \pm\sqrt{1+\lambda}$ in (17) yields a quadratic equation that can be solved to obtain $k_2$.
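This algebra is easy to verify numerically. The short Python sketch below, with parameter values that are our own illustrative choices, solves the quadratic for $k_2$ and checks that both roots satisfy (16); real roots exist exactly when condition (19), stated next, holds.

```python
import numpy as np

# Solve (17)-(18) for (k1, k2) given (alpha, lam, P, sigma2).
# Parameter values are illustrative assumptions.
alpha, lam = 0.5, 0.2
P, sigma2 = 10.0, 1.0

c = np.sqrt(alpha * P / (2 * alpha * P + sigma2))   # coefficient in (17)-(18)
assert lam <= alpha * P / (alpha * P + sigma2), "no real solution (cf. (19))"

k1 = np.sqrt(1 + lam)            # adding (17) and (18) gives k1^2 = 1 + lam
# Substituting into (17): k2^2 + 2*k1*c*k2 + lam = 0.
k2_roots = np.roots([1.0, 2 * k1 * c, lam])
print("k1 =", k1, " k2 candidates:", k2_roots)

# Verify (16): E[A^2] = 1 and E[AB] = lam for each root.
for k2 in k2_roots:
    EA2 = k1**2 + k2**2 + 2 * k1 * k2 * c
    EAB = -(k2**2) - 2 * k1 * k2 * c
    print(f"k2 = {k2:+.4f}:  E[A^2] = {EA2:.4f},  E[AB] = {EAB:.4f}")
```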
The condition for the quadratic to yield a valid (real) solution for $k_2$ is
\[
\lambda \leq \frac{\alpha P}{\alpha P + \sigma^2}. \tag{19}
\]

1) Evaluating the rates: For a valid $(\alpha, \beta, \lambda)$, the achievable rates can be evaluated from Theorem 1 to be
\[
R_1, R_2 < \min\{G, H\}, \qquad R_1 + R_2 < \frac{1}{2} \log\left(1 + \frac{2P}{\sigma^2}\big(1 + \overline{\alpha+\beta} + \lambda\beta\big)\right), \tag{20}
\]
where
\[
G = \frac{1}{2} \log\left(1 + \frac{\alpha P}{\sigma^2} + \frac{\beta P (1+\lambda)}{\alpha P + \sigma^2}\right),
\]
\[
H = \frac{1}{2} \log\left(1 + \frac{\alpha P}{\sigma^2}\right) + \frac{1}{2} \log\left(1 + \frac{4\,\overline{\alpha+\beta}\, P/\sigma^2}{2(\alpha + \beta + \beta\lambda) P/\sigma^2 + 1}\right) + \frac{1}{2} \log\left(1 + \frac{\beta(1+\lambda) P/\sigma^2}{(1 + 2\alpha P/\sigma^2)(1 + \alpha P/\sigma^2)}\right).
\]
For different values of the signal-to-noise ratio $P/\sigma^2$, we (numerically) compute the equal-rate point $(R^*, R^*)$ on the boundary of (20). For various values of $P/\sigma^2$, Table I compares $R^*$ with $R_{CL}$, the equal-rate point of the C-L region given by (11), and with the equal-rate point $R_{FBcap}$ on the boundary of the feedback capacity region [6]. We observe that our equal-rate points represent a significant improvement over the C-L region, and are close to the feedback capacity for large SNR.

TABLE I
COMPARISON OF EQUAL-RATE BOUNDARY POINTS (IN BITS)

  P/σ²     |  0.5     1       5       10      100
  R_CL     |  0.2678  0.4353  0.9815  1.2470  2.1277
  R*       |  0.2753  0.4499  1.0067  1.2709  2.1400
  R_FBcap  |  0.2834  0.4642  1.0241  1.2847  2.1439

B. Comparison with Kramer's Generalization of the Cover-Leung Region

In [9, Sections 5.3-5.4], a multi-letter generalization of the Cover-Leung region was proposed. This characterization is based on directed information, and is given below.

Definition 4. For a triple of $M$-dimensional random vectors $(A^M, B^M, C^M)$ jointly distributed according to $P_{A^M B^M C^M} = \prod_{i=1}^M P_{A_i B_i C_i | A^{i-1} B^{i-1} C^{i-1}}$, we define
\[
I(A^M \to B^M) = \sum_{i=1}^M I(A^i; B_i \mid B^{i-1}), \tag{21}
\]
\[
I(A^M \to B^M \,\|\, C^M) = \sum_{i=1}^M I(A^i; B_i \mid B^{i-1} C^i). \tag{22}
\]
The first quantity above is called the directed information from $A^M$ to $B^M$, and the second quantity is the directed information from $A^M$ to $B^M$ causally conditioned on $C^M$. For any random variable $V$ jointly distributed with these random vectors, the above definitions are extended in the natural way when we condition on $V$:
\[
I(A^M \to B^M \mid V) = \sum_{i=1}^M I(A^i; B_i \mid B^{i-1} V), \tag{23}
\]
\[
I(A^M \to B^M \,\|\, C^M \mid V) = \sum_{i=1}^M I(A^i; B_i \mid B^{i-1} C^i V). \tag{24}
\]

Fact 2 (Generalized C-L region [9]). For any positive integer $M$, consider a joint distribution of the form
\[
P_{U^M X_1^M X_2^M Y^M}(u^M, x_1^M, x_2^M, y^M) = \prod_{i=1}^M P_U(u_i)\, P_{X_{1i}|U X_1^{i-1} Y^{i-1}}(x_{1i} \mid u_i, x_1^{i-1}, y^{i-1})\, P_{X_{2i}|U X_2^{i-1} Y^{i-1}}(x_{2i} \mid u_i, x_2^{i-1}, y^{i-1})\, P_{Y|X_1X_2}(y_i \mid x_{1i}, x_{2i}),
\]
where $P_{Y|X_1X_2}$ is fixed by the channel, and the other distributions can be picked arbitrarily. Then the following rate pairs $(R_1, R_2)$ are achievable over the MAC with noiseless feedback:
\[
R_1 \leq \frac{1}{M} I(X_1^M \to Y^M \,\|\, X_2^M \mid U^M), \quad R_2 \leq \frac{1}{M} I(X_2^M \to Y^M \,\|\, X_1^M \mid U^M), \quad R_1 + R_2 \leq \frac{1}{M} I(X_1^M X_2^M \to Y^M). \tag{25}
\]

We now compare the region of Theorem 1 with the generalized C-L region for $M = 2$. This is a fair comparison because in each of these regions, we have five distributions to pick: $P_U$, and two conditional distributions for each encoder. With $M = 2$, the equal-rate point on the boundary of (25) was computed for a few examples in [9]. For the AWGN MAC with $P/\sigma^2 = 10$, the best equal-rate pair was $R_1 = R_2 = 1.2566$ bits, which is smaller than the rate $1.2709$ bits obtained using Theorem 1 (see Table I).
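Definitions (21)-(24) are straightforward to evaluate for small $M$. The following Python sketch is our own: the joint pmf is an arbitrary placeholder, and it computes $I(A^2 \to B^2)$ and contrasts it with the ordinary mutual information $I(A^2; B^2)$.

```python
import numpy as np

# Directed information I(A^2 -> B^2) = I(A1;B1) + I(A1,A2;B2|B1), eq. (21),
# evaluated for a made-up joint pmf over binary (A1, A2, B1, B2).
rng = np.random.default_rng(4)
P = rng.random((2, 2, 2, 2)); P /= P.sum()   # axes: A1=0, A2=1, B1=2, B2=3

def H(p):
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def MI(P, X, Y, Z=()):
    """I(X; Y | Z) in bits, where X, Y, Z are disjoint tuples of axes of P."""
    keep = lambda axes: P.sum(axis=tuple(i for i in range(P.ndim)
                                         if i not in axes))
    X, Y, Z = tuple(X), tuple(Y), tuple(Z)
    return (H(keep(X + Z)) + H(keep(Y + Z))
            - H(keep(Z)) - H(keep(X + Y + Z)))

directed = MI(P, (0,), (2,)) + MI(P, (0, 1), (3,), (2,))
ordinary = MI(P, (0, 1), (2, 3))
print(f"I(A^2 -> B^2) = {directed:.4f} bits")
print(f"I(A^2 ;  B^2) = {ordinary:.4f} bits")   # directed <= ordinary
```

The inequality in the last comment holds in general, since $I(A^2; B^2)$ differs from (21) only by the extra term $I(A_2; B_1 \mid A_1) \geq 0$.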
Consider the joint distribution of the generalized C-L scheme for $M = 2$:
\[
P_U(u_1)\, P_{X_{11}|U}(x_{11}|u_1)\, P_{X_{21}|U}(x_{21}|u_1)\, P_{Y|X_1X_2}(y_1|x_{11}, x_{21})\, P_U(u_2)\, P_{X_{12}|UX_{11}Y_1}(x_{12}|u_2, x_{11}, y_1)\, P_{X_{22}|UX_{21}Y_1}(x_{22}|u_2, x_{21}, y_1)\, P_{Y|X_1X_2}(y_2|x_{12}, x_{22}).
\]
The generalized C-L scheme uses block-Markov superposition with $L$ blocks of transmission, each block being of length $N$. (Without loss of generality, we will assume that the block length $N$ is even.) At the beginning of each block, to resolve the decoder's residual uncertainty, both encoders agree on the $U$ codeword $(u_1, \ldots, u_N)$, chosen i.i.d. according to $P_U$. Each of the $2^{NR_1}$ codewords of encoder 1 is generated according to the following distribution:
\[
P_{X_{11}|U}(x_{11}|u_1)\, P_{X_{12}|UX_{11}Y_1}(x_{12}|u_2, x_{11}, y_1)\, P_{X_{11}|U}(x_{13}|u_3)\, P_{X_{12}|UX_{11}Y_1}(x_{14}|u_4, x_{13}, y_3) \cdots \tag{26}
\]
In other words, the odd-numbered symbols of the block are chosen conditioned on just $U$ (as in the C-L scheme), while the even-numbered symbols are chosen conditioned on the preceding input symbol and the corresponding output. Equivalently, we can think of the block of length $N$ as being divided into two sub-blocks of length $N/2$, where the first sub-block has symbols chosen i.i.d. according to $P_{X_{11}|U}$, and the symbols of the second sub-block are chosen i.i.d. according to $P_{X_{12}|UX_{11}Y}$, i.e., conditioned on the inputs and outputs of the first sub-block.

We can now establish an analogy between this coding scheme and that of Theorem 1. In Theorem 1, choose $A = (\tilde{X}_1, \tilde{Y})$ and $B = (\tilde{X}_2, \tilde{Y})$. (Recall that $\tilde{\ }$ is used to denote symbols of the previous block.) It can be verified that the consistency condition (2) is trivially satisfied for this choice of $A$ and $B$. With this choice, encoder 1 generates its inputs in each block according to $P_{X_1|U\tilde{X}_1\tilde{Y}}$, and encoder 2 generates its inputs according to $P_{X_2|U\tilde{X}_2\tilde{Y}}$. In particular, note that encoder 1 chooses the channel inputs for the entire block conditioned on the channel outputs and its own inputs of the previous block. In contrast, the generalized C-L scheme uses such a conditional input distribution only for one half of each block (the second sub-block); in the other half, the input symbols are conditionally independent given $U$. Since our coding scheme utilizes the correlation generated by feedback for the entire block, we expect it to yield higher rates. Of course, this comparison was made with the specific choice $A = (\tilde{X}_1, \tilde{Y})$, $B = (\tilde{X}_2, \tilde{Y})$. Other choices of $A$ and $B$ may yield higher rates in Theorem 1; the AWGN MAC in the previous subsection is such an example. We emphasize that this is only a qualitative comparison of the two coding schemes, and we have not formally shown that the generalized C-L region for $M = 2$ is strictly contained in the rate region of Theorem 1 for the above choice of $A$ and $B$.

C. Comparison with the Bross-Lapidoth Region

Bross and Lapidoth (B-L) [11] established a rate region that extends the Cover-Leung region. The B-L scheme uses block-Markov superposition coding. Each block consists of two phases, a MAC phase and a two-way phase, and is transmitted in $(1+\eta)N$ units of time. In the MAC phase of length $N$, the encoders send fresh information for the current block superimposed over resolution information for the previous block. This part of the B-L scheme is identical to the Cover-Leung scheme.
This is followed by the two-way phase of length $\eta N$, where the encoders communicate to exchange functions $V_1$ and $V_2$ of the information available to each of them. In our coding scheme, $A$ and $B$ play a role similar to the functions $V_1$ and $V_2$: they are generated based on the information available to the encoders at the end of the block. The key difference lies in how they are exchanged. In the B-L scheme, an extra $\eta N$ time units are spent in each block to exchange $V_1, V_2$. Our scheme superimposes this information onto the next block; each block $l$ carries three layers of information: the base layer $U$ to resolve the decoder's list of block $(l-2)$, information exchange through $A$ and $B$ for the encoders to learn the messages of block $(l-1)$, and fresh messages corresponding to block $l$.

Each block in our scheme has length $N$, as opposed to $(1+\eta)N$ in B-L; i.e., our scheme may be viewed as superimposing the two-way phase of the B-L scheme onto the MAC phase. In general, superposition is a more efficient way of exchanging correlated information than dedicating extra time for the exchange;¹ however, in order to obtain a single-letter rate region with superposition-based information exchange, we cannot choose $P_{AB}$ arbitrarily: it needs to satisfy the consistency condition (2). Hence a direct comparison of our rate region with the Bross-Lapidoth region appears difficult. Both the B-L region and our region are non-convex optimization problems, and there are no efficient ways to solve these. (In fact, the C-L region and the no-feedback MAC capacity region are non-convex optimization problems as well.)

¹For similar reasons, the Cover-Leung scheme outperforms the Gaarder-Wolf scheme for the binary erasure MAC [1], [4].

In [11], the Poisson two-user MAC with feedback was considered as an example. It was shown that computing the feedback capacity of the Poisson MAC is equivalent to computing the feedback capacity of the following binary MAC. The binary MAC, with inputs $(X_1, X_2)$ and output $Y$, is specified by
\[
P_{Y|X_1X_2}(1|01) = P_{Y|X_1X_2}(1|10) = q, \quad P_{Y|X_1X_2}(1|11) = 2q, \quad P_{Y|X_1X_2}(1|00) = 0,
\]
where $0 < q < 0.5$. Note that if an encoder's input is $0$ and the channel output is $1$, the other input is uniquely determined. In all other cases, one input, together with the output, does not determine the other input. Thus the condition for C-L optimality [5] is not satisfied. It was shown in [11] that the feedback capacity region of the two-user Poisson MAC is the set of all rate pairs $\lim_{q \to 0} (R_1(q)/q,\, R_2(q)/q)$, where $(R_1(q), R_2(q))$ are achievable for the above binary MAC with feedback with parameter $q$. We shall compare the maximal equal-rate points for this channel for small $q$. The maximum symmetric sum rate in the C-L region is [11]
\[
\frac{1}{q}(R_1 + R_2) = 0.4994 + o(1) \text{ nats}, \tag{27}
\]
where $o(1) \to 0$ as $q \to 0$. Our rate region from Theorem 1 yields the symmetric sum rate
\[
\frac{1}{q}(R_1 + R_2) = 0.5132 + o(1) \text{ nats}. \tag{28}
\]
The computation is found in Appendix A. The B-L symmetric sum rate reported in [11] is $\frac{1}{q}(R_1 + R_2) = 0.553 + o(1)$ nats, but there appears to be an error in the calculation, which we have communicated to the authors.

V. PROOF OF THEOREM 1

A. Preliminaries

We shall use the notion of strong typicality as defined in [24]. Consider three finite sets $\mathcal{V}$, $\mathcal{Z}_1$ and $\mathcal{Z}_2$, and an arbitrary distribution $P_{VZ_1Z_2}$ on them.
Definition 5. For any distribution $P_V$ on $\mathcal{V}$, a sequence $v^N \in \mathcal{V}^N$ is said to be $\epsilon$-typical with respect to $P_V$ if
\[
\left| \frac{1}{N} \#(a \mid v^N) - P_V(a) \right| \leq \frac{\epsilon}{|\mathcal{V}|}
\]
for all $a \in \mathcal{V}$, and no $a \in \mathcal{V}$ with $P_V(a) = 0$ occurs in $v^N$, where $\#(a \mid v^N)$ denotes the number of occurrences of $a$ in $v^N$. Let $A_\epsilon^{(N)}(P_V)$ denote the set of all sequences that are $\epsilon$-typical with respect to $P_V$.

The following are some properties of typical sequences that will be used in the proof.

Property 0: For all $\epsilon > 0$ and all sufficiently large $N$, we have $P_V^N[A_\epsilon^{(N)}(P_V)] > 1 - \epsilon$.

Property 1: Let $v^N \in A_\epsilon^{(N)}(P_V)$ for some fixed $\epsilon > 0$. If a random vector $Z_1^N$ is generated from the product distribution $\prod_{i=1}^N P_{Z_1|V}(\cdot|v_i)$, then for all sufficiently large $N$, we have $\Pr[(v^N, Z_1^N) \notin A_{\tilde\epsilon}^{(N)}(P_{VZ_1})] < \epsilon$, where $\tilde\epsilon = \epsilon(|\mathcal{V}| + |\mathcal{Z}_1|)$.

Property 2: Let $v^N \in A_\epsilon^{(N)}(P_V)$ for some fixed $\epsilon > 0$. If a random vector $Z_1^N$ is generated from the product distribution $\prod_{i=1}^N P_{Z_1|V}(\cdot|v_i)$ and $Z_2^N$ is generated from the product distribution $\prod_{i=1}^N P_{Z_2|V}(\cdot|v_i)$, then for all sufficiently large $N$, we have
\[
\Pr\big[(v^N, Z_1^N, Z_2^N) \in A_{\tilde\epsilon}^{(N)}(P_{VZ_1Z_2})\big] < 2^{N\delta(\epsilon)}\, \frac{2^{NH(Z_1Z_2|V)}}{2^{NH(Z_1|V)}\, 2^{NH(Z_2|V)}},
\]
where $\tilde\epsilon = \epsilon(|\mathcal{V}| + |\mathcal{Z}_1||\mathcal{Z}_2|)$, and $\delta(\epsilon)$ is a continuous positive function of $\epsilon$ that goes to $0$ as $\epsilon \to 0$.

B. Random Codebook Generation

Fix a distribution $P_{UABX_1X_2Y}$ from $\mathcal{P}$ as in (1), and a pair of conditional distributions $(Q_{A|\tilde{S}\tilde{X}_1}, Q_{B|\tilde{S}\tilde{X}_2})$ from $\mathcal{Q}$. Fix positive integers $N$, $M_1$ and $M_2$: $N$ is the block length, and $M_1$ and $M_2$ denote the sizes of the message sets of the two transmitters in each block. Fix a positive integer $L$; $L$ is the number of blocks in encoding and decoding. Let $M_0[1] = M_0[2] = 1$, and fix $(L-2)$ positive integers $M_0[l]$ for $l = 3, \ldots, L$. Fix $\epsilon > 0$, and let $\epsilon[l] = \epsilon\, (2|\mathcal{S}||\mathcal{X}_1||\mathcal{X}_2|)^{l-1}$. Recall that $S$ denotes the collection $(U, A, B, Y)$ and $\mathcal{S}$ denotes $\mathcal{U} \times \mathcal{A} \times \mathcal{B} \times \mathcal{Y}$.

For $l = 2, \ldots, L$, independently perform the following random experiments.
• For every $(\mathbf{s}, \mathbf{x}_1) \in \mathcal{S}^N \times \mathcal{X}_1^N$, generate one sequence $\mathbf{A}[l, \mathbf{s}, \mathbf{x}_1]$ from $\prod_{n=1}^N Q_{A|\tilde{S}\tilde{X}_1}(\cdot \mid s_n, x_{1n})$.
• Similarly, for every $(\mathbf{s}, \mathbf{x}_2) \in \mathcal{S}^N \times \mathcal{X}_2^N$, generate one sequence $\mathbf{B}[l, \mathbf{s}, \mathbf{x}_2]$ from $\prod_{n=1}^N Q_{B|\tilde{S}\tilde{X}_2}(\cdot \mid s_n, x_{2n})$.

For $l = 1$, independently perform the following random experiment.
• Generate a pair of sequences $(\mathbf{A}[1,-,-], \mathbf{B}[1,-,-])$ from the product distribution $P_{AB}^N$. The dashes indicate that for the first block, $\mathbf{A}$ and $\mathbf{B}$ are generated directly using $P_{AB}$, unlike blocks $2, \ldots, L$, where they are generated using the $S, X_1, X_2$ sequences corresponding to the previous block.

For $l = 1, \ldots, L$, independently perform the following random experiments.
• Independently choose $M_0[l]$ sequences $\mathbf{U}[l, m]$, $m = 1, 2, \ldots, M_0[l]$, where each sequence is generated from the product distribution $P_U^N$.
• For each $(\mathbf{u}, \mathbf{a}) \in \mathcal{U}^N \times \mathcal{A}^N$, independently generate $M_1$ sequences $\mathbf{X}_1[l, i, \mathbf{u}, \mathbf{a}]$, $i = 1, 2, \ldots, M_1$, where each sequence is generated from $\prod_{n=1}^N P_{X_1|UA}(\cdot \mid u_n, a_n)$.
• Similarly, for each $(\mathbf{u}, \mathbf{b}) \in \mathcal{U}^N \times \mathcal{B}^N$, independently generate $M_2$ sequences $\mathbf{X}_2[l, i, \mathbf{u}, \mathbf{b}]$, $i = 1, 2, \ldots, M_2$, where each sequence is generated from $\prod_{n=1}^N P_{X_2|UB}(\cdot \mid u_n, b_n)$.
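To make the bookkeeping of this random experiment concrete, here is a toy Python rendering; it is our own sketch, all alphabets, distributions and sizes are placeholders, and a flat integer stands in for a symbol of $\mathcal{S}$. Rather than materializing a sequence for every element of $\mathcal{S}^N \times \mathcal{X}_1^N$, it generates the required sequences lazily on first use.

```python
import numpy as np

# Toy sketch of the random codebook generation of Section V-B.
rng = np.random.default_rng(5)
N, M0, M1 = 8, 4, 4                  # block length, |U-codebook|, messages
nS, nX1, nU, nA = 4, 2, 2, 2         # flattened alphabet sizes (placeholders)

def cond_pmf(rows, cols):
    p = rng.random((rows, cols))
    return p / p.sum(axis=1, keepdims=True)

P_U = np.full(nU, 1 / nU)
Q_A = cond_pmf(nS * nX1, nA)         # Q_{A | S~, X1~}
P_X1 = cond_pmf(nU * nA, nX1)        # P_{X1 | U, A}

def draw(cond, rows):
    """One symbol per position n, drawn from cond[rows[n], :]."""
    cdf = np.cumsum(cond[rows], axis=1)
    return (rng.random((len(rows), 1)) > cdf).sum(axis=1)

# U-codebook: M0 sequences, each i.i.d. from P_U.
U_book = rng.choice(nU, size=(M0, N), p=P_U)

# A[l, s, x1]: one sequence per (s, x1), generated on demand.
A_table = {}
def A_seq(l, s, x1):
    key = (l, s.tobytes(), x1.tobytes())
    if key not in A_table:
        A_table[key] = draw(Q_A, s * nX1 + x1)
    return A_table[key]

# X1[l, i, u, a]: M1 codewords per (u, a), also generated on demand.
X1_table = {}
def X1_seq(l, i, u, a):
    key = (l, i, u.tobytes(), a.tobytes())
    if key not in X1_table:
        X1_table[key] = draw(P_X1, u * nA + a)
    return X1_table[key]

# Example use: encoder 1's input for block 1, message i = 2.
u = U_book[0]
a = A_seq(1, rng.integers(0, nS, N), rng.integers(0, nX1, N))
print("X1 codeword:", X1_seq(1, 2, u, a))
```

The $\mathbf{B}$ and $\mathbf{X}_2$ tables are built the same way; lazy generation matters because the index sets $\mathcal{S}^N \times \mathcal{X}_1^N$ are exponentially large in $N$.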
Upon receiving the channel output of block $l$, the decoder decodes the message pair corresponding to block $(l-2)$, while the encoders decode the messages of one another corresponding to block $(l-1)$. This is explained below.

C. Encoding Operation

Let $W_1[l]$ and $W_2[l]$ denote the transmitters' messages for block $l$. These are independent random variables uniformly distributed over $\{1, 2, \ldots, M_1\}$ and $\{1, 2, \ldots, M_2\}$, respectively, for $l = 1, 2, \ldots, (L-2)$. We set $W_1[0] = W_2[0] = W_1[L-1] = W_2[L-1] = W_1[L] = W_2[L] = 1$. For large $L$, this will have a negligible effect on the rates.

For each block $l$, encoder 1 chooses a triple of sequences from $\mathcal{U}^N \times \mathcal{A}^N \times \mathcal{X}_1^N$, denoted by $(\mathbf{U}_1[l], \mathbf{A}[l], \mathbf{X}_1[l])$, according to the encoding rule given below. Similarly, encoder 2 chooses a triple of sequences from $\mathcal{U}^N \times \mathcal{B}^N \times \mathcal{X}_2^N$, denoted by $(\mathbf{U}_2[l], \mathbf{B}[l], \mathbf{X}_2[l])$. We will later see that with high probability $\mathbf{U}_1[l] = \mathbf{U}_2[l]$. The MAC output sequence in block $l$ is denoted by $\mathbf{Y}[l]$. Since output feedback is available at both encoders, each encoder maintains a copy of the decoder, so all three terminals are in synchrony.

Block 1:
• Encoder 1 computes $\mathbf{U}_1[1] = \mathbf{U}[1,1]$, $\mathbf{A}[1] = \mathbf{A}[1,-,-]$, and $\mathbf{X}_1[1] = \mathbf{X}_1[1, W_1[1], \mathbf{U}_1[1], \mathbf{A}[1]]$. It then transmits $\mathbf{X}_1[1]$ as the channel input sequence.
• Encoder 2 computes $\mathbf{U}_2[1] = \mathbf{U}[1,1]$, $\mathbf{B}[1] = \mathbf{B}[1,-,-]$, and $\mathbf{X}_2[1] = \mathbf{X}_2[1, W_2[1], \mathbf{U}_2[1], \mathbf{B}[1]]$. It then transmits $\mathbf{X}_2[1]$ as the channel input sequence.
• The MAC produces $\mathbf{Y}[1]$.
• Encoder 1 sets $j[0] = 1$, $\hat{\mathbf{B}}[1] = \mathbf{B}[1]$, and $\mathbf{S}_1[1] = (\mathbf{U}_1[1], \mathbf{A}[1], \hat{\mathbf{B}}[1], \mathbf{Y}[1])$. For $l = 1, \ldots, L$, $j[l]$ denotes encoder 1's estimate of $W_2[l]$, and $\hat{\mathbf{B}}[l]$ denotes its estimate of $\mathbf{B}[l]$.
• Encoder 2 sets $i[0] = 1$, $\hat{\mathbf{A}}[1] = \mathbf{A}[1]$, and $\mathbf{S}_2[1] = (\mathbf{U}_2[1], \hat{\mathbf{A}}[1], \mathbf{B}[1], \mathbf{Y}[1])$.² For $l = 1, \ldots, L$, $i[l]$ denotes encoder 2's estimate of $W_1[l]$, and $\hat{\mathbf{A}}[l]$ denotes its estimate of $\mathbf{A}[l]$.
• Both encoders create the list $\mathcal{L}[0]$ as the set containing the ordered pair $(1,1)$. $\mathcal{L}[l]$ denotes the list of highly likely message pairs corresponding to block $l$ at the decoder. The construction of this list for $l > 1$ will be described in Section V-D.

²We see that $\mathbf{S}_1[1] = \mathbf{S}_2[1]$. In future blocks, this will only hold with high probability.

Block $l$, $l = 2, \ldots, L$: The encoders perform the following sequence of operations.
• If the message pair $(W_1[l-2], j[l-2])$ is present in the list $\mathcal{L}[l-2]$, encoder 1 computes $k_1[l]$ as the index of this message pair in the list $\mathcal{L}[l-2]$. Otherwise, it sets $k_1[l] = 1$. Encoder 1 then computes $\mathbf{U}_1[l] = \mathbf{U}[l, k_1[l]]$, $\mathbf{A}[l] = \mathbf{A}[l, \mathbf{S}_1[l-1], \mathbf{X}_1[l-1]]$, and $\mathbf{X}_1[l] = \mathbf{X}_1[l, W_1[l], \mathbf{U}_1[l], \mathbf{A}[l]]$. It then transmits $\mathbf{X}_1[l]$ as the channel input sequence.

TABLE II
TIME-LINE OF EVENTS FOR TWO SUCCESSIVE BLOCKS (EACH BLOCK OF LENGTH N)

  Time instant        | (l-1)N: block (l-1) ends;         | lN: block l ends;
                      | block l begins at (l-1)N + 1      | block (l+1) begins at lN + 1
  Encoder 1 knows     | a_{l-1}, W_{1(l-2)}, y_{l-1}      | a_l, W_{1(l-1)}, y_l
  Encoder 1 decodes   | b_{l-1} → W_{2(l-2)}              | b_l → W_{2(l-1)}
  Encoder 1 produces  | u_l, a_l → x_{1l}                 | u_{l+1}, a_{l+1} → x_{1(l+1)}
  Encoder 2 knows     | b_{l-1}, W_{2(l-2)}, y_{l-1}      | b_l, W_{2(l-1)}, y_l
  Encoder 2 decodes   | a_{l-1} → W_{1(l-2)}              | a_l → W_{1(l-1)}
  Encoder 2 produces  | u_l, b_l → x_{2l}                 | u_{l+1}, b_{l+1} → x_{2(l+1)}
  Decoder decodes     | u_{l-1} → W_{1(l-3)}, W_{2(l-3)}  | u_l → W_{1(l-2)}, W_{2(l-2)}
• If the message pair $(i[l-2], W_2[l-2])$ is present in the list $\mathcal{L}[l-2]$, encoder 2 computes $k_2[l]$ as the index of this message pair in the list $\mathcal{L}[l-2]$. Otherwise, it sets $k_2[l] = 1$. Encoder 2 then computes $\mathbf{U}_2[l] = \mathbf{U}[l, k_2[l]]$, $\mathbf{B}[l] = \mathbf{B}[l, \mathbf{S}_2[l-1], \mathbf{X}_2[l-1]]$, and $\mathbf{X}_2[l] = \mathbf{X}_2[l, W_2[l], \mathbf{U}_2[l], \mathbf{B}[l]]$. It then transmits $\mathbf{X}_2[l]$ as the channel input sequence.
• The MAC produces $\mathbf{Y}[l]$.
• After receiving $\mathbf{Y}[l]$, encoder 1 wishes to decode $W_2[l-1]$. It attempts to find a unique index $j[l-1]$ such that the two tuples
\[
\big(\mathbf{S}_1[l-1],\, \mathbf{X}_1[l-1],\, \mathbf{X}_2[l-1, j[l-1], \mathbf{U}_1[l-1], \hat{\mathbf{B}}[l-1]]\big)
\]
and
\[
\big(\mathbf{U}_1[l],\, \mathbf{A}[l],\, \mathbf{X}_1[l],\, \mathbf{Y}[l],\, \mathbf{B}[l, \mathbf{S}_1[l-1], \mathbf{X}_2[l-1, j[l-1], \mathbf{U}_1[l-1], \hat{\mathbf{B}}[l-1]]]\big)
\]
are jointly $\epsilon[l]$-typical with respect to (3). Note that encoder 1 uses $(\mathbf{U}_1[l-1], \mathbf{U}_1[l])$ in place of $(\mathbf{U}_2[l-1], \mathbf{U}_2[l])$ for this task. If there exists no such index, or if more than one such index is found, it sets $j[l-1] = 1$. If successful, it computes an estimate of $\mathbf{B}[l]$ using
\[
\hat{\mathbf{B}}[l] = \mathbf{B}[l, \mathbf{S}_1[l-1], \mathbf{X}_2[l-1, j[l-1], \mathbf{U}_1[l-1], \hat{\mathbf{B}}[l-1]]].
\]
It then computes $\mathbf{S}_1[l] = (\mathbf{U}_1[l], \mathbf{A}[l], \hat{\mathbf{B}}[l], \mathbf{Y}[l])$.
• After receiving $\mathbf{Y}[l]$, encoder 2 wishes to decode $W_1[l-1]$. It attempts to find a unique index $i[l-1]$ such that the two tuples
\[
\big(\mathbf{S}_2[l-1],\, \mathbf{X}_2[l-1],\, \mathbf{X}_1[l-1, i[l-1], \mathbf{U}_2[l-1], \hat{\mathbf{A}}[l-1]]\big)
\]
and
\[
\big(\mathbf{U}_2[l],\, \mathbf{B}[l],\, \mathbf{X}_2[l],\, \mathbf{Y}[l],\, \mathbf{A}[l, \mathbf{S}_2[l-1], \mathbf{X}_1[l-1, i[l-1], \mathbf{U}_2[l-1], \hat{\mathbf{A}}[l-1]]]\big)
\]
are jointly $\epsilon[l]$-typical with respect to (3). Note that encoder 2 uses $(\mathbf{U}_2[l-1], \mathbf{U}_2[l])$ in place of $(\mathbf{U}_1[l-1], \mathbf{U}_1[l])$ for this task. If there exists no such index, or if more than one such index is found, it sets $i[l-1] = 1$. If successful, it computes an estimate of $\mathbf{A}[l]$ using
\[
\hat{\mathbf{A}}[l] = \mathbf{A}[l, \mathbf{S}_2[l-1], \mathbf{X}_1[l-1, i[l-1], \mathbf{U}_2[l-1], \hat{\mathbf{A}}[l-1]]].
\]
It then computes $\mathbf{S}_2[l] = (\mathbf{U}_2[l], \hat{\mathbf{A}}[l], \mathbf{B}[l], \mathbf{Y}[l])$.
• Both encoders then execute the actions of the decoder corresponding to block $l$ (described in the next subsection). This step results in a list of message pairs $\mathcal{L}[l-1]$ for block $l-1$, defined in equation (30) below.

The time-line of events at the encoders for two successive blocks is shown in Table II.

D. Decoding Operation

Block 1:
• The decoder receives $\mathbf{Y}[1]$ and sets $k[1] = 1$.

Block 2:
• Upon receiving $\mathbf{Y}[2]$, the decoder sets $k[2] = 1$. It then sets $\bar{\mathbf{A}}[1] = \mathbf{A}[1]$, $\bar{\mathbf{B}}[1] = \mathbf{B}[1]$, and $\mathbf{S}[1] = (\mathbf{U}[1, k[1]], \bar{\mathbf{A}}[1], \bar{\mathbf{B}}[1], \mathbf{Y}[1])$.
• The decoder computes $\mathcal{L}[1]$, the list of message pairs defined by (29) below.

Block $l$, $l = 3, \ldots, L$:
• Upon receiving $\mathbf{Y}[l]$, the decoder determines the unique index $k[l] \in \{1, 2, \ldots, M_0[l]\}$ such that $(\mathbf{Y}[l], \mathbf{U}[l, k[l]], \mathbf{Y}[l-1], \mathbf{U}[l-1, k[l-1]])$ is $\epsilon[l]$-typical. If no such index exists, or if more than one such index exists, then the decoder declares an error.
Block $l$, $l=3,\ldots,L$:
• Upon receiving $Y[l]$, the decoder determines the unique index $k[l]\in\{1,2,\ldots,M_0[l]\}$ such that $\big(Y[l],\,U[l,k[l]],\,Y[l-1],\,U[l-1,k[l-1]]\big)$ is $\epsilon[l]$-typical. If no such index exists, or if more than one does, the decoder declares an error. If successful, the decoder takes the $k[l]$-th pair in the list $\mathcal{L}[l-2]$ and declares it to be the reconstruction $(\hat{W}_1[l-2],\hat{W}_2[l-2])$ of the message pair.
• The decoder computes an estimate of $A[l-1]$ via
$$\bar{A}[l-1]=A\big[l-1,\,S[l-2],\,X1[l-2,\,\hat{W}_1[l-2],\,U[l-2,k[l-2]],\,\bar{A}[l-2]]\big],$$
and similarly an estimate of $B[l-1]$ via
$$\bar{B}[l-1]=B\big[l-1,\,S[l-2],\,X2[l-2,\,\hat{W}_2[l-2],\,U[l-2,k[l-2]],\,\bar{B}[l-2]]\big].$$
It then computes $S[l-1]=(U[l-1,k[l-1]],\bar{A}[l-1],\bar{B}[l-1],Y[l-1])$.
• The decoder then computes $\mathcal{L}[l-1]$, the list of message pairs defined by
$$
\mathcal{L}[l-1]=\Big\{(i,j):\big(S[l-1],\,X1[l-1,i,U[l-1,k[l-1]],\bar{A}[l-1]],\,X2[l-1,j,U[l-1,k[l-1]],\bar{B}[l-1]]\big)\text{ is }\epsilon[l]\text{-typical, and }\big(U[l,k[l]],\,Y[l],\,A\big[l,S[l-1],X1[l-1,i,U[l-1,k[l-1]],\bar{A}[l-1]]\big],\,B\big[l,S[l-1],X2[l-1,j,U[l-1,k[l-1]],\bar{B}[l-1]]\big]\big)\text{ is }\epsilon[l]\text{-typical}\Big\}. \tag{30}
$$

E. Error Analysis

For block $l\in\{1,2,\ldots,L\}$, if $U_1[l]=U_2[l]$, let $U[l]=U_1[l]$; otherwise, let $U[l]$ be a fixed deterministic sequence that does not depend on $l$.

Block 1: Let $E[1]^c$ be the event that $(U[1],A[1],B[1],X_1[1],X_2[1],Y[1])$ is $\epsilon[1]$-typical with respect to $P_{UABX_1X_2Y}$. By Property 0 in Section V-A, we have $\Pr[E[1]]\le\epsilon$ for all sufficiently large $N$.

Block 2:
- Let $E_1[2]$ be the event that encoder 1 fails to decode $W_2[1]$ upon receiving $Y[2]$.
- Let $E_2[2]$ be the event that encoder 2 fails to decode $W_1[1]$ upon receiving $Y[2]$.
- Let $E_3[2]$ be the event that, at the decoder, $|\mathcal{L}[1]|>2^{N(I(U;Y|\tilde{U}\tilde{Y})-2\delta_1(\epsilon[2]))}$. Here $\delta_1(\cdot)$ is a continuous positive function that tends to $0$ as its argument tends to $0$, similar to the one used in Property 2 of typical sequences.
The error event $E[2]$ in Block 2 is given by $E[2]=E_1[2]\cup E_2[2]\cup E_3[2]$.

Conditioned on the event $E[1]^c$, the conditional probability that the tuples $(U[1],A[1],B[1],X_1[1],X_2[1],Y[1])$ and $(U[2],A[2],B[2],X_1[2],X_2[2],Y[2])$ are not jointly $\epsilon[2]$-typical with respect to (3) is smaller than $\epsilon$ for all sufficiently large $N$ (by Property 1). Using this and Property 2 of typical sequences, we have the following upper bound on $\Pr[E_1[2]\,|\,E[1]^c]$:
$$
\begin{aligned}
\Pr[E_1[2]\,|\,E[1]^c]
&\le \epsilon+\sum_{j=1}^{M_2} 2^{N\delta_1(\epsilon[2])}\,
\frac{2^{N H(\tilde{X}_2 B\,|\,\tilde{S}\tilde{X}_1 U A X_1 Y)}}{2^{N H(\tilde{X}_2|\tilde{U}\tilde{B})}\,2^{N H(B|\tilde{S}\tilde{X}_2)}}\\
&\stackrel{(a)}{=} \epsilon+\sum_{j=1}^{M_2} 2^{N\delta_1(\epsilon[2])}\,
2^{-N\left(I(\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B}\tilde{X}_1)+I(\tilde{X}_2 B;Y|\tilde{S}\tilde{X}_1 U A X_1)\right)}\\
&\stackrel{(b)}{=} \epsilon+\sum_{j=1}^{M_2} 2^{N\delta_1(\epsilon[2])}\,
2^{-N\left(I(X_2;Y|UABX_1)+I(\tilde{X}_2 B;Y|\tilde{S}\tilde{X}_1 U A X_1)\right)}\\
&\stackrel{(c)}{=} \epsilon+\sum_{j=1}^{M_2} 2^{N\delta_1(\epsilon[2])}\,
2^{-N\,I(X_2;Y|UAX_1\tilde{X}_1\tilde{S})}\\
&\stackrel{(d)}{\le} 2\epsilon.
\end{aligned}
\tag{31}
$$
In the above, $(a)$ can be obtained using the chain rule of mutual information along with the following Markov chains:
$$
\begin{gathered}
B\tilde{X}_2-\tilde{S}\tilde{X}_1-UA,\qquad \tilde{S}\tilde{X}_1\tilde{X}_2-UA-X_1,\\
A\tilde{X}_1-\tilde{S}\tilde{X}_2-UB,\qquad \tilde{S}\tilde{X}_1\tilde{X}_2-UB-X_2,\\
\tilde{S}\tilde{X}_1\tilde{X}_2-UAB-X_1X_2-Y.
\end{gathered}
\tag{32}
$$
Indeed,
$$
\begin{aligned}
&H(\tilde{X}_2|\tilde{U}\tilde{B})+H(B|\tilde{S}\tilde{X}_2)-H(\tilde{X}_2 B|\tilde{S}\tilde{X}_1 UAX_1Y)\\
&= I(\tilde{X}_2;\tilde{A}\tilde{X}_1\tilde{Y}UAX_1Y|\tilde{U}\tilde{B})+I(B;\tilde{X}_1UAX_1Y|\tilde{S}\tilde{X}_2)\\
&= I(\tilde{X}_2;\tilde{A}\tilde{X}_1\tilde{Y}|\tilde{U}\tilde{B})+I(\tilde{X}_2;UAX_1Y|\tilde{S}\tilde{X}_1)+I(B;UAX_1Y|\tilde{S}\tilde{X}_1\tilde{X}_2)\\
&= I(\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B}\tilde{X}_1)+I(\tilde{X}_2 B;Y|\tilde{S}\tilde{X}_1UAX_1).
\end{aligned}
$$
$(b)$ follows from the fact that $(\tilde{S},\tilde{X}_1,\tilde{X}_2)$ has the same distribution as $(S,X_1,X_2)$, and $(c)$ can be obtained as follows using (32):
$$
\begin{aligned}
&I(X_2;Y|UABX_1)+I(\tilde{X}_2 B;Y|\tilde{S}\tilde{X}_1UAX_1)\\
&= I(X_2;Y|UABX_1\tilde{S}\tilde{X}_1\tilde{X}_2)+I(\tilde{X}_2 B;Y|\tilde{S}\tilde{X}_1UAX_1)\\
&= I(X_2\tilde{X}_2 B;Y|\tilde{S}\tilde{X}_1UAX_1)\\
&= I(X_2;Y|UAX_1\tilde{X}_1\tilde{S}).
\end{aligned}
$$
$(d)$ holds for all sufficiently large $N$ if
$$\frac{1}{N}\log M_2 < I(X_2;Y|\tilde{S}\tilde{X}_1UAX_1)-4\delta_1(\epsilon[2]). \tag{33}$$
Similarly, $\Pr[E_2[2]\,|\,E[1]^c]\le 2\epsilon$ for all sufficiently large $N$ if
$$\frac{1}{N}\log M_1 < I(X_1;Y|\tilde{S}\tilde{X}_2UBX_2)-4\delta_1(\epsilon[2]). \tag{34}$$
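Conditions (33) and (34) express the usual union-bound trade-off: with $M_2=2^{NR_2}$ candidate messages, the per-candidate false-acceptance probability $2^{-N(I-\delta_1)}$ must beat the number of candidates. A back-of-envelope Python sketch (all numerical values are illustrative assumptions, not from the paper) shows the exponential decay:

```python
def union_bound(N, R2, I, delta):
    """Upper bound M2 * 2^{-N(I - delta)} = 2^{N(R2 - I + delta)}."""
    return 2.0 ** (N * (R2 - I + delta))

# Rate set 4% below the mutual-information threshold (illustrative values).
for N in (100, 500, 1000):
    print(N, union_bound(N, R2=0.35, I=0.40, delta=0.01))
# The bound decays like 2^{-0.04 N}: the false-decoding term vanishes
# exponentially in the block length whenever R2 < I - delta.
```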
To bound $\Pr[E_3[2]\,|\,E[1]^c]$, start by defining $\Psi_{k,l}=1$ if $(k,l)\in\mathcal{L}[1]$, and $\Psi_{k,l}=0$ otherwise. Then
$$
\mathbb{E}(|\mathcal{L}[1]|)=\mathbb{E}\,\Psi_{W_1[1],W_2[1]}+\sum_{i\ne W_1[1]}\mathbb{E}\,\Psi_{i,W_2[1]}+\sum_{j\ne W_2[1]}\mathbb{E}\,\Psi_{W_1[1],j}+\sum_{i\ne W_1[1],\,j\ne W_2[1]}\mathbb{E}\,\Psi_{i,j}. \tag{35}
$$
For $j\ne W_2[1]$, using Property 2 of typical sequences we have
$$
\mathbb{E}\,\Psi_{W_1[1],j}\le 2^{N\delta_1(\epsilon[2])}\,\frac{2^{NH(\tilde{X}_2B|\tilde{S}\tilde{X}_1UAY)}}{2^{NH(\tilde{X}_2|\tilde{U}\tilde{B})}\,2^{NH(B|\tilde{S}\tilde{X}_2)}}
\stackrel{(a)}{=}2^{N\delta_1(\epsilon[2])}\,2^{-NI(\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B}\tilde{X}_1)}\,2^{-NI(B;Y|\tilde{S}\tilde{X}_1UA)}, \tag{36}
$$
where $(a)$ is obtained by using the chain rule of mutual information and the Markov chains in (32) as follows:
$$
\begin{aligned}
&H(\tilde{X}_2|\tilde{U}\tilde{B})+H(B|\tilde{S}\tilde{X}_2)-H(\tilde{X}_2B|\tilde{S}\tilde{X}_1UAY)\\
&= I(\tilde{X}_2;\tilde{A}\tilde{X}_1\tilde{Y}UAY|\tilde{U}\tilde{B})+I(B;\tilde{X}_1UAY|\tilde{S}\tilde{X}_2)\\
&= I(\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B}\tilde{X}_1)+I(\tilde{X}_2;Y|\tilde{S}\tilde{X}_1UA)+I(B;Y|\tilde{S}\tilde{X}_1\tilde{X}_2UA)\\
&= I(\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B}\tilde{X}_1)+I(B\tilde{X}_2;Y|\tilde{S}\tilde{X}_1UA)\\
&= I(\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B}\tilde{X}_1)+I(B;Y|\tilde{S}\tilde{X}_1UA).
\end{aligned}
$$
Using the fact that $(\tilde{S},\tilde{X}_1,\tilde{X}_2)$ has the same distribution as $(S,X_1,X_2)$, (36) becomes
$$
\frac{1}{N}\log\Big(\sum_{j\ne W_2[1]}\mathbb{E}\,\Psi_{W_1[1],j}\Big)\le\frac{\log M_2}{N}-I(X_2;Y|UABX_1)-I(B;Y|\tilde{S}\tilde{X}_1UA)+\delta_1(\epsilon[2]). \tag{37}
$$
Similarly,
$$
\frac{1}{N}\log\Big(\sum_{i\ne W_1[1]}\mathbb{E}\,\Psi_{i,W_2[1]}\Big)\le\frac{\log M_1}{N}-I(X_1;Y|UABX_2)-I(A;Y|\tilde{S}\tilde{X}_2UB)+\delta_1(\epsilon[2]). \tag{38}
$$
Using Property 2 of typical sequences, for $i\ne W_1[1]$ and $j\ne W_2[1]$ we have
$$
\mathbb{E}\,\Psi_{i,j}\le 2^{N\delta_1(\epsilon[2])}\,\frac{2^{NH(\tilde{X}_1\tilde{X}_2AB|\tilde{S}UY)}}{2^{NH(\tilde{X}_1|\tilde{U}\tilde{A})}\,2^{NH(\tilde{X}_2|\tilde{U}\tilde{B})}\,2^{NH(A|\tilde{S}\tilde{X}_1)}\,2^{NH(B|\tilde{S}\tilde{X}_2)}}
\stackrel{(a)}{=}2^{N\delta_1(\epsilon[2])}\,2^{-NI(\tilde{X}_1\tilde{X}_2;\tilde{Y}|\tilde{U}\tilde{A}\tilde{B})}\,2^{-NI(AB;Y|U\tilde{S})}, \tag{39}
$$
where $(a)$ is obtained by using the chain rule of mutual information and the Markov chains in (32), following steps similar to those for (36). Hence
$$
\frac{1}{N}\log\Big(\sum_{i\ne W_1[1],\,j\ne W_2[1]}\mathbb{E}\,\Psi_{i,j}\Big)\le\frac{1}{N}\log M_1+\frac{1}{N}\log M_2-I(X_1X_2;Y|UAB)-I(AB;Y|U\tilde{S})+\delta_1(\epsilon[2]). \tag{40}
$$
Using (37), (38) and (40), (35) can be written as
$$
\begin{aligned}
\mathbb{E}|\mathcal{L}[1]|\le 1
&+M_1\,2^{-N\left(I(X_1;Y|UABX_2)+I(A;Y|\tilde{S}\tilde{X}_2UB)-\delta_1(\epsilon[2])\right)}\\
&+M_2\,2^{-N\left(I(X_2;Y|UABX_1)+I(B;Y|\tilde{S}\tilde{X}_1UA)-\delta_1(\epsilon[2])\right)}\\
&+M_1M_2\,2^{-N\left(I(X_1X_2;Y|UAB)+I(AB;Y|U\tilde{S})-\delta_1(\epsilon[2])\right)}.
\end{aligned}
\tag{41}
$$
Using (41) in the Markov inequality, one can show that for all sufficiently large $N$,
$$
P\Big[\,|\mathcal{L}[1]|<2^{N(\max\{T_1,T_2,T_3\}+2\delta_1(\epsilon[2]))}\Big]>1-\epsilon,
$$
where
$$
\begin{aligned}
T_1&\triangleq\frac{\log M_1}{N}+\frac{\log M_2}{N}-I(X_1X_2;Y|ABU)-I(AB;Y|U\tilde{S}),\\
T_2&\triangleq\frac{\log M_1}{N}-I(X_1;Y|X_2ABU)-I(A;Y|UB\tilde{S}\tilde{X}_2),\\
T_3&\triangleq\frac{\log M_2}{N}-I(X_2;Y|X_1ABU)-I(B;Y|UA\tilde{S}\tilde{X}_1).
\end{aligned}
\tag{42}
$$
Hence $\Pr[E_3[2]\,|\,E[1]^c]<2\epsilon$ if
$$
\max\{T_1,T_2,T_3\}\le I(U;Y|\tilde{U}\tilde{Y})-4\delta_1(\epsilon[2]). \tag{43}
$$
Hence $\Pr[E[2]\,|\,E[1]^c]<6\epsilon$ if (33), (34) and (43) are satisfied.
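The list-size argument in (41)-(43) can be traced numerically. In the sketch below, every mutual-information value is a made-up placeholder (in the actual scheme they are computed from the joint distribution (3)); it evaluates the three exponents $T_1,T_2,T_3$ of (42) and checks whether the resulting high-probability list rate is covered by the cloud-center rate $I(U;Y|\tilde{U}\tilde{Y})$, as (43) requires.

```python
# Placeholder values for the mutual informations appearing in (42)-(43).
R1, R2 = 0.30, 0.30                 # (log M1)/N, (log M2)/N
I_sum, I_ABY = 0.45, 0.05           # I(X1X2;Y|ABU), I(AB;Y|U S~)
I_1, I_AY = 0.35, 0.04              # I(X1;Y|X2ABU), I(A;Y|UB S~ X2~)
I_2, I_BY = 0.35, 0.04              # I(X2;Y|X1ABU), I(B;Y|UA S~ X1~)
I_UY = 0.12                         # I(U;Y|U~ Y~)

T1 = R1 + R2 - I_sum - I_ABY
T2 = R1 - I_1 - I_AY
T3 = R2 - I_2 - I_BY

list_rate = max(T1, T2, T3)         # residual uncertainty rate at the decoder
print(f"list rate = {list_rate:.3f}; "
      f"coverable by the U-codebook: {list_rate <= I_UY}")
```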
Block $l$, $l=3,\ldots,L$:
- Let $E_1[l]$ be the event that, after receiving $Y[l]$, encoder 1 fails to decode $W_2[l-1]$.
- Let $E_2[l]$ be the event that, after receiving $Y[l]$, encoder 2 fails to decode $W_1[l-1]$.
- Let $E_3[l]$ be the event that, at the decoder, $|\mathcal{L}[l-1]|>2^{N(I(U;Y|\tilde{U}\tilde{Y})-2\delta_1(\epsilon[l]))}$.
- Let $E_4[l]$ be the event that the decoder fails to correctly decode $U[l]$.
The error event $E[l]$ in Block $l$ is given by $E[l]=E_1[l]\cup E_2[l]\cup E_3[l]\cup E_4[l]$.

Using arguments similar to those used in Block 2, it can be shown that $\Pr[E_i[l]\,|\,E[l-1]^c]<2\epsilon$ for $i=1,2,3$ for all sufficiently large $N$, if the conditions given by (33), (34), and (43) are satisfied with $\epsilon[2]$ replaced by $\epsilon[l]$. Moreover, using standard arguments one can also show that $\Pr[E_4[l]\,|\,E[l-1]^c]<2\epsilon$ for all sufficiently large $N$ if
$$
\frac{1}{N}\log M_0[l]=I(U;\tilde{U}\tilde{Y}Y)-2\delta_1(\epsilon[l])=I(U;Y|\tilde{U}\tilde{Y})-2\delta_1(\epsilon[l]), \tag{44}
$$
where the second equality holds because the cloud center $U$ of the current block is chosen independently of the previous block's $(\tilde{U},\tilde{Y})$. Hence $\Pr[E[l]\,|\,E[l-1]^c]<8\epsilon$ for all sufficiently large $N$.

Overall decoding error probability: The above arguments imply that the probability of decoding error over $L$ blocks can be made to satisfy
$$
\Pr[E]=\Pr\Big[\bigcup_{l=1}^{L}E[l]\Big]\le 8L\epsilon
$$
if $M_0[l]$ is chosen according to (44) for $l=3,\ldots,L$, and $M_1,M_2$ satisfy the following conditions:
$$
\begin{aligned}
\frac{1}{N}\log M_2&\le I(X_2;Y|\tilde{S}\tilde{X}_1UAX_1)-\theta,\\
\frac{1}{N}\log M_1&\le I(X_1;Y|\tilde{S}\tilde{X}_2UBX_2)-\theta,\\
\frac{1}{N}\log M_1+\frac{1}{N}\log M_2&\le I(X_1X_2;Y|ABU)+I(AB;Y|U\tilde{S})+I(U;Y|\tilde{U}\tilde{Y})-\theta,\\
\frac{1}{N}\log M_1&\le I(X_1;Y|X_2ABU)+I(A;Y|UB\tilde{S}\tilde{X}_2)+I(U;Y|\tilde{U}\tilde{Y})-\theta,\\
\frac{1}{N}\log M_2&\le I(X_2;Y|X_1ABU)+I(B;Y|UA\tilde{S}\tilde{X}_1)+I(U;Y|\tilde{U}\tilde{Y})-\theta,
\end{aligned}
$$
where $\theta=\sum_{l=1}^{L}4\delta_1(\epsilon[l])$. This implies that the following rate region is achievable:
$$
\begin{aligned}
R_1&\le I(X_1;Y|UABX_2)+I(A;Y|UB\tilde{S}\tilde{X}_2)+I(U;Y|\tilde{U}\tilde{Y}),\\
R_2&\le I(X_2;Y|UABX_1)+I(B;Y|UA\tilde{S}\tilde{X}_1)+I(U;Y|\tilde{U}\tilde{Y}),\\
R_1&\le I(X_1;Y|UBX_2\tilde{S}\tilde{X}_2),\\
R_2&\le I(X_2;Y|UAX_1\tilde{S}\tilde{X}_1),\\
R_1+R_2&\le I(X_1X_2;Y|ABU)+I(AB;Y|U\tilde{S})+I(U;Y|\tilde{U}\tilde{Y}).
\end{aligned}
\tag{45}
$$
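Regions such as (45) are evaluated by computing conditional mutual informations from the single-letter joint distribution. A generic helper of the kind one would use for this evaluation is sketched below; the axis-labeling convention and function name are our own assumptions. As a sanity check, it verifies numerically the chain-rule identity $I(AX_1;Y|U)=I(A;Y|U)+I(X_1;Y|UA)$, the same mechanism used in the equivalence argument that follows.

```python
import numpy as np

def cond_mi(p, names, X, Y, Z):
    """I(X; Y | Z) in bits for a joint pmf `p` whose axes are labeled `names`.

    X, Y, Z are disjoint tuples of axis labels; remaining axes are summed out.
    Uses I(X;Y|Z) = H(XZ) + H(YZ) - H(XYZ) - H(Z).
    """
    def H(keep):
        axes = tuple(i for i, n in enumerate(names) if n not in keep)
        q = p.sum(axis=axes).ravel()
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    return (H(set(X) | set(Z)) + H(set(Y) | set(Z))
            - H(set(X) | set(Y) | set(Z)) - H(set(Z)))

# Sanity check of the chain rule on a random joint pmf over (U, A, X1, Y).
rng = np.random.default_rng(1)
p = rng.random((2, 2, 2, 2)); p /= p.sum()
names = ("U", "A", "X1", "Y")
lhs = cond_mi(p, names, ("A", "X1"), ("Y",), ("U",))
rhs = (cond_mi(p, names, ("A",), ("Y",), ("U",))
       + cond_mi(p, names, ("X1",), ("Y",), ("U", "A")))
assert abs(lhs - rhs) < 1e-10
```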
Next we show that the above rate region is equivalent to that given in Theorem 1. Using the Markov chains in (32), we get
$$
\begin{aligned}
I(X_1X_2;Y|ABU)+I(AB;Y|U\tilde{S})
&=I(X_1X_2;Y|ABU\tilde{S})+I(AB;Y|U\tilde{S})\\
&=I(ABX_1X_2;Y|U\tilde{S})\\
&=I(X_1X_2;Y|U\tilde{S}).
\end{aligned}
\tag{46}
$$
Moreover,
$$
\begin{aligned}
&I(X_1;Y|UABX_2)+I(A;Y|UB\tilde{S}\tilde{X}_2)\\
&=I(X_1;Y|UABX_2\tilde{S}\tilde{X}_2)+I(A;Y|UB\tilde{S}\tilde{X}_2)\\
&=I(X_1;YX_2|UAB\tilde{S}\tilde{X}_2)+I(A;Y|UB\tilde{S}\tilde{X}_2)\\
&=I(X_1;YX_2|UAB\tilde{S}\tilde{X}_2)+I(A;YX_2|UB\tilde{S}\tilde{X}_2)-I(A;X_2|UBY\tilde{S}\tilde{X}_2)\\
&=I(AX_1;YX_2|UB\tilde{S}\tilde{X}_2)-I(A;X_2|UBY\tilde{S}\tilde{X}_2)\\
&=I(AX_1;Y|UBX_2\tilde{S}\tilde{X}_2)-I(A;X_2|UBY\tilde{S}\tilde{X}_2)\\
&=I(X_1;Y|UBX_2\tilde{S}\tilde{X}_2)-I(A;X_2|UBY\tilde{S}\tilde{X}_2).
\end{aligned}
\tag{47}
$$
Similarly,
$$
I(X_2;Y|UABX_1)+I(B;Y|UA\tilde{S}\tilde{X}_1)=I(X_2;Y|UAX_1\tilde{S}\tilde{X}_1)-I(B;X_1|UAY\tilde{S}\tilde{X}_1). \tag{48}
$$
Equations (46), (47) and (48) imply the desired result.

VI. EXTENSION OF CODING SCHEME

We can extend the coding scheme by thinning the fully connected graph to the perfectly correlated graph over three blocks, i.e., going through two intermediate steps with progressively thinner graphs in each step. This yields a potentially larger rate region, as described below. Let the rate pair $(R_1,R_2)$ lie outside the region of Theorem 1, and consider the transmission of the message pair $(W_{1l},W_{2l})$ through $(X_{1l},X_{2l})$ in block $l$. (Fig. 8 shows the decoder's message graph for the message pair $(W_{1l},W_{2l})$: (a) after receiving $Y_l$; (b) after receiving $Y_{l+1}$; (c) after receiving $Y_{l+2}$.) A toy numerical illustration of the thinning dynamic is sketched after the following list.

• At the end of block $l$, the effective message graph of the decoder given $Y_l$ is shown in Figure 8(a); it is a correlated message graph. For each sequence $X_1$, choose one sequence $A'$, conditioned on the information at encoder 1. Similarly, choose one sequence $B'$ for each $X_2$, based on the information at encoder 2. The $A'$ and $B'$ sequences corresponding to $X_{1l}$ and $X_{2l}$ are set to $A'_{l+1}$ and $B'_{l+1}$, respectively. Note that $A'$ and $B'$ here are similar to $A$ and $B$ of the original coding scheme.

• At the end of block $(l+1)$, both encoders and the decoder receive $Y_{l+1}$. The degree of each left vertex in the graph of Figure 8(a) is too large for encoder 2 to decode $A'_{l+1}$ from $Y_{l+1}$; similarly, encoder 1 cannot decode $B'_{l+1}$ from $Y_{l+1}$. So we have the correlated message graph of Figure 8(b), which is a subgraph of the graph in Figure 8(a): an edge of graph 8(a) is present in graph 8(b) if and only if the corresponding $(A'_{l+1},B'_{l+1})$ pair is jointly typical with $Y_{l+1}$. At the end of block $(l+1)$, though the encoders do not yet know the edge $(W_{1l},W_{2l})$, observe that we have thinned the message graph: the degree of each vertex in graph 8(b) is strictly smaller than its degree in graph 8(a).

• Each left vertex in graph 8(b) represents a pair $(X_{1l},A'_{l+1})$. For each such pair, choose one sequence $A$ conditioned on the information at encoder 1 at the end of block $(l+1)$. Similarly, for each right vertex $(X_{2l},B'_{l+1})$, choose one sequence $B$ at encoder 2. The $A$ and $B$ sequences corresponding to $(X_{1l},A'_{l+1})$ and $(X_{2l},B'_{l+1})$ are set to $A_{l+2}$ and $B_{l+2}$, respectively.

• At the end of block $(l+2)$, the two encoders can decode $A_{l+2}$ and $B_{l+2}$ from $Y_{l+2}$ with high probability. (The graph of Figure 8(b) should be sufficiently 'thin' to ensure this.) They now know the edge $(W_{1l},W_{2l})$, and the message graph is as shown in Figure 8(c). The two encoders then cooperate to send $U_{l+3}$, which resolves the decoder's residual uncertainty.
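The graph-thinning dynamic can be illustrated with a toy Monte Carlo sketch. Here each stage keeps an edge only if its auxiliary pair "looks typical" with the channel output, modeled crudely as an independent coin flip; the survival probabilities and sizes are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
M1, M2 = 64, 64                     # message-set sizes (toy values)

# Stage 0: fully connected bipartite graph (all message pairs possible).
edges = np.ones((M1, M2), dtype=bool)

# Each thinning stage keeps an edge with a small survival probability,
# standing in for the 2^{-N I} typicality test of the scheme.
for p_survive in (0.05, 0.02):
    keep = rng.random((M1, M2)) < p_survive
    keep[0, 0] = True               # the true pair (W_1l, W_2l) always survives
    edges &= keep
    print("max left degree:", edges.sum(axis=1).max(),
          "max right degree:", edges.sum(axis=0).max())

# After enough thinning the surviving graph is, with high probability, a set
# of (near-)disjoint edges, so each encoder can identify the other's message.
```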
Thus, in this extended scheme, each message pair is decoded by the encoders with a delay of two blocks, and by the decoder with a delay of one further block.

Stationarity: To obtain a single-letter rate region, we require a stationary distribution of sequences in each block. In other words, we need the random sequences $(U,A',B',A,B,X_1,X_2,Y)$ to be characterized by the same single-letter product distribution in each block. This will happen if we can ensure that the $A',B',A,B$ sequences in each block have the same single-letter distribution $P_{A'B'AB}$. The correlation between $(A'_{l+1},A_{l+1})$ and $(B'_{l+1},B_{l+1})$ is generated using the information available at each encoder at the end of block $l$. At this time, both encoders know $s_l\triangleq(u,a,b,y)_l$. In addition, encoder 1 also knows $(a'_l,x_{1l})$, and hence we make it generate $(A',A)_{l+1}$ according to the product distribution $Q^n_{A'A|\tilde{S}\tilde{A}'\tilde{X}_1}(\cdot\,|\,s_l,a'_l,x_{1l})$. (Recall that we use a tilde to denote the sequence of the previous block.) Similarly, we make encoder 2 generate $(B',B)_{l+1}$ according to the product distribution $Q^n_{B'B|\tilde{S}\tilde{B}'\tilde{X}_2}(\cdot\,|\,s_l,b'_l,x_{2l})$.

If the pair $(Q_{A'A|\tilde{S}\tilde{A}'\tilde{X}_1},Q_{B'B|\tilde{S}\tilde{B}'\tilde{X}_2})$ satisfies the consistency condition defined below, the pair $(A',B',A,B)_{l+1}$ belongs to the typical set $T(P_{A'B'AB})$ with high probability. This ensures stationarity of the coding scheme. We state the coding theorem below.

Definition 6. For a given MAC $(\mathcal{X}_1,\mathcal{X}_2,\mathcal{Y},P_{Y|X_1,X_2})$, define $\mathcal{P}$ as the set of all distributions $P$ on $\mathcal{U}\times\mathcal{A}\times\mathcal{B}\times\mathcal{A}'\times\mathcal{B}'\times\mathcal{X}_1\times\mathcal{X}_2\times\mathcal{Y}$ of the form
$$
P_U\,P_{A'B'AB}\,P_{X_1|UA'A}\,P_{X_2|UB'B}\,P_{Y|X_1X_2}, \tag{49}
$$
where $\mathcal{U},\mathcal{A}',\mathcal{A},\mathcal{B}',\mathcal{B}$ are arbitrary finite sets. Consider two sets of random variables $(U,A',B',A,B,X_1,X_2,Y)$ and $(\tilde{U},\tilde{A}',\tilde{B}',\tilde{A},\tilde{B},\tilde{X}_1,\tilde{X}_2,\tilde{Y})$, each having the above distribution $P$. For conciseness, we refer to the collection $(U,A,B,Y)$ as $S$, and to $(\tilde{U},\tilde{A},\tilde{B},\tilde{Y})$ as $\tilde{S}$. Hence $P_{S,X_1,X_2}=P_{\tilde{S},\tilde{X}_1,\tilde{X}_2}=P$. Define $\mathcal{Q}$ as the set of pairs of conditional distributions $(Q_{A'A|\tilde{S},\tilde{A}',\tilde{X}_1},Q_{B'B|\tilde{S},\tilde{B}',\tilde{X}_2})$ of the form
$$
\begin{aligned}
Q_{A'A|\tilde{S},\tilde{A}',\tilde{X}_1}&=Q_{A|\tilde{S},\tilde{A}'}\cdot Q_{A'|A,\tilde{X}_1,\tilde{S},\tilde{A}'},\\
Q_{B'B|\tilde{S},\tilde{B}',\tilde{X}_2}&=Q_{B|\tilde{S},\tilde{B}'}\cdot Q_{B'|B,\tilde{X}_2,\tilde{S},\tilde{B}'},
\end{aligned}
$$
that satisfy the following consistency condition for all $(a',b',a,b)\in\mathcal{A}'\times\mathcal{B}'\times\mathcal{A}\times\mathcal{B}$:
$$
P_{A'B'AB}(a',b',a,b)=\sum_{\tilde{s},\tilde{a}',\tilde{b}',\tilde{x}_1,\tilde{x}_2}P_{\tilde{S},\tilde{A}',\tilde{B}',\tilde{X}_1,\tilde{X}_2}(\tilde{s},\tilde{a}',\tilde{b}',\tilde{x}_1,\tilde{x}_2)\;Q_{A'A|\tilde{S},\tilde{A}',\tilde{X}_1}(a',a\,|\,\tilde{s},\tilde{a}',\tilde{x}_1)\;Q_{B'B|\tilde{S},\tilde{B}',\tilde{X}_2}(b',b\,|\,\tilde{s},\tilde{b}',\tilde{x}_2). \tag{50}
$$
Then, for any $(Q_{A'A|\tilde{S},\tilde{A}',\tilde{X}_1},Q_{B'B|\tilde{S},\tilde{B}',\tilde{X}_2})\in\mathcal{Q}$, the joint distribution of the two sets of random variables $(\tilde{S},\tilde{A}',\tilde{B}',\tilde{X}_1,\tilde{X}_2)$ and $(S,A',B',X_1,X_2)$ is given by
$$
P_{\tilde{S}\tilde{A}'\tilde{B}'\tilde{X}_1\tilde{X}_2}\;Q_{A'A|\tilde{S},\tilde{A}',\tilde{X}_1}\;Q_{B'B|\tilde{S},\tilde{B}',\tilde{X}_2}\;P_{UX_1X_2Y|A'B'AB}.
$$
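The consistency condition (50) is a fixed-point requirement: pushing the previous block's distribution through the chosen conditionals must reproduce the target marginal $P_{A'B'AB}$. With the distributions stored as arrays, the check is a single contraction. The sketch below uses tiny stand-in alphabets and random placeholder distributions, purely to show the mechanics of the check (an assumption, not the paper's example).

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny stand-in alphabets; axis order of P_prev: (s~, a'~, b'~, x1~, x2~).
S, AP, BP, X1, X2, A, B = 2, 2, 2, 2, 2, 2, 2

# Placeholder joint over the previous block's variables.
P_prev = rng.random((S, AP, BP, X1, X2)); P_prev /= P_prev.sum()

# Placeholder conditionals Q_{A'A | s~, a'~, x1~} and Q_{B'B | s~, b'~, x2~},
# normalized over their output pairs for each conditioning cell.
QA = rng.random((S, AP, X1, AP, A)); QA /= QA.sum(axis=(-2, -1), keepdims=True)
QB = rng.random((S, BP, X2, BP, B)); QB /= QB.sum(axis=(-2, -1), keepdims=True)

# Right-hand side of (50): the induced marginal over (a', b', a, b).
rhs = np.einsum("sjkmn,sjmpa,sknqb->pqab", P_prev, QA, QB)
assert abs(rhs.sum() - 1.0) < 1e-12     # it is a valid pmf

# A pair (QA, QB) belongs to the set Q exactly when `rhs` equals the target
# marginal P_{A'B'AB}, e.g. np.allclose(rhs, P_target).
```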
Theorem 2. For a MAC $(\mathcal{X}_1,\mathcal{X}_2,\mathcal{Y},P_{Y|X_1,X_2})$, for any distribution $P$ from $\mathcal{P}$ and any pair of conditional distributions $(Q_{A'A|\tilde{S},\tilde{A}',\tilde{X}_1},Q_{B'B|\tilde{S},\tilde{B}',\tilde{X}_2})$ from $\mathcal{Q}$, the following rate region is achievable:
$$
\begin{aligned}
R_1&<I(X_1;Y|X_2B'B\tilde{S}U),\\
R_1&<I(X_1;Y|X_2A'B'AB\tilde{S}U)+I(A';Y|B'AB\tilde{S}U)+I(A;Y|B\tilde{S}U)+I(U;Y),\\
R_2&<I(X_2;Y|X_1A'A\tilde{S}U),\\
R_2&<I(X_2;Y|X_1A'B'AB\tilde{S}U)+I(B';Y|A'AB\tilde{S}U)+I(B;Y|A\tilde{S}U)+I(U;Y),\\
R_1+R_2&<I(X_1X_2;Y|U\tilde{S})+I(U;Y).
\end{aligned}
$$

The proof essentially consists of: a) computing the left and right degrees of the message graph at each stage in Figure 8; b) ensuring that both encoders can decode $(A,B)$ (the edge from graph 8(b)) in each block; and c) ensuring that the decoder can decode $U$ in each block. We omit the formal proof, since it is an extended version of the arguments in Section V.

VII. CONCLUSION

We proposed a new single-letter achievable rate region for the two-user discrete memoryless MAC with noiseless feedback. This rate region is achieved through a block-Markov superposition coding scheme, based on the observation that the messages of the two users are correlated given the feedback. We can represent the messages of the two users as the left and right vertices of a bipartite graph. Before transmission, the graph is fully connected, i.e., the messages are independent. The idea is to use the feedback to thin the graph gradually, until it reduces to a set of disjoint edges. At this point, each encoder knows the message of the other, and they can cooperate to resolve the decoder's residual uncertainty. It is not clear whether this idea can be applied to a MAC with partial or noisy feedback: the difficulty lies in identifying common information between the encoders to summarize at the end of each block. However, this method of exploiting correlated information could be useful in other multi-terminal communication problems.

APPENDIX
COMPUTING THE SYMMETRIC SUM RATE

The random variables $U,A,B,X_1,X_2$ are all chosen to have binary alphabets. The stationary input distribution has the form $P_U\cdot P_{AB}\cdot P_{X_1|AU}\cdot P_{X_2|BU}$ and is defined as follows:
$$
P_U(0)=p_0,\qquad P_U(1)=p_1=1-p_0, \tag{51}
$$
$$
P_{AB}(1,1)=y,\qquad P_{AB}(0,1)=P_{AB}(1,0)=x,\qquad P_{AB}(0,0)=1-2x-y, \tag{52}
$$
$$
P_{X_1|UA}(1|u,0)=P_{X_2|UB}(1|u,0)=p_{u0},\qquad P_{X_1|UA}(1|u,1)=P_{X_2|UB}(1|u,1)=p_{u1},\qquad u\in\{0,1\}. \tag{53}
$$
Recall that $\tilde{S}=(\tilde{U},\tilde{A},\tilde{B},\tilde{Y})$. The distributions $Q_{A|\tilde{X}_1\tilde{S}}$ and $Q_{B|\tilde{X}_2\tilde{S}}$, which generate $A$ and $B$ using the feedback information, are defined as follows:
$$
Q_{A|\tilde{X}_1\tilde{S}}:\quad A=\begin{cases}1&\text{if }\tilde{X}_1\ne\tilde{Y},\\0&\text{if }\tilde{X}_1=\tilde{Y},\end{cases} \tag{54}
$$
$$
Q_{B|\tilde{X}_2\tilde{S}}:\quad B=\begin{cases}1&\text{if }\tilde{X}_2\ne\tilde{Y},\\0&\text{if }\tilde{X}_2=\tilde{Y}.\end{cases} \tag{55}
$$
For (54) and (55) to generate a joint distribution $P_{AB}$ as in (52), the consistency condition given by (2) needs to be satisfied. Thus we need
$$
P_{AB}(1,1)=y=P(\tilde{X}_1=1,\tilde{X}_2=1,\tilde{Y}=0), \tag{56}
$$
$$
P_{AB}(0,1)=x=P(\tilde{X}_1=0,\tilde{X}_2=1,\tilde{Y}=0)+P(\tilde{X}_1=1,\tilde{X}_2=0,\tilde{Y}=1), \tag{57}
$$
$$
P_{AB}(1,0)=x=P(\tilde{X}_1=1,\tilde{X}_2=0,\tilde{Y}=0)+P(\tilde{X}_1=0,\tilde{X}_2=1,\tilde{Y}=1). \tag{58}
$$
We can expand (56) as
$$
y=P(\tilde{X}_1=1,\tilde{X}_2=1)(1-q)=\sum_{u}p_u\big(y\,p_{u1}^2+2x\,p_{u0}p_{u1}+(1-2x-y)\,p_{u0}^2\big)(1-q). \tag{59}
$$
As $q\to 0$, the above condition becomes
$$
y=\sum_{u}p_u\big(y\,p_{u1}^2+2x\,p_{u0}p_{u1}+(1-2x-y)\,p_{u0}^2\big). \tag{60}
$$
Similarly, as $q\to 0$, (57) and (58) become
$$
x=\sum_{u}p_u\big[y(1-p_{u1})p_{u1}+x(1-p_{u1})p_{u0}+x(1-p_{u0})p_{u1}+(1-2x-y)(1-p_{u0})p_{u0}\big]. \tag{61}
$$
Equations (61) and (60) can be written in matrix form as
$$
\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}\sum_u p_u\,p_{u0}(1-p_{u0})\\[2pt]\sum_u p_u\,p_{u0}^2\end{pmatrix}, \tag{62}
$$
where
$$
\begin{aligned}
a_{11}&\triangleq 1-\sum_u p_u(p_{u1}-p_{u0})(1-2p_{u0}),\\
a_{12}&\triangleq \sum_u p_u\big[p_{u0}(1-p_{u0})-p_{u1}(1-p_{u1})\big],\\
a_{21}&\triangleq 2\sum_u p_u\,p_{u0}(p_{u0}-p_{u1}),\\
a_{22}&\triangleq 1-\sum_u p_u(p_{u1}^2-p_{u0}^2).
\end{aligned}
$$
Equation (62) uniquely determines $x$ and $y$ given the values of $p_u$, $p_{u0}$ and $p_{u1}$ for $u\in\{0,1\}$. Therefore the joint distribution is completely determined.

The information quantities: We calculate the information quantities in nats below, using $h(\cdot)$ to denote the binary entropy function in nats:
$$
h(x)=-x\ln x-(1-x)\ln(1-x),\qquad 0\le x\le 1. \tag{63}
$$
$$
\begin{aligned}
H(Y)&=h(2q(x+y)),\\
H(Y|U)&=\sum_{u=0}^{1}p_u\,h\big(2q((x+y)p_{u1}+(1-x-y)p_{u0})\big),\\
H(Y|X_1X_2)&=2x\,h(q)+y\,h(2q),\\
H(Y|ABU)&=\sum_{u=0}^{1}p_u\big[2x\,h(q(p_{u1}+p_{u0}))+y\,h(2qp_{u1})+(1-2x-y)\,h(2qp_{u0})\big],\\
H(Y|\tilde{Y}U)&=H(Y|U)+o(q),
\end{aligned}
$$
and
$$
\begin{aligned}
H(Y|X_2ABU)=\;&x\sum_u p_u\big[(1-p_{u0})h(qp_{u1})+p_{u0}h(q(1+p_{u1}))+(1-p_{u1})h(qp_{u0})+p_{u1}h(q(1+p_{u0}))\big]\\
&+y\sum_u p_u\big[(1-p_{u1})h(qp_{u1})+p_{u1}h(q(1+p_{u1}))\big]\\
&+(1-2x-y)\sum_u p_u\big[(1-p_{u0})h(qp_{u0})+p_{u0}h(q(1+p_{u0}))\big],\\
H(Y|UB\tilde{Y}\tilde{X}_2)=\;&\sum_u p_u\Big[(x+y)\,h\Big(q\Big(p_{u1}+\tfrac{xp_{u0}+yp_{u1}}{x+y}\Big)\Big)+(1-x-y)\,h\Big(q\Big(p_{u0}+\tfrac{(1-2x-y)p_{u0}+xp_{u1}}{1-x-y}\Big)\Big)\Big]+o(q),\\
H(Y|UBX_2\tilde{Y}\tilde{X}_2)=\;&\sum_u p_u(x+y)\Big[p_{u1}h\Big(q\Big(1+\tfrac{xp_{u0}+yp_{u1}}{x+y}\Big)\Big)+(1-p_{u1})h\Big(q\,\tfrac{xp_{u0}+yp_{u1}}{x+y}\Big)\Big]\\
&+\sum_u p_u(1-x-y)\Big[p_{u0}h\Big(q\Big(1+\tfrac{(1-2x-y)p_{u0}+xp_{u1}}{1-x-y}\Big)\Big)+(1-p_{u0})h\Big(q\,\tfrac{(1-2x-y)p_{u0}+xp_{u1}}{1-x-y}\Big)\Big]+o(q).
\end{aligned}
$$
Here $o(q)$ denotes any function such that $o(q)/q\to 0$ as $q\to 0$.

Using these in the rate constraints of (45), we can obtain the constraints for $R_1$ and $R_1+R_2$; due to the symmetry of the input distribution, the bound for $R_2$ is the same as that for $R_1$. Optimizing over $p_u,p_{u0},p_{u1}$ for $u\in\{0,1\}$, we obtain an achievable symmetric sum rate of $R_1+R_2=0.5132\,q+o(q)$ nats for
$$
\begin{gathered}
P(U=0)=p_0=0.0024,\qquad P(U=1)=1-p_0=0.9976,\\
P_{X_1|UA}(1|0,0)=p_{00}=0.791,\qquad P_{X_1|UA}(1|1,0)=p_{10}=\varepsilon\ \ (\text{a constant very close to }0),\\
P_{X_1|UA}(1|0,1)=p_{01}=0.861,\qquad P_{X_1|UA}(1|1,1)=p_{11}=0.996.
\end{gathered}
$$
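Since (62) is a 2x2 linear system, the consistent $(x,y)$ for any parameter choice can be computed directly. A minimal numpy sketch follows; the parameter values are the optimizing ones quoted above, with the near-zero $p_{10}$ set to an arbitrary small placeholder (an assumption, since the paper only states it is very close to 0).

```python
import numpy as np

# Parameters of the symmetric input distribution (values quoted above).
p_u = {0: 0.0024, 1: 0.9976}
p0 = {0: 0.791, 1: 1e-6}    # p_{u0}; p_{10} ~ 0 is an assumed placeholder
p1 = {0: 0.861, 1: 0.996}   # p_{u1}

# Coefficients of the 2x2 system (62).
a11 = 1 - sum(p_u[u] * (p1[u] - p0[u]) * (1 - 2 * p0[u]) for u in (0, 1))
a12 = sum(p_u[u] * (p0[u] * (1 - p0[u]) - p1[u] * (1 - p1[u])) for u in (0, 1))
a21 = 2 * sum(p_u[u] * p0[u] * (p0[u] - p1[u]) for u in (0, 1))
a22 = 1 - sum(p_u[u] * (p1[u] ** 2 - p0[u] ** 2) for u in (0, 1))

rhs = np.array([sum(p_u[u] * p0[u] * (1 - p0[u]) for u in (0, 1)),
                sum(p_u[u] * p0[u] ** 2 for u in (0, 1))])

x, y = np.linalg.solve(np.array([[a11, a12], [a21, a22]]), rhs)
print(f"x = {x:.4f}, y = {y:.4f}")  # consistent P_AB for this parameter choice
```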
ACKNOWLEDGEMENTS

We thank the anonymous reviewers and the associate editor for their valuable comments, which led to a significantly improved paper.

REFERENCES

[1] T. M. Cover and C. S. K. Leung, "An achievable rate region for the multiple-access channel with feedback," IEEE Trans. Inf. Theory, vol. 27, no. 3, pp. 292–298, 1981.
[2] R. Ahlswede, "Multi-way communication channels," in Proc. Second Int. Symp. Inform. Transmission, Tsahkadsor, Armenia, USSR, Hungarian Press, 1971.
[3] H. D. Liao, Multiple-Access Channels. PhD thesis, Univ. Hawaii, 1972.
[4] N. T. Gaarder and J. K. Wolf, "The capacity region of a multiple-access discrete memoryless channel can increase with feedback," IEEE Trans. Inf. Theory, vol. 21, no. 1, pp. 100–102, 1975.
[5] F. M. J. Willems, "The feedback capacity region of a class of discrete memoryless multiple access channels," IEEE Trans. Inf. Theory, vol. 28, no. 1, pp. 93–95, 1982.
[6] L. H. Ozarow, "The capacity of the white Gaussian multiple access channel with feedback," IEEE Trans. Inf. Theory, vol. 30, no. 4, pp. 623–628, 1984.
[7] M. Wigger, Cooperation on the Multiple-Access Channel. PhD thesis, Swiss Federal Institute of Technology, Zurich, 2008.
[8] J. P. M. Schalkwijk and T. Kailath, "A coding scheme for additive noise channels with feedback, Part II: Band-limited signals," IEEE Trans. Inf. Theory, vol. IT-12, pp. 183–189, April 1966.
[9] G. Kramer, Directed Information for Channels with Feedback. PhD thesis, Swiss Federal Institute of Technology, Zurich, 1998.
[10] G. Kramer, "Capacity results for the discrete memoryless network," IEEE Trans. Inf. Theory, vol. 49, pp. 4–20, January 2003.
[11] S. Bross and A. Lapidoth, "An improved achievable region for the discrete memoryless two-user multiple-access channel with noiseless feedback," IEEE Trans. Inf. Theory, vol. 51, pp. 811–833, March 2005.
[12] A. P. Hekstra and F. M. J. Willems, "Dependence balance bounds for single-output two-way channels," IEEE Trans. Inf. Theory, vol. 35, pp. 44–53, January 1989.
[13] R. Tandon and S. Ulukus, "Outer bounds for multiple access channels with feedback using dependence balance," IEEE Trans. Inf. Theory, vol. 55, pp. 4494–4507, October 2009.
[14] A. Anastasopoulos, "A sequential transmission scheme for the multiple access channel with noiseless feedback," in Proc. Allerton Conf. on Commun., Control, and Computing, Monticello, IL, 2009.
[15] F. M. J. Willems and E. C. van der Meulen, "Partial feedback for the discrete memoryless multiple access channel," IEEE Trans. Inf. Theory, vol. 29, pp. 287–290, March 1983.
[16] A. B. Carleial, "Multiple-access channels with different generalized feedback signals," IEEE Trans. Inf. Theory, vol. 28, no. 6, pp. 841–850, 1982.
[17] F. M. J. Willems, E. van der Meulen, and J. Schalkwijk, "Generalized feedback for the discrete memoryless multiple-access channel," in Proc. 21st Annual Allerton Conf. on Commun., Control, and Computing, Monticello, IL, pp. 284–292, 1983.
[18] M. Gastpar and G. Kramer, "On cooperation with noisy feedback," in Proc. International Zurich Seminar on Communications, pp. 146–149, 2006.
[19] A. Lapidoth and M. A. Wigger, "On the AWGN MAC with imperfect feedback," IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5432–5476, 2010.
[20] R. P. Stanley, Enumerative Combinatorics. Cambridge University Press, 2002.
[21] T. M. Cover, A. El Gamal, and M. Salehi, "Multiple access channels with arbitrarily correlated sources," IEEE Trans. Inf. Theory, vol. 26, no. 6, pp. 648–657, 1980.
[22] S. S. Pradhan, S. Choi, and K. Ramchandran, "A graph-based framework for transmission of correlated sources over multiple-access channels," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4583–4604, 2007.
[23] T. S. Han, "A general coding scheme for the two-way channel," IEEE Trans. Inf. Theory, vol. 30, no. 1, pp. 35–43, 1984.
[24] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. New York: Academic Press, 1981.
Ramji Venkataramanan received the B.Tech degree in Electrical Engineering from the Indian Institute of Technology, Madras, in 2002, and the Ph.D. degree in Electrical Engineering (Systems) from the University of Michigan, Ann Arbor, in 2008. He is currently a postdoctoral research associate at Yale University. His research interests include information theory, coding, and stochastic network theory.

S. Sandeep Pradhan obtained his M.E. degree from the Indian Institute of Science in 1996 and his Ph.D. from the University of California at Berkeley in 2001. From 2002 to 2008 he was an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, where he is currently an associate professor. He is the recipient of the 2001 Eliahu Jury award given by the University of California at Berkeley for outstanding research in the areas of systems, signal processing, communications and control, the CAREER award given by the National Science Foundation (NSF), and the Outstanding Achievement Award for the year 2009 from the University of Michigan. His research interests include sensor networks, multi-terminal communication systems, coding theory, quantization, and information theory.
