Robust Key Agreement Schemes
Terence Chan∗, Ning Cai† and Alex Grant∗
∗Institute for Telecommunications Research, University of South Australia
†Xidian University, China

Abstract—This paper considers a key agreement problem in which two parties aim to agree on a key by exchanging messages in the presence of adversarial tampering. The aim of the adversary is to disrupt the key agreement process, but there are no secrecy constraints (i.e., we do not insist that the key is kept secret from the adversary). The main results of the paper are coding schemes and bounds on maximum key generation rates for this problem.

I. INTRODUCTION

In many distributed collaborative algorithms or applications, it is required that each involved party shares a common random key or seed. For instance, in authentication [1] or secret communications [2], the client and the server may need to share a common private key. In another scenario, a common random seed may need to be shared by a group of cooperative users to run a distributed probabilistic algorithm. In such cases, key secrecy may not be important. In all of these examples, however, it is important that each party has the same key.

It is important to investigate methods for the generation and distribution of random keys. For example, in one scenario it may be required to "divide" a secret key into smaller pieces, for distribution to a group of users. The goal is to ensure that only legitimate groups of users, each of which holds one small piece of the secret, can reconstruct the secret key. This is the secret sharing problem [3]. In another scenario, two legitimate parties (and possibly an adversary) may observe correlated randomness. The objective is for the two parties to extract a common random key from their observations, by exchanging messages over a public channel.
The goal is to ensure that an adversary who observes all the messages exchanged over the public channel gains no knowledge about the agreed key [4].

The focus of this paper is on robust key agreement in the presence of adversarial tampering (i.e., the adversary can alter some of the messages exchanged between the legitimate parties during the key agreement process). We are interested in coding methods and key generation rates, where the only requirement is that the parties obtain the same key. We do not require that the key be kept secret from the adversary.

One approach to this problem is for one party to simply generate a key and then send it to the other party. Using this simple approach, the key agreement problem reduces to the standard problem of reliable communication. To ensure that the other party can reliably reconstruct the key in the presence of noise or tampering, the sender adds redundancy in the form of an error correction code [5]. Recently, the error correction problem was studied in the context of network coding [6]. Although this direct communication approach is very simple, we shall see that its application to key agreement can be suboptimal.

The organization of this paper is as follows. Section II provides the problem formulation. Section III focuses on zero-error key agreement, which is the worst-case scenario assuming adversaries have unbounded computational abilities. Section IV considers a weaker adversarial model in which adversaries can only make certain kinds of simple attacks.

Notation: vectors will be denoted by bold-faced lowercase letters whose entries are denoted by superscripts. For example, $\mathbf{x}$ is a vector, $x^1$ is its first entry, and $\mathbf{x}^{[i,j]}$ is the $i$th to $j$th entries of $\mathbf{x}$. In addition, define $A_m(n,d)$ as a maximum-rate $2^m$-ary code of length $n$ and minimum Hamming distance $d$.

II. PROBLEM FORMULATION

Consider a simple two-way network as depicted in Figure 1.
Alice and Bob aim to agree on a common random key by exchanging messages through the network. Eve is the adversary in the network, whose only objective is to prevent Alice and Bob from agreeing on a key. She attacks by replacing some of the exchanged messages. There is no requirement to keep the key secret from Eve.

Fig. 1. A two-way network ($n_1$ forward links from Alice to Bob, $n_2$ backward links from Bob to Alice).

We will mainly consider a two-round key agreement scenario. In the first round, Alice generates $n_1$ message vectors $\mathbf{x}_1, \ldots, \mathbf{x}_{n_1}$. Assume without loss of generality that all messages are binary vectors of length $m$. These are sent to Bob using the $n_1$ forward links (one for each message). Eve observes the messages and can replace some of them with messages of her own choosing. Let the $n_1$ messages received by Bob be denoted $\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}$.

In the second round, after receiving $\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}$, Bob generates $n_2$ message vectors $\mathbf{y}_1, \ldots, \mathbf{y}_{n_2}$. These are sent to Alice using the $n_2$ backward links. Again, Eve may observe and replace some of the messages. Let the $n_2$ messages received by Alice be denoted $\hat{\mathbf{y}}_1, \ldots, \hat{\mathbf{y}}_{n_2}$.

If Eve can attack every link, it is impossible for Alice and Bob to agree on a key. However, there are many scenarios of interest where it may be reasonable to assume that this is not possible (e.g., due to limited network access, or the use of special hardened links). Henceforth, we assume that Eve can attack at most $t$ links in total. In other words,

$$\sum_{i=1}^{n_1} d_H(\mathbf{x}_i, \hat{\mathbf{x}}_i) + \sum_{i=1}^{n_2} d_H(\mathbf{y}_i, \hat{\mathbf{y}}_i) \le t \quad (1)$$

where $d_H(\cdot)$ is a Hamming distortion function with

$$d_H(\mathbf{x}, \mathbf{y}) = \begin{cases} 0 & \mathbf{x} = \mathbf{y} \\ 1 & \mathbf{x} \ne \mathbf{y}, \end{cases}$$

i.e., two distinct vectors are at distance 1, regardless of how many elements disagree.

After these two rounds of message exchange (forward and backward), Alice and Bob make independent decisions on their random key. Let $g_a$ and $g_b$ be their (key-)decoding functions respectively. A key agreement error occurs if

$$g_a(\mathbf{x}_1, \ldots, \mathbf{x}_{n_1}, \hat{\mathbf{y}}_1, \ldots, \hat{\mathbf{y}}_{n_2}) \ne g_b(\mathbf{y}_1, \ldots, \mathbf{y}_{n_2}, \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}).$$

A key agreement scheme is specified by the encoders and decoders used by Alice and Bob. We shall use a probabilistic setting. Alice's encoder $E_a$ is specified by a probability distribution $\Pr(\mathbf{x}_1, \ldots, \mathbf{x}_{n_1})$, which governs how Alice generates the first round of messages. Bob's encoder $E_b$, however, is specified by a conditional distribution $\Pr(\mathbf{y}_1, \ldots, \mathbf{y}_{n_2} \mid \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1})$, which determines how the second round of messages should be generated after receiving the possibly corrupted messages from Alice. A key agreement scheme will be denoted by the tuple $(m, n_1, n_2, E_a, E_b, g_a, g_b)$, or simply $(E_a, E_b, g_a, g_b)$ if $m, n_1, n_2$ are understood.

Eve's attack is specified by a pair of conditional probability distributions

$$\Pr(\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1} \mid \mathbf{x}_1, \ldots, \mathbf{x}_{n_1}) \quad (2)$$

and

$$\Pr(\hat{\mathbf{y}}_1, \ldots, \hat{\mathbf{y}}_{n_2} \mid \mathbf{y}_1, \ldots, \mathbf{y}_{n_2}, \mathbf{x}_1, \ldots, \mathbf{x}_{n_1}, \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}). \quad (3)$$

These distributions must satisfy the constraint (1). Let

$$K_1 = g_a(\mathbf{x}_1, \ldots, \mathbf{x}_{n_1}, \hat{\mathbf{y}}_1, \ldots, \hat{\mathbf{y}}_{n_2}),$$
$$K_2 = g_b(\mathbf{y}_1, \ldots, \mathbf{y}_{n_2}, \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}).$$

The probability distribution of $K_1$ and $K_2$ depends on Eve's attacking strategy. For a given attacking strategy $E$, let

$$H_E(K_1 \mid K_1 = K_2) \triangleq -\sum_{k_1} \Pr(K_1 = k_1 \mid K_1 = K_2) \log \Pr(K_1 = k_1 \mid K_1 = K_2),$$

where $\Pr(K_1 = k_1 \mid K_1 = K_2)$ is the conditional probability that $K_1 = k_1$ given the event that $K_1 = K_2$. Let $\mathcal{A}_E$ be the set of attacking strategies that Eve can choose (i.e., the set of pairs of conditional distributions (2) and (3) satisfying (1)). We define the key agreement rate (for a given key agreement scheme) as

$$\min_{E \in \mathcal{A}_E} H_E(K_1 \mid K_1 = K_2).$$

III. ZERO-ERROR KEY AGREEMENT

The objective of zero-error key agreement is for Alice and Bob to generate identical keys with probability one at some positive rate.
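The two-round setting above can be paraphrased as a small simulation harness. This is only a sketch: the function names (`attack_some`, `run_two_rounds`) and the trivial $t = 0$ instantiation are ours, not the paper's.

```python
import random

# Minimal harness for the two-round exchange of Section II.  Eve replaces
# whole messages; constraint (1) caps her total replacements across both
# rounds at t.  All names here are illustrative.

def attack_some(msgs, budget, rng):
    """Eve replaces up to `budget` of the messages with random ones."""
    msgs = list(msgs)
    hit = rng.sample(range(len(msgs)), min(budget, len(msgs)))
    for i in hit:
        msgs[i] = tuple(rng.randint(0, 1) for _ in msgs[i])
    return msgs, len(hit)

def run_two_rounds(encode_a, encode_b, g_a, g_b, t, rng):
    xs = encode_a(rng)                               # round 1: Alice -> Bob
    xs_hat, used = attack_some(xs, rng.randint(0, t), rng)
    ys = encode_b(xs_hat, rng)                       # round 2: Bob -> Alice
    ys_hat, used2 = attack_some(ys, t - used, rng)
    assert used + used2 <= t                         # Eve respects (1)
    return g_a(xs, ys_hat), g_b(ys, xs_hat)          # keys (K1, K2)

# Trivial instantiation with t = 0 (no tampering): Alice just sends a key.
rng = random.Random(0)
m = 4
enc_a = lambda r: [tuple(r.randint(0, 1) for _ in range(m))]   # n1 = 1
enc_b = lambda xs_hat, r: []                                   # n2 = 0
k1, k2 = run_two_rounds(enc_a, enc_b,
                        lambda xs, ys_hat: xs[0],
                        lambda ys, xs_hat: xs_hat[0],
                        t=0, rng=rng)
assert k1 == k2
```

With $t = 0$ the scheme degenerates to direct key transmission; the schemes in Section III fill in `encode_a`/`encode_b` with error correcting codes so that the assertion still holds under tampering.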
Definition 1: For given positive integers $n_1, n_2, m$, the key rate $R$ is called zero-error admissible if there exists a key agreement scheme $(E_a, E_b, g_a, g_b)$ such that (1) the probability of key agreement error is zero for all attacking strategies that Eve can choose, and (2) $R \le \min_{E \in \mathcal{A}_E} H_E(K_1)$. The zero-error key agreement capacity is the supremum of all zero-error admissible rates.

The natural fundamental question is: what is the zero-error key agreement capacity? In this paper, we will give lower bounds for the zero-error key agreement capacity, and simple schemes that achieve the lower bounds.

Theorem 1: If $t \ge \max(n_1, n_2)$, then the zero-error key agreement capacity is 0.

Proof sketch: Since $t \ge n_1, n_2$, no matter which messages Alice and Bob send, Eve can replace them with any other messages. If the probability of key agreement error is zero, then the key that Alice and Bob agree on must be independent of $\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}$ and $\hat{\mathbf{y}}_1, \ldots, \hat{\mathbf{y}}_{n_2}$. As such, the agreed key can only be a constant.

A. Examples

We will now develop some small examples that provide motivation for a general coding scheme. If messages can be sent only in one direction (i.e., either $n_1$ or $n_2$ is zero), then key agreement is equivalent to transmission of a random key from one party to another. When messages can be sent in both directions, we can naively decouple the two rounds of message transmissions into two rounds of random key transmissions as follows.

Example 1 (Direct key transmission): Suppose $n_1 = n_2 = 3$ and $t = 1$. Let $\mathcal{C}_a = \{(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3) : \mathbf{x}_1 = \mathbf{x}_2 = \mathbf{x}_3\}$ and let

$$\Pr(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3) \triangleq \begin{cases} 1/2^m & \text{if } (\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3) \in \mathcal{C}_a \\ 0 & \text{otherwise.} \end{cases}$$

Since $\mathcal{C}_a$ has minimum distance 3, no matter how Eve attacks, Bob can reconstruct $\mathbf{x}_1$ without error. Note that if the minimum distance of $\mathcal{C}_a$ were less than 3, Bob might fail to correctly reconstruct $\mathbf{x}_1$.
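The forward round just described is a length-3 repetition code decoded by majority vote over whole $m$-bit messages; a minimal sketch (variable names ours):

```python
import random
from collections import Counter

rng = random.Random(1)
m, t = 8, 1

# Alice: pick a uniform m-bit key and repeat it on all 3 forward links.
x = tuple(rng.randint(0, 1) for _ in range(m))
sent = [x, x, x]                      # codeword of C_a (min distance 3)

# Eve: replace any one link (t = 1) with an arbitrary message.
received = list(sent)
received[2] = tuple(rng.randint(0, 1) for _ in range(m))

# Bob: majority vote over the three received messages recovers x, since
# at most one of them differs from the two untouched copies.
decoded, _ = Counter(received).most_common(1)[0]
assert decoded == x
```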
Similarly, let $\mathcal{C}_b = \{(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3) : \mathbf{y}_1 = \mathbf{y}_2 = \mathbf{y}_3\}$ and

$$\Pr(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3 \mid \hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \hat{\mathbf{x}}_3) \triangleq \begin{cases} 1/2^m & \text{if } (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3) \in \mathcal{C}_b \\ 0 & \text{otherwise} \end{cases}$$

for all $(\hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \hat{\mathbf{x}}_3)$. Again, Alice can reconstruct $\mathbf{y}_1$ without error, no matter how Eve attacks. Finally, Alice and Bob can use $(\mathbf{x}_1, \mathbf{y}_1)$ as the common random key, whose entropy is $2m$.

The above scheme essentially consists of two one-round key transmission schemes. The resulting key consists of two random parts, one generated by Alice (and sent to Bob) and one generated by Bob (and sent to Alice). Despite its simplicity, this scheme is not optimal, as shown by the following example.

Example 2: Suppose $n_1 = n_2 = 3$ and $t = 1$. Let $\mathcal{C}_a$ be an $A_m(3,2)$ code and

$$\Pr(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3) \triangleq \begin{cases} 1/|\mathcal{C}_a| & \text{if } (\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3) \in \mathcal{C}_a \\ 0 & \text{otherwise.} \end{cases}$$

Note that $\mathcal{C}_a$ has minimum distance 2. Therefore, if Eve attacks one of the forward links, Bob can always detect it but not necessarily correct it. Consider the codebooks $\mathcal{C}^*_{b,0} = A_{m-1}(3,3)$ and $\mathcal{C}^*_{b,1} = A_{m-1}(3,1)$. Let

$$\mathcal{C}_{b,0} = \left\{(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3) : y_1^1 = y_2^1 = y_3^1 = 0 \text{ and } (\mathbf{y}_1^{[2,m]}, \mathbf{y}_2^{[2,m]}, \mathbf{y}_3^{[2,m]}) \in \mathcal{C}^*_{b,0}\right\},$$
$$\mathcal{C}_{b,1} = \left\{(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3) : y_1^1 = y_2^1 = y_3^1 = 1 \text{ and } (\mathbf{y}_1^{[2,m]}, \mathbf{y}_2^{[2,m]}, \mathbf{y}_3^{[2,m]}) \in \mathcal{C}^*_{b,1}\right\}.$$

If Bob does not detect any errors (i.e., $(\hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \hat{\mathbf{x}}_3) \in \mathcal{C}_a$), then

$$\Pr(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3 \mid \hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \hat{\mathbf{x}}_3) \triangleq \begin{cases} 1/|\mathcal{C}_{b,0}| & \text{if } (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3) \in \mathcal{C}_{b,0} \\ 0 & \text{otherwise.} \end{cases}$$

Otherwise, if an error is detected,

$$\Pr(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3 \mid \hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \hat{\mathbf{x}}_3) \triangleq \begin{cases} 1/|\mathcal{C}_{b,1}| & \text{if } (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3) \in \mathcal{C}_{b,1} \\ 0 & \text{otherwise.} \end{cases}$$

After receiving $\hat{y}_1^1, \hat{y}_2^1, \hat{y}_3^1$, Alice can reconstruct $y_1^1$. Therefore, Alice will know which codebook Bob used. It is easy to see that $(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3)$ can also be reconstructed perfectly.
Finally, Alice and Bob agree on $K = (k_o, k_a, k_b)$ such that

- $k_o = y_1^1$, which indicates whether errors were detected in the forward links;
- $k_a = 0$ if $k_o = 1$; otherwise, $k_a = (\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3)$;
- $k_b = (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3)$.

It is straightforward to prove that the probability of key agreement error is zero and that the entropy of the key $K$ is at least

$$\min\bigl(\log|A_m(3,2)| + \log|A_{m-1}(3,3)|,\ \log|A_{m-1}(3,1)|\bigr).$$

When $m$ is sufficiently large, the Singleton bound is tight, and hence the entropy of $K$ is at least $3m - 1$. Compared with the key agreement rate in Example 1, a 50% gain is achieved.

From the above example, it is easy to see that the direct key transmission scheme in Example 1 is suboptimal because Bob does not use his received messages to estimate how many forward links were attacked by Eve. As a result, Bob has to pessimistically protect his messages, assuming that Eve can attack $t$ backward links.

Although the key agreement scheme in Example 2 may appear to be a modified direct key transmission scheme, there are some subtle differences. Using direct key transmission (multiple one-round key distribution sessions), the agreed key consists of two random parts, one from Alice and one from Bob. The entropy of the agreed key is the same no matter how Eve attacks. In the scheme detailed in Example 2, on the other hand, the size of the key depends on how Eve attacks. For instance, if Eve attacks the forward link, then the entropy of the resulting key is the largest. Furthermore, in this case, the key is essentially generated solely by Bob.

In this paper, we are not concerned with the source of randomness. However, in some other scenarios it may be of practical concern. For example, suppose there is another adversary who can "observe" how Bob generates the random messages $(\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3)$. Then it may be a problem that this adversary knows the key completely.
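Example 2 can be checked end-to-end with concrete stand-ins for the codes (our choices, not the paper's): a single-parity-check code $(\mathbf{a}, \mathbf{b}, \mathbf{a} \oplus \mathbf{b})$ for $A_m(3,2)$, a repetition code for $A_{m-1}(3,3)$, and unconstrained payloads for $A_{m-1}(3,1)$. The sketch verifies zero-error agreement whether Eve attacks nothing, one forward link, or one backward link ($t = 1$):

```python
import random
from collections import Counter

rng = random.Random(2)
m = 6  # each link carries an m-bit message

def rand_bits(n):
    return tuple(rng.randint(0, 1) for _ in range(n))

def xor(a, b):
    return tuple(u ^ v for u, v in zip(a, b))

def majority(items):
    return Counter(items).most_common(1)[0][0]

def run(case):
    # Round 1: Alice sends a codeword of C_a = {(a, b, a XOR b)} (min dist 2).
    a, b = rand_bits(m), rand_bits(m)
    xs = [a, b, xor(a, b)]
    xs_hat = list(xs)
    if case == 'forward':
        fake = rand_bits(m)
        while fake == xs_hat[0]:          # replace one forward message
            fake = rand_bits(m)
        xs_hat[0] = fake

    # Bob: the parity check detects any single replaced link.
    flag = 0 if xs_hat[2] == xor(xs_hat[0], xs_hat[1]) else 1
    if flag == 0:
        payload = rand_bits(m - 1)
        ys = [(0,) + payload] * 3         # C_{b,0}: repetition, corrects 1 error
    else:                                 # C_{b,1}: Eve's budget is spent,
        ys = [(1,) + rand_bits(m - 1) for _ in range(3)]  # no protection needed
    ys_hat = list(ys)
    if case == 'backward':                # Eve spends her budget here instead
        ys_hat[1] = tuple(1 ^ v for v in ys_hat[1])

    # Alice: recover the flag bit by majority, then the backward messages.
    flag_hat = majority(v[0] for v in ys_hat)
    if flag_hat == 0:
        y = majority(ys_hat)              # repetition decoding
        k_b, k_a = (y, y, y), tuple(xs)
    else:
        k_b, k_a = tuple(ys_hat), 0       # backward links were untouched
    K1 = (flag_hat, k_a, k_b)

    # Bob's key, formed from what he sent and received.
    K2 = (flag, 0 if flag else tuple(xs_hat), tuple(ys))
    return K1, K2

for case in ('none', 'forward', 'backward'):
    K1, K2 = run(case)
    assert K1 == K2
```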
The following is another interesting example, in which direct key transmission fails altogether but the key agreement capacity is nonzero.

Example 3 ($n_1 = n_2 = 2$ and $t = 1$): Let $\mathcal{C}_a = A_m(2,2)$ and

$$\Pr(\mathbf{x}_1, \mathbf{x}_2) \triangleq \begin{cases} 1/|\mathcal{C}_a| & \text{if } (\mathbf{x}_1, \mathbf{x}_2) \in \mathcal{C}_a \\ 0 & \text{otherwise.} \end{cases}$$

Again, if Eve attacks a forward link, Bob can detect it but not correct it. Consider codebooks $\mathcal{C}^*_{b,0} = A_{m-1}(2,2)$ and $\mathcal{C}^*_{b,1} = A_{m-1}(2,1)$. Let

$$\mathcal{C}_{b,0} = \left\{(\mathbf{y}_1, \mathbf{y}_2) : y_1^1 = y_2^1 = 0 \text{ and } (\mathbf{y}_1^{[2,m]}, \mathbf{y}_2^{[2,m]}) \in \mathcal{C}^*_{b,0}\right\},$$
$$\mathcal{C}_{b,1} = \left\{(\mathbf{y}_1, \mathbf{y}_2) : y_1^1 = y_2^1 = 1 \text{ and } (\mathbf{y}_1^{[2,m]}, \mathbf{y}_2^{[2,m]}) \in \mathcal{C}^*_{b,1}\right\}.$$

If Bob does not detect any errors (i.e., $(\hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2) \in \mathcal{C}_a$), then

$$\Pr(\mathbf{y}_1, \mathbf{y}_2 \mid \hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2) \triangleq \begin{cases} 1/|\mathcal{C}_{b,0}| & \text{if } (\mathbf{y}_1, \mathbf{y}_2) \in \mathcal{C}_{b,0} \\ 0 & \text{otherwise.} \end{cases}$$

Otherwise,

$$\Pr(\mathbf{y}_1, \mathbf{y}_2 \mid \hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2) \triangleq \begin{cases} 1/|\mathcal{C}_{b,1}| & \text{if } (\mathbf{y}_1, \mathbf{y}_2) \in \mathcal{C}_{b,1} \\ 0 & \text{otherwise.} \end{cases}$$

As before, we can easily show that the resulting key agreement capacity is at least $\log|\mathcal{C}_a| = m$.

B. Generalization

We will now generalize Examples 2 and 3 to arbitrary $n_1$, $n_2$ and $t$. Let $\ell = \lceil \log(t+1) \rceil$, and let $\Omega_m(d, t_1)$ be defined as follows (we do not explicitly indicate the dependency of $\Omega_m(d, t_1)$ on $n_1, n_2, t$, to simplify notation):

$$\Omega_m(d, t_1) = \begin{cases} \log |A_m(n_1, d)|\,|A_{m-\ell}(n_2, 2t+1)| & d > t + t_1 \\ \log |A_{m-\ell}(n_2, 2(t - t_1) + 1)| & \text{otherwise.} \end{cases}$$

Theorem 2 (Inner bound): Suppose that $n_2 > 2t$. Then the zero-error key agreement capacity is at least

$$\max_{n_1 \ge d > t} \min\bigl(\Omega_m(d, 0), \Omega_m(d, d-t)\bigr). \quad (4)$$

Proof: Let $\mathcal{C}_a$ be an $A_m(n_1, d)$ code and

$$\Pr(\mathbf{x}_1, \ldots, \mathbf{x}_{n_1}) \triangleq \begin{cases} 1/|\mathcal{C}_a| & \text{if } (\mathbf{x}_1, \ldots, \mathbf{x}_{n_1}) \in \mathcal{C}_a \\ 0 & \text{otherwise.} \end{cases}$$

Let $t_1$ be the number of forward links that Eve attacks. If $t_1 < d - t$, then Bob can reconstruct $(\mathbf{x}_1, \ldots, \mathbf{x}_{n_1})$ perfectly. Otherwise, Bob can deduce that at least $d - t$ forward links have been attacked by Eve. Any integer $i$ between 0 and $t$ can easily be represented using $\ell$ bits.
For each $i$, let $\mathcal{C}^*_{b,i} = A_{m-\ell}(n_2, 2(t-i)+1)$ and

$$\mathcal{C}_{b,i} \triangleq \left\{(\mathbf{y}_1, \ldots, \mathbf{y}_{n_2}) : \mathbf{y}_1^{[1,\ell]} = \cdots = \mathbf{y}_{n_2}^{[1,\ell]} = i \text{ and } (\mathbf{y}_1^{[\ell+1,m]}, \ldots, \mathbf{y}_{n_2}^{[\ell+1,m]}) \in \mathcal{C}^*_{b,i}\right\}.$$

Suppose that Bob can detect and correct the errors (i.e., $(\hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1})$ is within a distance of $d - t - 1$ from a codeword in $\mathcal{C}_a$); then he can also determine the number $i$ of forward links that were attacked by Eve. In that case, let

$$\Pr(\mathbf{y}_1, \ldots, \mathbf{y}_{n_2} \mid \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}) = \begin{cases} 1/|\mathcal{C}_{b,i}| & \text{if } (\mathbf{y}_1, \ldots, \mathbf{y}_{n_2}) \in \mathcal{C}_{b,i} \\ 0 & \text{otherwise.} \end{cases}$$

Similarly, if Bob determines that at least $d - t$ errors occurred in the forward links, then

$$\Pr(\mathbf{y}_1, \ldots, \mathbf{y}_{n_2} \mid \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_{n_1}) = \begin{cases} 1/|\mathcal{C}_{b,d-t}| & \text{if } (\mathbf{y}_1, \ldots, \mathbf{y}_{n_2}) \in \mathcal{C}_{b,d-t} \\ 0 & \text{otherwise.} \end{cases}$$

Let $K = (k_o, k_a, k_b)$ such that

- $k_o = \mathbf{y}_1^{[1,\ell]}$, which is the number of errors (or attacks) that occurred in the forward links;
- $k_a = 0$ if $k_o = d - t$; otherwise, $k_a = (\mathbf{x}_1, \ldots, \mathbf{x}_{n_1})$;
- $k_b = (\mathbf{y}_1, \ldots, \mathbf{y}_{n_2})$.

It is straightforward to prove that $K$ is known to both Alice and Bob, and that the entropy of the common key $K$ is at least

$$H(K) \ge \min_{0 \le t_1 \le t} \Omega_m(d, t_1) \quad (5)$$
$$= \min\bigl(\Omega_m(d, 0), \Omega_m(d, d-t)\bigr), \quad (6)$$

and the result then follows.

Above, we considered only two-round key agreement schemes and obtained inner bounds on the rates of the agreed key. We can easily extend the bounds to multi-round scenarios. Define $R_{w, n_1, \ldots, n_w, t, m}$ as the key agreement capacity in a $w$-round key agreement scenario in which (1) the number of messages that can be sent in the $i$th round is $n_i$, (2) the maximum number of links that can be attacked by Eve is $t$, and (3) each message is a binary vector of length $m$.

Theorem 3: Suppose $n_1, \ldots, n_w > 2t + 1$. Then for any $d$ such that $2t \ge d > t$, $R_{w, n_1, \ldots, n_w, t, m}$ is at least

$$\min\bigl(\log |A_{m-\ell}(n_1, d)| + R_{w-1, n_2, \ldots, n_w, t, m-\ell},\ R_{w-1, n_2, \ldots, n_w, 2t-d, m-\ell}\bigr) \quad (7)$$

where $\ell = \lceil \log(t+1) \rceil$.
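Under the assumption that the maximum-rate codes meet the Singleton bound with equality, i.e. $\log|A_m(n,d)| = m(n-d+1)$ (exact for the repetition and parity-check codes used in the examples, and approached for large $m$ in general), the inner bound (4)–(5) can be evaluated numerically. The helper names below are ours:

```python
from math import ceil, log2

def log_A(m, n, d):
    """log|A_m(n, d)| under the Singleton/MDS assumption (an assumption,
    not a theorem: maximum-rate codes need not be MDS for small m)."""
    return m * (n - d + 1)

def omega(m, n1, n2, t, d, t1):
    """Omega_m(d, t1) from Section III-B."""
    ell = ceil(log2(t + 1))
    if d > t + t1:
        return log_A(m, n1, d) + log_A(m - ell, n2, 2 * t + 1)
    return log_A(m - ell, n2, 2 * (t - t1) + 1)

def inner_bound(m, n1, n2, t):
    """Evaluate max over n1 >= d > t of min over 0 <= t1 <= t of
    Omega_m(d, t1), following (4)-(5)."""
    assert n2 > 2 * t
    return max(min(omega(m, n1, n2, t, d, t1) for t1 in range(t + 1))
               for d in range(t + 1, n1 + 1))
```

For the parameters of Example 2 ($n_1 = n_2 = 3$, $t = 1$) the maximum is attained at $d = 2$; e.g. `inner_bound(8, 3, 3, 1)` evaluates to $3m - 3 = 21$ bits under this MDS assumption, versus $2m = 16$ bits for direct key transmission.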
Remark: By replacing the key agreement capacity terms in (7) with their corresponding inner bounds, we can obtain inner bounds for the multi-round key agreement capacity from Theorems 2 and 3.

IV. RANDOM ERRORS

In the previous section, we considered the worst-case scenario in which no errors are allowed in key agreement. Even if Eve attacks links randomly, there is still a small but positive probability that she may choose the most damaging attack. In fact, in this worst-case scenario, we can even assume that Eve has knowledge of the messages sent by Alice prior to attacking. We will now relax our model to allow small errors, and assume that it is infeasible for Eve to determine which attack is the most damaging. More specifically, Eve can only decide on the number of links to be attacked in each direction, but not explicitly which links. We will consider the asymptotic case: $n_1 = \lambda_1 r$, $n_2 = \lambda_2 r$, and $t = \tau r$, where $r$ approaches infinity. Also, each link can transmit either zero or one (i.e., $m = 1$).

Definition 2: A normalized key agreement rate $R$ is $\epsilon$-error admissible (with respect to given $\lambda_1, \lambda_2$ and $\tau$) if there exists a sequence of key agreement schemes $(1, n_1, n_2, t, E_a, E_b, g_a, g_b)$, indexed by $r$ (for notational simplicity, we do not indicate the dependency explicitly), such that

1) $\lim_{r \to \infty} n_1/r = \lambda_1$, $\lim_{r \to \infty} n_2/r = \lambda_2$ and $\lim_{r \to \infty} t/r = \tau$;
2) $R \le \min_{E \in \mathcal{A}_E} H_E(K_1 \mid K_1 = K_2)/r$;
3) the probability of key agreement failure, denoted $P_e(1, n_1, n_2, t, E_a, E_b, g_a, g_b)$, goes to zero as $r$ goes to infinity.

The normalized $\epsilon$-error key agreement capacity (for given $\lambda_1, \lambda_2$ and $\tau$) is the supremum of the $\epsilon$-error admissible rates $R$. In the following, we will obtain a lower bound for this capacity.

Definition 3 (Combinatorial Binary Symmetric Channel): A $CBS(\epsilon)$ channel takes binary inputs and gives binary outputs. Let $(X_1, \ldots, X_n)$ be the $n$ input symbols to the channel and $(\hat{X}_1, \ldots, \hat{X}_n)$ be the $n$ output symbols. The channel inputs and outputs are related by

$$(\hat{X}_1, \ldots, \hat{X}_n) = (X_1, \ldots, X_n) \oplus (E_1, \ldots, E_n),$$

where $(E_1, \ldots, E_n)$ is a binary error vector, independent of the inputs and uniformly distributed over $\{(e_1, \ldots, e_n) : d_H(e_1, \ldots, e_n) \le \epsilon n\}$, and $\epsilon < 1/2$ is a channel parameter.

The $CBS(\epsilon)$ channel is not memoryless, but it behaves like a memoryless binary symmetric channel with crossover probability $\epsilon$ for sufficiently large $n$.

Let $\mathcal{X} = \{0, 1\}$. Consider a rate-$s$ error correcting/detecting code. The encoder is a mapping $f : \{1, \ldots, 2^{ns}\} \mapsto \mathcal{X}^n$ and the decoder is a mapping $g : \mathcal{X}^n \mapsto \{0, 1, \ldots, 2^{ns}\}$, where decoder output zero means that the decoder fails to correct errors. Suppose that the transmitted codeword is $f(i)$. A correction failure means that the decoder output is not $i$, and a detection failure means that the output is neither $i$ nor 0. Clearly, the probabilities of correction and detection failures depend on the channel model. In this paper, we focus on CBS channels. For a given error correcting/detecting code $\mathcal{C}$, let $P_e^c(\mathcal{C}, \xi)$ and $P_e^d(\mathcal{C}, \xi)$ be, respectively, the probabilities of correction and detection failures when the channel is a $CBS(\xi)$ channel.

Proposition 1 (Achievability): Fix $\xi < 1/2$. Let

$$I(\xi) \triangleq 1 + \xi \log \xi + (1 - \xi) \log(1 - \xi).$$

For any $s < I(\xi)$, we can construct a sequence of rate-$s_n$ error correcting/detecting codes $\mathcal{C}_n$ such that

$$\liminf_{n \to \infty} s_n \ge s, \quad (8)$$
$$\lim_{n \to \infty} P_e^c(\mathcal{C}_n, \epsilon) = 0 \quad \text{for } \epsilon \le \xi, \quad (9)$$
$$\lim_{n \to \infty} P_e^d(\mathcal{C}_n, \epsilon) = 0 \quad \forall \epsilon. \quad (10)$$

Proof: The sequence of codes $\mathcal{C}_n$ is randomly constructed as follows; the proof of (9) and (10) is straightforward and will be omitted. The encoder $f$ is a randomly selected mapping $f : \{1, \ldots, 2^{ns}\} \mapsto \mathcal{X}^n$ such that each symbol in each codeword $f(i)$ is independently and uniformly distributed over $\{0, 1\}$.
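The quantity $I(\xi)$ is simply $1 - h_b(\xi)$, where $h_b$ is the binary entropy function, i.e. the capacity of a binary symmetric channel with crossover probability $\xi$. A small sketch (function names ours):

```python
from math import log2

def binary_entropy(p):
    """h_b(p) in bits, with the convention 0*log(0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def I(xi):
    """I(xi) = 1 + xi*log(xi) + (1-xi)*log(1-xi) = 1 - h_b(xi)."""
    return 1.0 - binary_entropy(xi)

# I(.) decreases from 1 at xi = 0 (noiseless) to 0 at xi = 1/2,
# mirroring the BSC capacity that CBS(xi) approaches for large n.
assert I(0.0) == 1.0 and abs(I(0.5)) < 1e-12
```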
The decoder $g$ is a "bounded distance decoder" $g : \mathcal{X}^n \mapsto \{0, 1, \ldots, 2^{ns}\}$. For any received sequence $(\hat{X}_1, \ldots, \hat{X}_n)$, if there exists a unique $f(i)$ such that the Hamming weight of $(\hat{X}_1, \ldots, \hat{X}_n) - f(i)$ is less than $n\xi$, then the decoder output is $i$; otherwise, $g(\hat{X}_1, \ldots, \hat{X}_n) = 0$.

Proposition 1 proves the existence of error correcting/detecting codes that can correct a $\xi$-fraction of errors. We can use these codes to construct key agreement schemes as before. As a result, we obtain the following bound on the $\epsilon$-error key agreement capacity.

Theorem 4 (Inner bound): Assume $\tau/\lambda_2 < 1/2$. Let $\gamma = \tau - \lambda_1 \xi$. Then the key agreement rate is at least

$$\max_{\xi \le \tau/\lambda_1} \min\bigl(\lambda_2 I(\gamma/\lambda_2),\ \lambda_1 I(\xi) + \lambda_2 I(\tau/\lambda_2)\bigr). \quad (11)$$

Proof: Let $r\nu$ be the number of links attacked in the forward direction. By Proposition 1, for sufficiently large $r$, there exists a code with rate close to $I(\xi)$ which, with high probability, can correct any $n_1 \xi = r\lambda_1 \xi$ errors and detect any number of errors. The assumption $\tau/\lambda_2 < 1/2$ guarantees that Bob can successfully inform Alice whether he can correctly decode his received message. If Bob's decoder can correct the errors, then he and Alice both know $(X_1, \ldots, X_{n_1})$. Otherwise, Bob can determine that Eve has attacked at least $r\lambda_1 \xi$ links, and hence that she can attack at most $r\gamma \triangleq r\tau - r\lambda_1 \xi$ of the backward links. Bob can then transmit $r\lambda_2 I(\gamma/\lambda_2)$ bits of random key to Alice. Depending on the number of forward-link attacks made by Eve, the key agreement rate is given by

$$\Omega(\xi, \nu) = \begin{cases} \lambda_1 I(\xi) + \lambda_2 I(\tau/\lambda_2) & \text{if } \nu < \lambda_1 \xi \\ \lambda_2 I(\gamma/\lambda_2) & \text{if } \nu \ge \lambda_1 \xi. \end{cases}$$

Alice and Bob can agree on a common random key at rate no less than

$$\max_{\xi \le \tau/\lambda_1} \min_{0 \le \nu \le \tau} \Omega(\xi, \nu). \quad (12)$$

By the monotonicity of the function $I(\cdot)$, we can further reduce (12) to (11), and hence the result follows.
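The bounded distance decoder used in the proof of Proposition 1 can be sketched as follows. To keep the demonstration deterministic, we substitute a small hand-picked codebook of minimum distance 8 for the random $f$ (our choice, not the paper's), so that decoding strictly within radius $n\xi = 4$ is guaranteed to be unique:

```python
# Codebook standing in for the random encoder f; with n = 16 and xi = 0.25
# the minimum distance 8 guarantees unique bounded distance decoding.
n, xi = 16, 0.25
codebook = {
    1: (0,) * 16,
    2: (1,) * 16,
    3: (0,) * 8 + (1,) * 8,
    4: (1,) * 8 + (0,) * 8,
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def decode(word):
    """Bounded distance decoder g: output i if a unique codeword f(i)
    lies strictly within radius n*xi of `word`; output 0 on failure."""
    near = [i for i, c in codebook.items() if hamming(word, c) < n * xi]
    return near[0] if len(near) == 1 else 0

# Flip 3 bits (fewer than n*xi = 4): correction succeeds.
noisy = list(codebook[3])
for j in (0, 5, 12):
    noisy[j] ^= 1
assert decode(tuple(noisy)) == 3

# A word 4 flips away from codeword 1 (and no closer to any other)
# is outside every decoding ball, so the failure symbol 0 is returned.
assert decode((1,) * 4 + (0,) * 12) == 0
```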
Note: If Alice and Bob share a small private key that is unknown to Eve, they can use the private key so that any attack made by Eve is no better than a random attack.

V. CONCLUSION

In this paper, we considered a key (or seed) agreement problem in which two parties aim to agree on a key by exchanging messages in the presence of adversarial tampering. We showed that naively decoupling the problem into two key transmission problems is suboptimal. We proposed an improved scheme and obtained lower bounds on the key generation rates. Although the proposed scheme is very simple, it can significantly improve the key agreement rate. Finally, we extended the proposed schemes and bounds to a weaker scenario in which the adversary has limited computational power and cannot select the most damaging attacks.

REFERENCES

[1] U. Maurer, "Authentication theory and hypothesis testing," IEEE Trans. Inform. Theory, Jan. 2000.
[2] C. Shannon, "Communication theory of secrecy systems," Bell System Technical Journal, vol. 28, no. 4, pp. 656–715, 1949.
[3] A. Shamir, "How to share a secret," Communications of the ACM, Jan. 1979.
[4] U. Maurer and S. Wolf, "Unconditionally secure key agreement and the intrinsic conditional information," IEEE Trans. Inform. Theory, pp. 499–514, Mar. 1999.
[5] S. Lin and D. J. Costello, Error Control Coding, 2nd ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2004.
[6] N. Cai and R. W. Yeung, "Network error correction, I: Basic concepts and upper bounds," Commun. Inf. Syst., vol. 6, no. 1, pp. 19–36, 2006.