A Group Signature Based Electronic Toll Pricing System
Xihui Chen,†,♯ Gabriele Lenzini,† Sjouke Mauw,†,‡ Jun Pang‡
† Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Luxembourg
♯ itrust consulting s.à r.l., Luxembourg
‡ Computer Science and Communications, University of Luxembourg, Luxembourg

ABSTRACT
With the prevalence of GNSS technologies, nowadays freely available to everyone, location based vehicle services such as electronic toll pricing systems and pay-as-you-drive services are rapidly growing. Because these systems collect and process travel records, if not carefully designed, they can threaten users' location privacy. Finding a secure and privacy-friendly solution is a challenge for system designers. In addition to location privacy, communication and computation overhead should be taken into account as well in order to make such systems widely adopted in practice. In this paper, we propose a new electronic toll pricing system based on group signatures. Our system preserves anonymity of users within groups, in addition to correctness and accountability. It also achieves a balance between privacy and the overhead imposed upon user devices.

Categories and Subject Descriptors
C.2.0 [Computer-Communication Networks]: General—Security and protection; K.4.1 [Computers and Society]: Public Policy Issues—Privacy

General Terms
Security and privacy

Keywords
electronic toll pricing systems, security protocols, privacy

1. INTRODUCTION
Electronic Toll Pricing (ETP) systems, by collecting tolls electronically, aim to eliminate delays due to queuing on toll roads and thus to increase the throughput of transportation networks. Since Norway built the first working ETP system in 1986, ETP systems have been implemented worldwide.
Nowadays, by exploiting the availability of free Global Navigation Satellite Systems (GNSS), traditional ETP systems are evolving into more sophisticated location based vehicular services. They can offer smart pricing, e.g., by charging less to those who drive on uncongested roads or during off-peak hours.

SAC '12, March 25–29, 2012, Riva del Garda, Italy.

Insurance companies can also bind insurance premiums to the roads that their users actually use, and offer a service known as "Pay-As-You-Drive" (PAYD) [21]. Moreover, the collected traffic usage records can be used in the public interest, such as for planning road maintenance, or for resolving legal disputes in case of accidents. As location is usually considered a sensitive and private piece of information, ETP and PAYD systems raise obvious privacy concerns. In addition, by processing locations and travel records, they can learn and reveal sensitive user information such as home addresses and medical information [15], which consequently can lead to material loss or even bodily harm. Building secure ETP and PAYD systems that guarantee both location privacy and a high quality of service is a genuine scientific challenge.

In the last few years, secure ETP and PAYD systems have been widely studied [21, 6, 18, 3, 9, 16]. They can roughly be divided into two categories based on whether locations are stored in user devices or collected by central toll servers. PriPAYD [21], PrETP [3], the cell-based solution described in [9], and Milo [16] belong to the first category. In these systems, locations and tolls are managed by user devices while servers are allowed to process only aggregated data. In the second category we find VPriv [18], where the server stores a database of users' travel history, and the ETP system described in [6], where the server collects hash values of the trip records. Both categories have advantages and disadvantages.
Hiding locations from servers drastically reduces concerns about location privacy. However, the load on user devices is considerable. Typically, devices have to manage the storage of locations as well as proofs to convince servers that they have not cheated, e.g., by making use of zero-knowledge proofs. On the other side, the availability of location databases collected by servers, as in VPriv, can help improve applications such as traffic monitoring and control, although the integration of multiple systems should be carried out carefully. In this case, solutions to preserve location privacy become a mandatory requirement.

In this paper, we follow the design principles of VPriv [18]. In VPriv, users select a set of random tags beforehand and send their locations tagged with these tags to the toll server. The server then computes and returns all location fees. Each user adds up his location fees according to his tags and proves the summation's correctness to the server using a zero-knowledge proof, without revealing the ownership of the tags. This process needs to run several rounds each time to prevent users from deviating from the system. Thus the main disadvantage of VPriv is that the computation and communication overhead increases linearly with the number of rounds executed and with the number of users.

Our contributions. We propose a novel but simple ETP system which achieves a balance between privacy and overhead for users. By dividing users into groups and calculating tolls in one round, we reduce the amount of exchanged information as well as the computation overhead, due to the smaller number of locations within a group. We use group signature schemes to guarantee anonymity within a group, with the authority acting as the group manager. From a group signature, only the group manager can learn the identity of the signer in the group.
Note that the concept of groups, however, requires us to design an effective group division policy to optimally preserve users' location privacy. We discuss a solution in Sect. 6.

Users collect their locations and anonymously send them to the toll server, together with locations signed using a group signature scheme. To prevent the authority from learning user locations while opening signatures to resolve disputes, only the hashed values of locations are signed. When it comes time to pay the toll, the server publishes the hashed locations collected and the related fees. Fees are encrypted with a homomorphic cryptosystem. Clients identify the fees that correspond to their locations, add them up (homomorphically), and return the summation as their payments which, in turn, are opened by the server. If the summation of user payments does not match the summation of the fees of collected locations, the server asks the authority to find the dishonest user. Together with the request, it sends to the authority the received group signatures, the corresponding encrypted fees, and user payments. The authority cannot deduce locations from the fees because of the homomorphic cryptosystem but, based on the location signatures and the fees, it can compute (homomorphically) the real toll of each user, and can thus identify the misbehaving users: they are those whose committed payments differ from the real tolls.

We prove that our system is correct (users always pay for their usage to the server) and assures accountability (originators of misbehaviour can always be found and an effective punishing policy can be run to avoid repeated misbehaviour). Furthermore, our system enforces unlinkability between users and their locations.

Structure of the paper. Sect. 2 describes the participants of our system, the threat model and our assumptions, and it states the security goals of our design. Sect.
3 recalls group signature schemes and other cryptographic primitives we adopt. Our ETP system is fully described in Sect. 4. Sect. 5 defines the security properties and shows their enforcement by our system. We conclude our paper in Sect. 6 with some discussions and ideas for future work.

2. SYSTEM MODEL
Principals. Our system consists of the following four principals: users, their cars with on-board units (OBUs), the authority, and the toll server (see Fig. 1).

Users own and drive cars, and they are responsible for toll payments. To be entitled to use the electronic tolling service, a user brings his car to the authority, which registers the car and installs an OBU on it. The authority is a governmental department trusted by both users and the toll server. It also builds up the group signature scheme and manages groups of users. An OBU computes the car's locations using GNSS satellites and stores them on a USB stick, which interfaces with the OBU and contains security information. It transmits location data to the toll server, which is a logical organisation that can be run by multiple agents. The server collects location data and computes the fee for each location record. It can also contact the authority to resolve disputes with insolvent users.

Figure 1: The relationship among the principals.

Adversary model. Considering the deployment environment of ETP systems, possible threats can come from: 1) manipulated OBUs, which generate false location tuples; 2) dishonest users, who (partially) avoid paying for their road usage; 3) dishonest toll servers, who intend to increase their revenue and breach users' privacy; 4) the honest-but-curious authority.

Assumptions. In our system, dishonest servers perform any actions so as to satisfy their strong economic motivation and curiosity.
They can deviate from the protocol specification and collude with other attackers. The attackers considered in the system follow the Dolev-Yao intruder model [8]. Specifically, they have full control over the network, which means they can eavesdrop, block and inject messages anywhere at any time. However, an encrypted message can never be opened unless they have the right key. Identification based on message transmission is out of the scope of this paper. We assume that location tuples are transmitted to toll servers anonymously. This can be achieved by the architecture in [11], for instance, which uses a communication service provider to separate authentication from data collection. The authority is supposed to be curious but not to collude with any other participants.

It has been shown that users' moving traces can be reconstructed from anonymised positions, e.g., by multi-target tracking techniques or by taking into account user mobility profiles [12, 20], and users' private information can thus be inferred [13]. However, we observed from experiments in the literature that tracking users remains difficult in practice, especially when the intervals between transmissions are big (about one minute) and the number of travelling users is not small. Therefore, similar to VPriv, in this paper we focus on privacy leakage from ETP systems without considering the above-mentioned techniques and attacks.

Security Properties. In addition to the aim of reducing communication and computation overheads, our system should employ proper measures to protect honest users and servers. Referring to what other systems achieve (e.g., [21, 18, 3]), we address the following security properties:

Correctness. Clients pay for their own road usage and the server collects the right amount of tolls.

Accountability.
If a malicious action that deviates from the specification of the system occurs, sufficient evidence can be gathered to identify its originator.

Unlinkability. An intruder cannot link a given location record to its generator.

3. CRYPTOGRAPHIC PRIMITIVES
Group Signature Schemes. Group signatures [5] provide signers anonymity within a group of users. A group signature scheme involves group members and a group manager. The task of the group manager is to organise the group, to set up the group signature infrastructure and to reveal the signer if needed. The signature on a message, signed by a group member, can be verified by others based on the group public key, while the identity of the signer remains secret.

Group signature schemes consist of at least the following five functions: SETUP, JOIN, SIGN, VERIFY and OPEN. The function SETUP initialises the group public key, the group manager's secret key and other related data. The procedure JOIN allows new members to join the group. Group members call the function SIGN to generate a group signature based on their secret keys. The VERIFY function makes use of the group public key to check whether a signature was signed by a group member. The function OPEN determines the identity of the signer based on the group manager's secret key.

We take group signature schemes as an essential building block of our system because they have the following properties, which effectively meet our security goals.

CORRECTNESS: Signatures produced by a group member using SIGN must be accepted by VERIFY.

UNFORGEABILITY: Only group members are able to sign messages on behalf of the group.

ANONYMITY: Given a valid signature on some message, identifying the actual signer is unfeasible for everyone but the group manager.
UNLINKABILITY: Deciding whether two different valid signatures were computed by the same group member is unfeasible.

EXCULPABILITY: Neither a group member nor the group manager can sign on behalf of other group members.

TRACEABILITY: The group manager is always able to open a valid signature and identify the actual signer.

COALITION-RESISTANCE: A colluding subset of group members cannot generate a valid signature that the group manager cannot link to one of the colluding group members.

There are other properties we desire as well, e.g., efficiency and dynamic group management. Efficiency concerns the length of signatures and the computation time of each function, which determines the feasibility of our system. Dynamic group management enables users to join and quit at any time, when they are new or not satisfied with their current groups. In the last decade, efficient group signature schemes with new features have been developed, e.g., group message authentication [19] and group signcryption [2]. We will make use of an abstract version of group signature schemes in the description of our system, as it can adopt any group signature scheme as long as it has the required security properties. Moreover, some of the schemes may improve the security of our system. For instance, an efficient group signcryption scheme can prevent attackers from eavesdropping on users' location signatures over the network.

Homomorphic Cryptosystem. Besides group signatures, we employ another cryptographic primitive – the homomorphic public key cryptosystem. Let E_X(m_1) and E_X(m_2) be two ciphertexts of messages m_1 and m_2 encrypted with agent X's public key. Then we have E_X(m_1) · E_X(m_2) = E_X(m_1 + m_2). There are many cryptosystems with this property, e.g., the Paillier cryptosystem [17].

Cryptographic Hash Function.
In our system, we also use cryptographic hash functions, which are publicly known and satisfy the minimum security requirements – preimage resistance, second preimage resistance and collision resistance.

4. OUR ETP SYSTEM
We start with an informal description of our toll process, and proceed with notations and specifications of the protocols involved.

4.1 Overview
Our system is organised in four phases.

The first phase is about service subscription and set-up. A user first signs a contract with a toll server. Both users and the server employ public key encryption to secure their mutual communications. When users contact the authority to join a group, the authority assigns them to groups according to a group division policy (see Sect. 6). Clients' private keys for group signatures are also established during this phase. After this, the server is informed of the groups containing its users and the corresponding group public keys.

The second phase is about collecting location data. While driving, OBUs compute their location and time which, together with the group name, form what we call a location tuple. OBUs periodically send location tuples and the corresponding group signatures on the hash values of the location tuples (called location signatures) to the server, which stores them in its location database.

The third phase is about calculating tolls. At the end of a toll session, each user contacts the server by using a user interface through browsers (not OBUs). For any location tuple of a group, the server publishes a fee tuple consisting of the hashed location tuple and the corresponding ciphertext of its fee under the chosen homomorphic cryptosystem. A user selects his own fee tuples and computes the ciphertext of his toll payment by multiplying the corresponding encrypted fees. The server collects and decrypts all users' toll payments.

The fourth phase is about resolving a dispute.
This phase takes place only when the sum of user payments in a group is not equal to the sum of all location tuples' fees. The authority is involved to determine the misbehaving user(s). The server sends the fee tuples and location signatures to the authority. When the location signatures are opened, for each user, the authority identifies the encrypted fees belonging to his location tuples, whose product is compared to the committed encrypted toll from the user. Any inequality indicates that the corresponding user has cheated on his toll payment.

4.2 Notations
Tab. 1 summarises our notations. With c, S, and A we indicate a user, the server, and the authority, respectively. With f(ℓ, t) we indicate the fee to be paid when passing location ℓ at time t, while cost_c is the amount of fees that c committed to pay after the toll session sid. Sig_c(m) denotes the signature on message m signed by c, and Gs_c(m) denotes the group signature of c on message m. For other cryptographic primitives, we use standard notations.

Table 1: Notations.
  f(ℓ, t)         The fee for passing position ℓ at time t
  cost_c          The committed toll payment of user c
  sid             The identifier of the toll session
  Sig_X(m)        Signature on message m generated by a principal X
  Gs_c(m)         Group signature on message m generated by a group member c
  gpk(G)          The group public key of group G
  pk(X)           The public key of a principal X
  sk(X)           The private key of a principal X
  h(m)            The hash value of message m
  E_X(m)          The message m encrypted with homomorphic encryption under X's public key
  Enc_pk(X)(m)    The message m encrypted with X's public key pk(X)

4.3 Protocol Specifications
Here we specify the four protocols that implement the phases of our system, namely: Set-up, Driving, Toll Calculation, and Dispute Resolving. In the following discussion, we fix a group G.

Phase 1: Set-up. This protocol accomplishes two tasks.
The first task is to establish the public key infrastructure between the users, the server and the authority. The second task is to set up the group infrastructure. The details are given in Appendix B.

Phase 2: Driving. The driving protocol specifies how users periodically transmit location tuples and location signatures to the server. Let ⟨ℓ, t, G⟩ be a location tuple. A message from user c, a member of group G, is denoted by (⟨ℓ, t, G⟩, Gs_c(h(ℓ, t))). After receiving this message, the server verifies Gs_c(h(ℓ, t)) using the group public key gpk(G). If it is valid, the received message is stored.

Phase 3: Toll Calculation. This protocol aims to reach an agreement on toll payments between the server and its users. Let L′ be the set of fee tuples of group G. An element of L′ is of the form (h(ℓ, t), E_S(f(ℓ, t))). We use R_c to denote the locations user c has travelled, which are stored on the USB stick. We depict the protocol in Fig. 2; it proceeds as follows (c holds pk(S), sk(c), R_c; S holds pk(c), sk(S)):

  1. c → S: G, sid
  2. S → c: L′, sid, Sig_S(L′, sid)
  3. c checks the correctness of L′ and computes toll_c = E_S(Σ_{(ℓ,t) ∈ R_c} f(ℓ, t))
  4. c → S: Enc_pk(S)(toll_c, Sig_c(toll_c, Sig_S(L′, sid)))
  5. S computes cost_c = Dec(toll_c, sk(S))
  6. S → c: Enc_pk(c)(Sig_S(cost_c, sid, c))

Figure 2: The Toll Calculation protocol.

In the server's response to c's request, the server's signature on L′ is used to indicate that the fee tuples originate from the server. The user verifies the validity of this signature. By looking up the hash values of his locations, the user identifies his set of fee tuples. Then, for each of his location tuples (ℓ, t), the user computes E_S(f(ℓ, t)) and compares it with the one in the corresponding fee tuple. If they are the same then the fee tuple is correct.
If all his fee tuples are correct, he computes

  toll_c = ∏_{(ℓ,t) ∈ R_c} E_S(f(ℓ, t))

As the encryption is homomorphic, we have

  toll_c = E_S( Σ_{(ℓ,t) ∈ R_c} f(ℓ, t) )

The user then sends back to the server his signature on toll_c and Sig_S(L′, sid). This signature indicates that user c's toll payment in toll session sid is encrypted as toll_c and computed based on the L′ issued by the server. After receiving the user's response, the server verifies c's signature before sending back its signature on c's toll payment decrypted from toll_c.

Note that, in practice, the set of fee tuples L′ can be published in a repository and accessed by authorised users. In this way, users are able to download the fee tuples and check their validity without the need to be connected to the server. Therefore, the real-time communication overhead is small.

Phase 4: Dispute Resolving. In this protocol, with the help of the authority, the server finds the cheating users and the amount of tolls unpaid. The server initiates the dispute only when, with respect to a group, the sum of committed payments is not equal to the sum of fees of all location tuples. Let L be the set of location tuples of group G. This condition can be formally described as

  Σ_{c ∈ G} cost_c ≠ Σ_{(ℓ,t) ∈ L} f(ℓ, t)

A dispute will involve the authority, who can link a location signature to its signer. At the beginning of the dispute resolution, the server constructs two sets S and T. S consists of the hash values of the location tuples, the corresponding encrypted fees, and the signatures of the location tuples that the server has received in phase 2, that is:

  S = { ⟨h(ℓ, t), E_S(f(ℓ, t)), Gs_c(h(ℓ, t))⟩ | ∀(ℓ, t) ∈ L, ∀c ∈ G }

T consists of the users' toll payments, that is:

  T = { ⟨c, toll_c, Sig_c(toll_c, Sig_S(L′, sid))⟩ | ∀c ∈ G }

We depict the protocol in Fig. 3.
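The homomorphic aggregation used in Phases 3 and 4 (multiplying ciphertexts to add the underlying fees) can be illustrated with a minimal Paillier implementation. The following Python sketch uses toy key sizes and illustrative fee values of our own choosing; it is not part of the protocol specification and is far too small to be secure.

```python
import math
import random

# Toy Paillier keys (illustrative small primes; a real deployment needs
# large, securely generated parameters). With g = n + 1, decryption
# simplifies to the form used below.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)        # Carmichael's lambda(n)
mu = pow(lam, -1, n)                # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """E_S(m) = (1 + n)^m * r^n mod n^2 for a random r coprime to n."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Fees of the user's location tuples, as published by the server.
fees = [12, 30, 7]
cipher_fees = [encrypt(f) for f in fees]

# Phase 3: the user multiplies ciphertexts; homomorphically this adds
# the fees, so the server learns only the total toll, not the parts.
toll_c = 1
for cf in cipher_fees:
    toll_c = (toll_c * cf) % n2

assert decrypt(toll_c) == sum(fees)
```

The same ciphertext product is what the authority recomputes per user in the dispute resolving phase, which is why it never needs to see individual fees in the clear.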
Note that for the sake of simplicity, we do not show the cryptographic details. The critical part of the protocol is the function DisRes, which is shown in detail in Alg. 1. We use the function checksign(sign, m, pk) to check whether sign is a signature on m under pk; the group signature functions VERIFY and OPEN work as described in Sect. 3. The check on set T (lines 5–8) and the verification of location signatures (lines 10–11) exclude the possibility of a malicious server modifying users' toll payments. Each user's real toll payment (i.e., toll′_c) is computed in lines 13–14. Its equivalence to the user's committed one (i.e., toll_c) means the user pays the right amount; otherwise, he is cheating.

The protocol (S holds pk(A), sk(S); A holds pk(S), sk(A), gpk(G)) proceeds as follows:

  1. S and A mutually authenticate.
  2. S computes S and T and sends: S, T, Sig_S(L′, sid)
  3. A computes res = DisRes(S, T, Sig_S(L′, sid))
  4. A → S: Sig_A(res, sid), res

Figure 3: The Dispute Resolving protocol.

After acquiring res from the authority, the server can obtain each cheating user's real toll by decrypting toll′_c = E_S(cost′_c), and hence the amount of tolls unpaid as well. Note that after resolving, the authority learns nothing about users' locations except for the number of location records of each user in that particular group.

Algorithm 1 Function DisRes.
 1: Input: S, T, Sig_S(L′, sid)
 2: Output: res
 3: res := ∅;
 4: toll′_c := 0 for every c ∈ G;
 5: for all (c, toll_c, sign) ∈ T do
 6:   if checksign(sign, (toll_c, Sig_S(L′, sid)), pk(c)) = false then
 7:     return 'check of T failed';
 8: end for
 9: for all (hashLoc, feeLoc, gsign) ∈ S do
10:   if VERIFY(gsign, hashLoc) = false then
11:     return 'faked location signatures';
12:   else
13:     c := OPEN(gsign);
14:     toll′_c := (if toll′_c = 0 then feeLoc else toll′_c · feeLoc);
15: end for
16: for all c with toll′_c ≠ toll_c do
17:   res := res ∪ {(c, toll′_c)};
18: end for
19: return res
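The control flow of Alg. 1 can be sketched in Python as follows. The signature primitives checksign, VERIFY and OPEN are stubbed with trivial placeholders, and encrypted fees are modelled as opaque integers whose homomorphic product is plain multiplication; every name beyond those appearing in Alg. 1 is an illustrative assumption, not a prescribed interface.

```python
# Sketch of function DisRes (Alg. 1) with stubbed crypto primitives.
# T is a list of (c, toll_c, sign); S_set a list of (hashLoc, feeLoc, gsign).

def checksign(sign, message, pk):
    # Placeholder: a real system verifies an ordinary digital signature.
    return sign == ("sig", pk, message)

def VERIFY(gsign, message):
    # Placeholder group-signature verification (Sect. 3).
    return gsign[0] == "gsig" and gsign[2] == message

def OPEN(gsign):
    # Placeholder: the group manager recovers the signer's identity.
    return gsign[1]

def dis_res(S_set, T, sig_L):
    res = []
    toll_real = {}                          # toll'_c, per user
    for c, toll_c, sign in T:               # lines 5-8: check committed payments
        if not checksign(sign, (toll_c, sig_L), c):
            return "check of T failed"
    for hash_loc, fee_loc, gsign in S_set:  # lines 9-15: open and aggregate
        if not VERIFY(gsign, hash_loc):
            return "faked location signatures"
        c = OPEN(gsign)
        toll_real[c] = toll_real.get(c, 1) * fee_loc  # "homomorphic" product
    committed = {c: toll_c for c, toll_c, _ in T}
    for c, toll in toll_real.items():       # lines 16-18: flag mismatches
        if committed.get(c) != toll:
            res.append((c, toll))
    return res
```

For example, with two location records opened to alice (fees modelled as 2 and 3) and one to bob (fee 7), a committed payment of 6 from alice and 5 from bob makes dis_res report bob with his recomputed toll.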
5. SECURITY PROPERTIES & ANALYSIS
In this section we define precisely what we mean by correctness, accountability and unlinkability, and briefly discuss why our system satisfies each of them. The full proof of our main theorem is given in Appendix A.

Correctness. Correctness means that the server can collect the right amount of tolls and all users pay their tolls exactly. There are two underlying assumptions, in line with practical toll scenarios. One is that a user has no intention to pay more than his actual tolls, while the other is that the server wants no loss of users' tolls. Let cost_c be the real amount of tolls that user c should pay and pay_c the amount of tolls that user c actually pays to the server after phase 4 of our system. Let cost_G = Σ_{c ∈ G} cost_c be the real amount of tolls from all users and pay_G = Σ_{c ∈ G} pay_c the amount of tolls that the server actually collects from the group G after phase 4 of our system. The property of correctness can be defined as follows:

DEFINITION 1 (CORRECTNESS). Suppose the server wants no loss of users' tolls and users have no intention to pay more than their tolls; then for any c ∈ G it holds that pay_c = cost_c, and for the server it holds that pay_G = cost_G.

In our system, whenever a user has paid less, the server initiates the dispute resolving protocol with the authority, who will give the correct toll of that user. Meanwhile, because of the properties of group signatures, e.g., UNFORGEABILITY and EXCULPABILITY, the server is unable to charge for more locations than the ones users submitted.

Accountability. This property means that upon detection of malicious behaviour, our system can identify which principal has misbehaved. Let B be the set of all potential misbehaviours of the attackers in our system. The relation A = B × U then represents all possible attacks and the corresponding attackers.
In our system, U = C ∪ {S, A}. Let attacker: A → U be the function mapping an attack to the attacker, e.g., attacker((β, c)) = c. Let E be the set of pieces of evidence produced during a run of our system and P(E) the power set of E. The definition of accountability is given as follows:

DEFINITION 2 (ACCOUNTABILITY). Let A′ ⊆ A be the attacks that actually happen during the execution of our system in a toll session. For any α ∈ A′, our system is able to provide a set of evidence E′ ∈ P(E) and there exists a function find: P(E) × A → U such that find(E′, α) = attacker(α).

At steps where attackers may misbehave, our system provides sufficient evidence to find the originators. For instance, if the server deliberately did not send a user's toll payment to the authority, the server's signature on the user's payment could be taken as evidence to prove the server's misbehaviour.

Despite the fact that our system assures accountability, resolving disputes is still a costly step. We can establish a proper punishment policy to discourage misbehaviour. This in turn also improves the efficiency and performance of our system. For instance, by punishing the cheating users, the frequency of dispute resolving can be kept very small in practice.

Unlinkability. Unlinkability holds when, from the information learned during the execution of our system, the attacker cannot decide whether a user has travelled through any given location. Proving unlinkability is equivalent to proving that the adversary cannot distinguish the cases in which two users swap their location information. In order to enforce this property, we should consider two aspects.

First, from all messages learned after the execution of our system, the attackers cannot link any location to its originator.
For malicious servers and users, the properties of group signature schemes, i.e., ANONYMITY and UNLINKABILITY, guarantee this. With regard to the honest-but-curious authority, which does not collude with other attackers and only learns the hash values of locations, the properties of the hash function enforce unlinkability. Second, the communication process of the system does not give away any information about linkability. This means the attacker cannot break unlinkability by analysing differences between executions of the system. In order to check the satisfaction of unlinkability in this situation, we apply the approach of formal verification (see Appendix A).

We now give the main theorem showing that our ETP system satisfies the defined properties. The full proof of the theorem is given in Appendix A.

THEOREM 1. Our ETP system guarantees correctness, accountability and unlinkability.

In Appendix C we have also verified the relevant secrecy and authentication properties.

6. DISCUSSION & CONCLUSION
In this paper, we have proposed a simple design for ETP systems which preserves users' anonymity within groups. In our system, our main design goal is to balance users' privacy with communication and computation overhead: a larger group means better privacy for the users, but gives rise to more overhead when running the system. With the help of group signature schemes and a homomorphic cryptosystem, our system is proved to guarantee correctness, accountability and unlinkability. To be complete, we still have the following issues to address.

Comparison with VPriv. As mentioned in Sect. 1, our system resembles VPriv [18] in that the server collects locations. However, VPriv imposes a relatively high burden on users and the server. Compared with VPriv, in our system the communication overhead between users and the server is reduced.
Clients are divided into groups, which leads to a smaller set of fee tuples being returned from the server to users. The toll calculation protocol also reduces the number and the size of messages during the interaction. Second, we apply the principle of separation of duties in our system, namely the authority takes the responsibility to find cheaters. Hence, the server and users are relieved of a heavy computation overhead by avoiding the zero-knowledge proof protocols run in VPriv. Resolving disputes requires opening all location signatures, which is time-consuming for the authority. However, with punishment policies and the accountability property of our system, the authority can keep the frequency of dispute resolving very low.

Group management. A good group management policy can improve the protection of users' privacy in our system. In principle, groups should be chosen to maximise the difficulty for the adversary to reconstruct users' traces. One way to achieve this goal is to group people according to 'similarity' criteria based on a multi-level hierarchical structure, as proposed in [10]. For instance, at the root level, we have the group of all users in a city. Subgroups at the next level contain those that usually travel in the same region. At lower levels the subgroups can include users having a similar driving style. Other factors can be considered as well, e.g., driving periods, car models, etc. The information needed to group people is collected by the authority at the moment of registration. The provision of such information is not compulsory, but users are encouraged to provide it if they desire better privacy protection.

Dynamic group management, which enables users to change their group memberships, is also necessary, for instance when users move to another city or are not satisfied with their current group.
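The multi-level grouping policy described above can be made concrete with a small sketch. The attribute names (city, region, driving_style) and the fixed three-level hierarchy below are our own illustrative assumptions, not a prescription of [10]; the point is only that users who withhold optional attributes land in coarser, larger groups.

```python
# Toy illustration of hierarchical, similarity-based group assignment:
# sharing more attributes places users in a more specific (smaller) group.

def group_path(user: dict) -> tuple:
    """Build a group identifier from coarse to fine attributes.
    Optional attributes may be withheld: such users simply stay
    in a coarser (larger, hence more anonymous) group."""
    path = [user["city"]]                      # root level: whole city
    if "region" in user:
        path.append(user["region"])            # level 2: usual travel region
        if "driving_style" in user:
            path.append(user["driving_style"]) # level 3: similar driving style
    return tuple(path)

users = [
    {"id": "u1", "city": "Luxembourg", "region": "south", "driving_style": "commuter"},
    {"id": "u2", "city": "Luxembourg", "region": "south", "driving_style": "commuter"},
    {"id": "u3", "city": "Luxembourg"},        # provided no optional data
]

groups = {}
for u in users:
    groups.setdefault(group_path(u), []).append(u["id"])

# u1 and u2 share the most specific group; u3 stays at the city level.
```

In a deployment, the authority would run such an assignment at registration time and also balance group sizes, since very specific groups may become too small to hide their members.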
Finding the optimal group size to protect users' anonymity is part of our future work. Note that if a user has joined multiple groups, the similarity between his travel records across these groups would decrease his anonymity.

Tamper-resistant devices vs. spot checks. In order to ensure that OBUs are not manipulated by users, e.g., to transmit false locations, we have to consider possible countermeasures. One way is to use tamper-resistant devices. However, users can always turn the device off. Therefore, as discussed in VPriv and PrETP, we can use sporadic random spot checks that observe some physical locations of users. A physical observation of a spot check consists of a location, a time and the car's plate number. Let ⟨ℓ, t, pn⟩ be an observation of car pn whose owner is user c ∈ G. Then there should be at least one location record ⟨ℓ′, t′⟩ of group G such that |t, t′| < ε/2 and |ℓ, ℓ′| < γ · |t, t′|, where ε is the interval between two transmissions and γ is the maximum speed of vehicles. If there are no such location tuples, then the server can determine that user c has misbehaved. Otherwise, the server can send the tuples with nearby locations to the authority to check whether one of them belongs to c. According to [18], a small number of spot checks with a high penalty would suffice.

Future work. In the future, we plan to develop a prototype of our system and conduct an experimental evaluation to compare its efficiency with PrETP and VPriv. A recently proposed ETP system, Milo [16], provides techniques based on blind identity-based encryption to strengthen the spot checks in PrETP [3] against large-scale driver collusion. It is interesting to see how to adopt these techniques in our system.

7. REFERENCES
[1] M. Abadi and C. Fournet. Mobile values, new names, and secure communication. In Proc. POPL, pages 104–115. ACM, 2001.
[2] M. Abe, S. S. M. Chow, K.
Haralambiev, and M. Ohkubo. Double-trapdoor anonymous tags for traceable signatures. In Proc. ACNS, volume 6715 of LNCS, pages 183–200. Springer, 2011.
[3] J. Balasch, A. Rial, C. Troncoso, and C. Geuens. PrETP: Privacy-preserving electronic toll pricing. In Proc. USENIX Security Symposium, pages 63–78. USENIX Association, 2010.
[4] B. Blanchet. An efficient cryptographic protocol verifier based on Prolog rules. In Proc. CSFW, pages 82–96. IEEE CS, 2001.
[5] D. Chaum and E. van Heyst. Group signatures. In Proc. EUROCRYPT, volume 547 of LNCS, pages 257–265. Springer, 1991.
[6] W. de Jonge and B. Jacobs. Privacy-friendly electronic traffic pricing via commits. In Proc. FAST, volume 5491 of LNCS, pages 143–161. Springer, 2008.
[7] S. Delaune, S. Kremer, and M. D. Ryan. Verifying privacy-type properties of electronic voting protocols. Journal of Computer Security, 17(4):435–487, 2009.
[8] D. Dolev and A. C.-C. Yao. On the security of public key protocols. IEEE Transactions on Information Theory, 29(2):198–207, 1983.
[9] F. Garcia, E. Verheul, and B. Jacobs. Cell-based roadpricing. In Proc. EuroPKI, LNCS. Springer, 2011. To appear.
[10] J. Guo, J. P. Baugh, and S. Wang. A group signature based secure and privacy-preserving vehicular communication framework. In Proc. INFOCOM Workshops, pages 103–108. IEEE CS, 2007.
[11] B. Hoh, M. Gruteser, H. Xiong, and A. Alrabady. Enhancing security and privacy in traffic-monitoring systems. IEEE Pervasive Computing, 5(4):38–46, 2006.
[12] B. Hoh, M. Gruteser, H. Xiong, and A. Alrabady. Preserving privacy in GPS traces via uncertainty-aware path cloaking. In Proc. CCS, pages 161–171. ACM, 2007.
[13] J. Krumm. Inference attacks on location tracks. In Proc. Pervasive, volume 4480 of LNCS, pages 127–143. Springer, 2007.
[14] G. Lowe. A hierarchy of authentication specification. In Proc. CSFW, pages 31–44. IEEE CS, 1997.
[15] Z. Ma, F.
Kargl, and M. Weber. Measuring long-term location privacy in vehicular communication systems. Elsevier Computer Communications, 33(12), 2010.
[16] S. Meiklejohn, K. Mowery, S. Checkoway, and H. Shacham. The phantom tollbooth: Privacy-preserving electronic toll collection in the presence of driver collusion. In Proc. USENIX Security Symposium. USENIX Association, 2011.
[17] P. Paillier. Public-key cryptosystems based on composite degree residuosity classes. In Proc. EUROCRYPT, volume 1592 of LNCS, pages 223–238. Springer, 1999.
[18] R. A. Popa, H. Balakrishnan, and A. J. Blumberg. VPriv: Protecting privacy in location-based vehicular services. In Proc. USENIX Security Symposium, pages 335–350. USENIX Association, 2009.
[19] B. Przydatek and D. Wikström. Group message authentication. In Proc. SCN, volume 6280 of LNCS, pages 399–417. Springer, 2010.
[20] R. Shokri, G. Theodorakopoulos, J.-Y. L. Boudec, and J.-P. Hubaux. Quantifying location privacy. In Proc. S&P. IEEE CS, 2011.
[21] C. Troncoso, G. Danezis, E. Kosta, and B. Preneel. PriPAYD: Privacy friendly pay-as-you-drive insurance. In Proc. WPES, pages 99–107. ACM, 2007.

APPENDIX

Appendix A: Proof of Theorem 1

LEMMA 1. Let L′ be the set of fee tuples. If the server wants to collect no less than the due tolls and users have no intention to pay more than their tolls, then our system guarantees:
1. for any two sets of fee tuples L′_1 and L′_2 sent to users c_1 and c_2 from G in phase 3, L′_1 = L′_2;
2. for each (h(ℓ, t), E_S(fee)) ∈ L′, fee = f(ℓ, t);
3. L′ consists of all location tuples sent by users.

PROOF. We prove the three sub-lemmas one by one.
1. Suppose L′_1 ≠ L′_2. The server initiates the dispute resolving protocol when some users have paid less. The authority performs the checks on the signatures sign_{c_1} and sign_{c_2} (lines 6–7 in Alg. 1). The server wants to finish the resolution in order not to lose tolls.
This means that both checks succeed, which implies Sig_S(L′, sid) = Sig_S(L′_1, sid) = Sig_S(L′_2, sid). So we get L′ = L′_1 = L′_2. Contradiction.
2. Suppose there is a fee tuple (h(ℓ, t), E_S(fee)) with fee ≠ f(ℓ, t), belonging to user c. In toll calculation, users check the correctness of L′ before computing their payments. If the server wants no loss of tolls, it has to ensure that L′ is correctly computed. Otherwise, the affected users would refuse to pay and take the server's signature on L′ as evidence of the server's misbehaviour. This guarantees that for ⟨ℓ, t⟩ of c there exists exactly one fee tuple (h(ℓ, t), E_S(f(ℓ, t))), which contradicts the assumption.
3. Suppose that the server removes from L′ a fee tuple ϕ belonging to user c. By the above two sub-lemmas, user c receives the same set of correct fee tuples without ϕ. Then c would make a smaller toll payment. According to our system, user c would not be taken as a cheater, and the server then has to take the loss. This contradicts our assumption that the server wants no less than the due tolls; therefore, the server sends the complete set of fee tuples to users.

Proof of Theorem 1: We prove the three properties one after another.

Correctness. We prove correctness from two aspects, w.r.t. users and the server, respectively.
1. We start by proving, by contradiction, that our system enforces that each user pays his real toll, that is, ∀c ∈ G: pay_c = cost_c. Suppose that a user c has skipped some fees while homomorphically summing up his fees, which ends in a claimed sum cost′_c < cost_c. As users initially pay exactly the cost they have summed and claimed to the server (which, we recall, reveals the due to them in clear), it follows that cost′_c = pay_c < cost_c. It follows (Lemma 1) that Σ_{c∈G} cost′_c ≠ Σ_{(ℓ,t)∈L} f(ℓ, t): the server has grounds to start the dispute resolving protocol.
The authority computes toll_c (i.e., E_S(cost_c)), compares it with the submitted toll′_c (i.e., E_S(cost′_c)), discovers that they are not equal, and returns the result to the server with an accusation against c. The server now knows the unpaid tolls cost_c − cost′_c and can claim the missing due back. So, eventually, the amount paid by the user must be pay_c = cost′_c + (cost_c − cost′_c) = cost_c.
2. We prove that pay_G = Σ_{c∈G} cost_c. This follows straightforwardly from 1. and from pay_G = Σ_{c∈G} pay_c.

Accountability. In our system, the set B consists of the following misbehaviours:
• β_1: dishonest users send smaller toll payments to the server;
• β_2: malicious users refuse to pay tolls;
• β_3: the server attaches wrong fees to location tuples;
• β_4: the server sends false location tuples to the authority;
• β_5: the server sends smaller toll payments to the authority.
We prove that accountability is secured against the misbehaviours in B.
1. Assume α = (β_1, c) happens, i.e., cost′_c < cost_c. From the proof of Lemma 1, we know that c would be found out by the authority. From the signed res from the authority, the server learns toll_c = E_S(cost_c). Together with user c's signature on toll′_c and toll′_c ≠ toll_c, c cannot deny his misbehaviour.
2. Assume (β_2, c) happens. We have the following two pieces of evidence: (i) the server does not have user c's signature on toll_c; (ii) user c cannot provide the server's signature on cost_c with E_S(cost_c) = toll_c.

[Figure 4: The Set-up protocol.]

3. Assume (β_3, S) happens. In this attack, we have at least one fee tuple with a wrong fee.
Let it be (h(ℓ, t), E_S(fee)) with fee ≠ f(ℓ, t), where the location ⟨ℓ, t⟩ belongs to user c. When c receives L′, he identifies that the fee corresponding to ⟨ℓ, t⟩ is E_S(fee), which is not equal to E_S(f(ℓ, t)). Then he terminates the protocol. As the charging policy is public, with the server's signature on L′ the user can prove that the server originated the attack.
4. Assume (β_4, S) happens. By the UNFORGEABILITY, EXCULPABILITY and COALITION-RESISTANCE properties of group signature schemes, the server cannot forge any location with a correct group signature of any honest user. Therefore, a failure of the function VERIFY (line 11 in Alg. 1) suffices to establish the server's misbehaviour.
5. Assume (β_5, S) happens. Suppose the server omits user c's payment, i.e., toll_c, which has been committed to the server during toll calculation. In this case, the authority would return res with toll_c = E_S(cost_c). However, c has the server's signature on cost_c, and (if indeed the user has behaved correctly) he can prove that the two values are equal. This is sufficient to prove the server's misbehaviour.

Unlinkability. For attacks based on the analysis of messages used during the execution of the system, unlinkability is straightforward. In our system, location tuples are hashed and signed with group signatures, which can be opened only by the authority. Moreover, due to the preimage resistance of the hash primitive, the authority cannot learn users' locations from location signatures. Meanwhile, the ANONYMITY and UNFORGEABILITY properties ensure unlinkability against the other attackers, i.e., the server and malicious users. The second type of attack on unlinkability is to observe the difference between system executions in which users' locations are varied.
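The first line of defence can be sketched concretely (the tuple encoding and the choice of SHA-256 are our assumptions; the paper leaves the hash primitive abstract): the server publishes fee tuples keyed by h(ℓ, t), so a user recognises his own entries by re-hashing his records, while a party holding only the digests cannot invert them to recover the locations.

```python
import hashlib

# Assumed encoding of a location tuple <l, t> before hashing.
def h(loc, t):
    return hashlib.sha256(f"{loc}|{t}".encode()).hexdigest()

# User side: the location tuples this user transmitted.
my_records = [("49.61N,6.13E", 1000), ("49.60N,6.12E", 1060)]

# Server side: published fee tuples (h(l, t), E_S(fee)); the encryption is
# abstracted here as an opaque placeholder string.
fee_tuples = {h(loc, t): f"E_S(fee_{i})" for i, (loc, t) in enumerate(my_records)}

# The user matches his own tuples by re-hashing his records...
for loc, t in my_records:
    assert h(loc, t) in fee_tuples
# ...while the authority, seeing only the digests, cannot invert them to
# obtain (l, t) -- the preimage resistance the argument above appeals to.
```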
We start by defining unlinkability w.r.t. this type of attack and then prove our system's security using automatic formal verification. We use processes to denote the participants' behaviours in the protocols. Let C⟨ϕ⟩ be the process representing user c originating the location record ϕ, and let A be the process of the authority. We use C⟨ϕ⟩ | A to represent the parallel composition of these two processes, which admits all possible communications and interleavings. The intuition behind unlinkability is that if any two users swap a pair of locations, the adversary cannot observe the difference. Observational equivalence, which defines indistinguishability between two processes [1], gives us an effective way to formalise unlinkability in our system. Similar to the case of electronic voting [7], we need at least two travelling users; otherwise, an intruder can trivially link all location tuples to the single user.

DEFINITION 3 (UNLINKABILITY). Assume that G is a group with at least two users c and c′, and take any two location records ϕ and ϕ′. Unlinkability between a location tuple and its generator holds if
A | C⟨ϕ⟩ | C′⟨ϕ′⟩ ≈ A | C⟨ϕ′⟩ | C′⟨ϕ⟩.

ProVerif [4] is an efficient and popular tool for verifying security properties of cryptographic protocols. It takes a protocol modelled as a process in the applied π calculus [1] as input and checks whether the protocol satisfies a given property. Observational equivalence can be modelled and verified by ProVerif. Thus, our definition of unlinkability lends itself to automatic verification. We have modelled our system and the unlinkability property, and obtained a positive result from ProVerif. This means that our system preserves unlinkability. (The ProVerif code is available on request.)

Appendix B: The Set-up Phase

We make use of two secrets to achieve the security goal of this phase: the pin codes and the serial numbers.
The former is generated by the server for users to prove their legitimate access to the toll service, while a serial number is issued with each OBU as a secret between the authority and a user. We take user c as an example. Let pin be his pin code and sn the serial number of his OBU. The Set-up protocol is depicted in Fig. 4. Upon receiving the user's public key, the server checks c's signature on pin. If it is valid, the server replies with its signature on the key, which the user subsequently sends to the authority when joining a group. A replay attack on c's message to the authority is not feasible, as the same group would be returned if the same request message arrives again. Fig. 4 does not include the last step, in which the server learns from the authority its users' groups and the group public keys, since such information can be made public.

Appendix C: Secrecy and Authentication

We use ProVerif [4] to formally prove that our system as a whole does not suffer from attacks on secrecy and authentication. The results are listed in Tab. 2. (The ProVerif code is available on request.)

Table 2: Verification of authentication and secrecy.

  Protocol            Authentication                Secrecy
                      injective    non-injective
  setupCS             –            c & S            pin
  setupCA             A            c                sn
  Toll Calculation    –            c & S            toll_c
  Dispute resolving   S & A        –                T, res

We use setupCS to denote the protocol between the server (S) and a user (c) in the set-up phase, and setupCA for the protocol between a user and the authority (A). We say a term is secret if the attacker cannot obtain it by eavesdropping, sending messages and performing computations [4]. For authentication, we consider two notions, namely agreement and the slightly stronger notion of injective agreement [14]. Agreement roughly guarantees to an agent A that his communication partner B has run the protocol as expected and that A and B agree on all exchanged data values.
Injectivity further requires that each run of A corresponds to a unique run of B.
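To close, the spot-check consistency test of Sect. 6 can be sketched concretely (one-dimensional locations and the parameter values below are our simplifying assumptions): an observation ⟨ℓ, t, pn⟩ is consistent with the group's records if some ⟨ℓ′, t′⟩ satisfies |t, t′| < ε/2 and |ℓ, ℓ′| < γ · |t, t′|.

```python
# Parameter values are assumptions for illustration.
EPSILON = 60.0   # interval between two transmissions, in seconds
GAMMA = 50.0     # maximum vehicle speed, in metres per second

def consistent(obs, group_records, eps=EPSILON, gamma=GAMMA):
    """Return the group records consistent with a spot-check observation.

    An empty result means no record can explain the observation, so the
    observed car's owner must have misbehaved; a non-empty result gives the
    candidate tuples the server forwards to the authority for opening.
    """
    loc, t, pn = obs
    return [
        (loc2, t2) for loc2, t2 in group_records
        if abs(t - t2) < eps / 2 and abs(loc - loc2) < gamma * abs(t - t2)
    ]

# One-dimensional locations (metres along a road) keep the sketch simple.
records = [(1000.0, 10.0), (2500.0, 40.0)]
assert consistent((1100.0, 15.0, "LU-123"), records)       # plausible record exists
assert not consistent((9000.0, 15.0, "LU-456"), records)   # too far: misbehaviour
```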