Adaptive Privacy of Sequential Data Releases Under Collusion

Notice: This research summary and analysis were automatically generated using AI. For absolute accuracy, please refer to the original arXiv source.

The fundamental trade-off between privacy and utility remains an active area of research. Our contribution is motivated by two observations. First, privacy mechanisms developed for one-time data release cannot straightforwardly be extended to sequential releases. Second, practical databases are likely to be useful to multiple distinct parties. Furthermore, we cannot rule out the possibility of data sharing between parties. With utility in mind, we formulate a privacy-utility trade-off problem to adaptively tackle sequential data requests made by different, potentially colluding entities. We consider both expected distortion and mutual information as measures to quantify utility, and use mutual information to measure privacy. We assume an attack model whereby illicit data sharing, which we call collusion, can occur between data receivers. We develop an adaptive algorithm for data releases that makes use of a modified Blahut-Arimoto algorithm. We show that the resulting data releases are optimal when expected distortion quantifies utility, and locally optimal when mutual information quantifies utility. Finally, we discuss how our findings may extend to applications in machine learning.


💡 Research Summary

The paper tackles the problem of releasing data sequentially to multiple parties while controlling privacy leakage in the presence of potential collusion among the recipients. Traditional privacy‑utility trade‑off studies focus on a single request and a single adversary, which makes them unsuitable for scenarios where a database is queried repeatedly by distinct entities. The authors formalize a framework that simultaneously handles (i) an individual privacy budget for each party, measured by the mutual information I(R̂_k; X) ≤ ε_k, and (ii) a worst‑case collusion budget that limits the total information that could be obtained if all parties share their released data, measured by I(R̂_k, Z_{k−1}; X) ≤ δ_k, where Z_{k−1} denotes all previously released outputs.
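The per-party and collusion budgets above are both mutual-information constraints on a joint distribution. A minimal sketch of how such a leakage check could be computed (the joint pmf and the budget value ε_k here are illustrative, not from the paper):

```python
import numpy as np

def mutual_information(p_xy):
    """I(X; Y) in bits, computed from a 2-D joint pmf p_xy[x, y]."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of the secret X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of the release
    mask = p_xy > 0                          # avoid log(0) terms
    return float((p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# Hypothetical joint distribution of a binary secret X and a release R̂_k:
# here X and R̂_k are independent, so the mechanism leaks nothing.
p = np.array([[0.25, 0.25],
              [0.25, 0.25]])
eps_k = 0.1  # assumed per-party privacy budget
assert mutual_information(p) <= eps_k
```

The collusion constraint I(R̂_k, Z_{k−1}; X) ≤ δ_k has the same form, with the joint pmf taken over the tuple of all shared releases rather than a single one.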

Two utility metrics are considered: (a) expected distortion, where a distortion function d(R̂_k, R_k) between the released and true data is defined and utility is U = −E[d(R̂_k, R_k)]; and (b) the mutual information I(R̂_k; R_k) between the release and the requested data.
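Under the expected-distortion metric, the optimization is structurally a rate-distortion problem, which motivates the paper's use of a modified Blahut-Arimoto algorithm. A minimal sketch of the standard (unmodified) Blahut-Arimoto iteration, with an assumed toy binary source and Hamming distortion:

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Standard Blahut-Arimoto iteration for the rate-distortion problem.

    p_x  : source pmf, shape (nx,)
    d    : distortion matrix d[x, x_hat], shape (nx, nr)
    beta : Lagrange multiplier trading rate against distortion
    Returns the release channel p(x_hat | x), shape (nx, nr).
    """
    nr = d.shape[1]
    q = np.full(nr, 1.0 / nr)               # output marginal q(x_hat)
    for _ in range(n_iter):
        w = q * np.exp(-beta * d)           # unnormalized p(x_hat | x)
        p_cond = w / w.sum(axis=1, keepdims=True)
        q = p_x @ p_cond                    # re-estimate the marginal
    return p_cond

# Toy example (assumed setup): uniform binary source, Hamming distortion.
p_x = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
p_cond = blahut_arimoto(p_x, d, beta=2.0)
```

Larger beta values penalize distortion more heavily, pushing the channel toward the identity (high utility, high leakage); the paper's modification additionally enforces the ε_k and δ_k privacy budgets, which this sketch omits.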

