Secure Multi-User Linearly-Separable Distributed Computing
The recently introduced multi-user linearly-separable distributed computing framework has revealed how treating users in parallel can yield large parallelization gains at relatively low computation and communication costs. These gains stem from converting the computing problem into a sparse matrix factorization problem: a matrix $\mathbf{F}$ that describes the users' requests is decomposed as $\mathbf{F} = \mathbf{D}\mathbf{E}$, where a $\gamma$-sparse $\mathbf{E}$ defines the task allocation across the $N$ servers, and a $\delta$-sparse $\mathbf{D}$ defines the connectivity between the $N$ servers and the $K$ users as well as the decoding process. While this approach provides near-optimal performance, its linear nature raises data-secrecy concerns. We here adopt an information-theoretic secrecy framework, seeking guarantees that each user can learn nothing more than its own requested function. In this context, our main result provides two necessary and sufficient secrecy criteria: (i) for each user $k$ that observes $\alpha_k$ server responses, the common randomness visible to that user must span a subspace of dimension exactly $\alpha_k - 1$, and (ii) for each user, removing from $\mathbf{D}$ the columns corresponding to the servers it observes must leave a matrix of rank at least $K-1$. With these conditions in place, we design a general scheme, applicable to finite and non-finite fields alike, that appends to $\mathbf{E}$ a basis of $\mathrm{Null}(\mathbf{D})$ and carefully injects shared randomness; in many cases, this entails no additional cost. While maintaining performance, the scheme guarantees perfect information-theoretic secrecy over finite fields, while in the real case the conditions yield an explicit mutual-information bound that can be made arbitrarily small by increasing the variance of the Gaussian common randomness.
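As a rough illustration over the reals, the following minimal sketch (hypothetical dimensions and randomly chosen matrices, not the paper's construction) shows the core mechanism described above: appending a basis of $\mathrm{Null}(\mathbf{D})$ to $\mathbf{E}$ and injecting Gaussian common randomness masks each server's individual output, while every user's decoded value $\mathbf{F}\mathbf{w}$ is unchanged because $\mathbf{D}$ annihilates the injected component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: K users, N servers, a message vector w of length L.
K, N, L = 3, 6, 8
D = rng.standard_normal((K, N))      # server-to-user connectivity / decoding matrix
E = rng.standard_normal((N, L))      # task-allocation matrix (sparsity ignored in this toy example)
F = D @ E                            # users' requested linear functions
w = rng.standard_normal(L)           # message vector held by the servers

# Baseline (non-secure) responses: server n sends e_n^T w.
plain = E @ w

# Secure variant: append a basis V of Null(D) to E and inject common
# Gaussian randomness r, so that server n instead sends e_n^T w + v_n^T r.
V = np.linalg.svd(D)[2][K:].T        # columns span Null(D); shape (N, N-K)
sigma = 10.0                         # larger variance drives the real-case leakage bound down
r = sigma * rng.standard_normal(V.shape[1])
masked = E @ w + V @ r

# Every user still decodes its request exactly, because D @ V = 0.
assert np.allclose(D @ masked, F @ w)

# Individual server outputs, however, are swamped by the common randomness.
print(np.max(np.abs(masked - plain)))
```

The design choice mirrored here is that the randomness lives entirely inside $\mathrm{Null}(\mathbf{D})$, so secrecy is added without touching the decoding step or the communication load.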
💡 Research Summary
The paper studies information-theoretic secrecy in the multi-user linearly-separable distributed computing model introduced in recent work. In this model a master node receives $K$ independent linear requests $f_1,\dots,f_K$ over a message vector $\mathbf{w}$ of length $L$. It factorizes the request matrix $\mathbf{F} = \mathbf{D}\mathbf{E}$, where the sparse $\mathbf{E}$ allocates computation tasks to the $N$ servers and the sparse $\mathbf{D}$ captures the server-to-user connectivity and the users' decoding.
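For concreteness, here is a minimal toy instance of the underlying (non-secure) model, with hand-picked illustrative matrices rather than the paper's construction: the request matrix is factored as $\mathbf{F} = \mathbf{D}\mathbf{E}$, each server computes one linear combination of $\mathbf{w}$, and each user recovers its request by combining the responses of the servers it is connected to.

```python
import numpy as np

# Illustrative toy instance: K = 2 users, N = 3 servers, w of length L = 4.
w = np.array([1.0, 2.0, -1.0, 3.0])

# Requested linear functions f_k = F[k] @ w, stacked into the K x L matrix F.
F = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])

# One hand-picked factorization F = D @ E.
E = np.array([[1.0, 0.0, 2.0, 1.0],     # task assigned to server 1
              [0.0, 1.0, 1.0, -1.0],    # task assigned to server 2
              [0.0, 0.0, 0.0, 0.0]])    # server 3 idle (sparse row)
D = np.array([[1.0, 0.0, 0.0],          # user 1 listens only to server 1
              [0.0, 1.0, 0.0]])         # user 2 listens only to server 2

assert np.allclose(D @ E, F)

# Each server n computes one inner product e_n^T w ...
responses = E @ w
# ... and each user k combines the responses of the servers it hears from.
decoded = D @ responses
assert np.allclose(decoded, F @ w)
print(decoded)   # [f_1(w), f_2(w)]
```

The sparsity of $\mathbf{E}$ and $\mathbf{D}$ is what controls the computation and communication costs in the general problem; the toy matrices above only illustrate the decomposition and decoding steps.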