Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models


Achieving differential privacy (DP) guarantees in fully decentralized machine learning is challenging due to the absence of a central aggregator and varying trust assumptions among nodes. We present a framework for DP analysis of decentralized gossip-based averaging algorithms with additive node-level noise, from arbitrary views of nodes in a graph. The framework is based on a linear systems formulation that accurately characterizes privacy leakage between nodes. Our main contribution is showing that the DP guarantees are those of a Gaussian mechanism whose squared sensitivity grows asymptotically as $O(T)$, where $T$ is the number of training rounds, just as under central aggregation. As an application of the sensitivity analysis, we show that the excess risk of decentralized private learning for strongly convex losses is asymptotically the same as in centralized private learning.


💡 Research Summary

This paper tackles the challenging problem of providing differential privacy (DP) guarantees in fully decentralized machine‑learning settings where there is no central parameter server and trust assumptions vary across nodes. The authors focus on gossip‑based averaging protocols, a class of algorithms in which all nodes simultaneously exchange model updates with their neighbors in synchronous rounds. By interpreting the dynamics of gossip averaging as a discrete‑time linear state‑space system, they cast the entire T‑round protocol as a large block‑lower‑triangular linear transformation of the initial state and the injected noise. This representation allows the authors to view the overall mechanism as a projected Gaussian mechanism of the form M(D)=f(D)+A·Z, where A aggregates the effect of the gossip matrix over time and Z is a vector of independent Gaussian noises added at each node per round.
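As a minimal illustration of this state-space view, the sketch below uses a hypothetical 4-node ring with an assumed symmetric doubly stochastic gossip matrix `W` (not the paper's exact setup). Fixing the per-round noise and running the protocol on two neighboring initial states confirms numerically that the T-round output is affine in the data, with the data difference propagated by `W^T`:

```python
import numpy as np

# Hypothetical 4-node ring; W is symmetric and doubly stochastic (assumption).
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
n, T, sigma = 4, 8, 0.5
rng = np.random.default_rng(0)
noise = rng.normal(scale=sigma, size=(T, n))  # z_t, held fixed across both runs

def gossip(x0):
    """T synchronous noisy gossip rounds: x_t = W (x_{t-1} + z_t)."""
    x = x0.copy()
    for t in range(T):
        x = W @ (x + noise[t])
    return x

x0 = rng.normal(size=n)
x0_alt = x0.copy()
x0_alt[0] += 1.0  # neighbouring dataset: node 0's value shifted by one unit

# With the noise fixed, the output difference is exactly W^T (x0_alt - x0),
# confirming the affine state-space view M(D) = f(D) + A Z, where A collects
# powers of the gossip matrix W.
out_diff = gossip(x0_alt) - gossip(x0)
expected = np.linalg.matrix_power(W, T) @ (x0_alt - x0)
print(np.allclose(out_diff, expected))  # True
```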

A key technical contribution is Lemma 6, which shows that when the difference f(D)−f(D′) lies in the range of A, the ℓ₂‑sensitivity of the mechanism is exactly the norm of the Moore–Penrose pseudoinverse of A applied to that difference. Using spectral properties of primitive, symmetric, doubly‑stochastic gossip matrices, the authors prove that the transient component of the gossip matrix decays exponentially (‖Rᵗ‖₂ ≤ ρᵗ for some 0<ρ<1). Consequently, the norm of A grows only as √T, implying that the squared sensitivity ∆₂² scales linearly with the number of training rounds T. This matches the O(T) growth observed in centralized aggregation and dramatically improves upon earlier decentralized analyses that reported O(T²) growth (e.g., Cyffers et al., 2022).
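The O(T) growth of the squared sensitivity can be checked numerically. The sketch below (same hypothetical 4-node ring; the block-matrix construction and per-round unit difference `delta` are our illustrative assumptions, not the paper's exact formulation) builds the block-lower-triangular noise matrix A for the released trajectory, computes the minimum-norm preimage A⁺(f(D) − f(D′)) via least squares, and prints the squared sensitivity per round:

```python
import numpy as np

def gossip_noise_matrix(W, T):
    """Block lower-triangular A mapping stacked per-round noise
    Z = (z_1, ..., z_T) to the released trajectory (x_1, ..., x_T)
    of x_t = W (x_{t-1} + z_t)."""
    n = W.shape[0]
    Wpow = [np.eye(n)]
    for _ in range(T):
        Wpow.append(W @ Wpow[-1])
    A = np.zeros((n * T, n * T))
    for t in range(T):          # trajectory block for x_{t+1}
        for s in range(t + 1):  # noise z_{s+1} enters with coefficient W^{t-s+1}
            A[t*n:(t+1)*n, s*n:(s+1)*n] = Wpow[t - s + 1]
    return A

# Hypothetical 4-node ring with a symmetric doubly stochastic gossip matrix.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
n = 4
delta = np.zeros(n)
delta[0] = 1.0  # one node's per-round local update changes by delta

sens2 = {}
for T in (5, 10, 20):
    A = gossip_noise_matrix(W, T)
    # Neighbouring datasets differ by delta in every round, so the trajectory
    # difference f(D) - f(D') equals A @ (delta, ..., delta).
    diff = A @ np.tile(delta, T)
    Z, *_ = np.linalg.lstsq(A, diff, rcond=None)  # min-norm preimage A^+ diff
    sens2[T] = float(np.sum(Z**2))
    print(T, sens2[T] / T)  # squared sensitivity grows roughly linearly in T
```

The minimum-norm solution is never larger than the feasible choice Z = (δ, …, δ), so the squared sensitivity is at most T·‖δ‖², matching the O(T) bound.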

The paper systematically examines three threat models: (i) a naïve model where each node observes the raw messages of its neighbors, (ii) a model incorporating a secure summation protocol so that a node only sees the sum of its two neighbors’ messages, and (iii) a model allowing observers to add their own independent noise. In the secure‑summation case, the effective projection matrix A has reduced rank, leading to a smaller pseudoinverse norm and therefore stronger privacy amplification. The authors also discuss how allowing observers to inject noise further reduces the effective µ‑GDP parameter, yielding tighter (ε,δ) guarantees.
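The effect of secure summation on the per-round observation operator can be illustrated directly. In this hedged sketch (node indices and the unit data difference are our assumptions), node 0 of a hypothetical 4-node ring observes its two neighbours' messages either individually (naïve model) or only as a sum; the rank drop shrinks the projection of a data difference onto the observable subspace, i.e. the effective sensitivity:

```python
import numpy as np

n = 4
# Naive view: node 0 observes each neighbour's (nodes 1 and 3) message separately.
S_naive = np.zeros((2, n))
S_naive[0, 1] = 1.0
S_naive[1, 3] = 1.0
# Secure-summation view: node 0 only observes the sum of the two messages.
S_sum = np.array([[0.0, 1.0, 0.0, 1.0]])

# Per-round data difference: neighbour 1's message changes by one unit.
delta = np.zeros(n)
delta[1] = 1.0

# Effective squared sensitivity of the observed mechanism S(x + z):
# the squared norm of the projection of delta onto the row space of S.
sens2_naive = float(np.sum((np.linalg.pinv(S_naive) @ (S_naive @ delta)) ** 2))
sens2_sum = float(np.sum((np.linalg.pinv(S_sum) @ (S_sum @ delta)) ** 2))
print(sens2_naive, sens2_sum)  # 1.0 0.5
```

The rank-1 secure-summation view halves the effective squared sensitivity here, consistent with the reduced pseudoinverse norm described above.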

Beyond privacy, the authors analyze utility for strongly convex loss functions. By combining their sensitivity bound with existing convergence results for noisy decentralized learning (Koloskova et al., 2020), they show that the excess risk of the private gossip algorithm scales as O(1/√n + √T/σ), where σ is the noise standard deviation. When σ is chosen appropriately, this excess risk matches that of centralized DP learning, demonstrating that privacy does not necessarily come at a higher utility cost in the decentralized setting.
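To make the noise-scaling argument concrete, the following sketch (all parameter values are illustrative assumptions, not the paper's) calibrates the noise scale as σ ∝ √T against the O(T) squared sensitivity, which yields a constant µ-GDP parameter, and converts it to an (ε, δ) guarantee via the standard Gaussian-DP conversion of Dong et al.:

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gdp_delta(mu, eps):
    """delta(eps) for a mu-GDP mechanism (Dong et al. conversion)."""
    return Phi(-eps / mu + mu / 2) - exp(eps) * Phi(-eps / mu - mu / 2)

T, clip = 1000, 1.0        # rounds and per-round L2 norm bound (assumptions)
delta2 = T * clip**2       # squared sensitivity grows as O(T)
sigma = 2.0 * sqrt(T)      # noise scale chosen proportional to sqrt(T)
mu = sqrt(delta2) / sigma  # Gaussian-mechanism GDP parameter: here mu = 0.5
print(gdp_delta(mu, 1.0))  # the (eps=1, delta) guarantee for this mu
```

Because both √Δ₂² and σ grow as √T, the µ-GDP parameter, and hence the (ε, δ) guarantee, stays constant as the number of rounds increases.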

The work is positioned relative to recent literature. While Bellet et al. (2025) also use a linear‑systems viewpoint, they focus on the matrix‑mechanism formulation for adaptive compositions and do not derive explicit convergence guarantees. In contrast, the present paper directly leverages the state‑space representation to obtain closed‑form sensitivity growth and to quantify the benefit of secure aggregation.

In summary, the authors provide a rigorous analytical framework that (1) models gossip averaging as a linear dynamical system, (2) derives an O(T) sensitivity bound leading to Gaussian‑mechanism DP guarantees, (3) evaluates privacy under multiple realistic threat models including secure summation, and (4) shows that the resulting excess risk is asymptotically equivalent to that of centralized DP learning. The results suggest that fully decentralized learning can achieve privacy and utility on par with centralized approaches, provided that appropriate noise scaling and, when possible, secure aggregation primitives are employed. Future work may extend the analysis to asynchronous gossip, heterogeneous graphs, and empirical validation on large‑scale networks.

