FuSeFL: Fully Secure and Scalable Federated Learning


Federated Learning (FL) enables collaborative model training without centralizing client data, making it attractive for privacy-sensitive domains. While existing approaches employ cryptographic techniques such as homomorphic encryption, differential privacy, or secure multiparty computation to mitigate inference attacks, including model inversion, membership inference, and gradient leakage, they often suffer from high computational and memory overheads. Moreover, many methods overlook the confidentiality of the global model itself, which may be proprietary and sensitive. These challenges limit the practicality of secure FL, especially in settings that involve large datasets and strict compliance requirements. We present FuSeFL, a Fully Secure and scalable FL scheme, which decentralizes training across client pairs using lightweight MPC, while confining the server’s role to secure aggregation, client pairing, and routing. This design eliminates server bottlenecks, avoids full data offloading, and preserves full confidentiality of data, model, and updates throughout training. Our experiments show that FuSeFL defends against unauthorized observation, reconstruction attacks, and inference attacks such as gradient leakage, membership inference, and inversion attacks, while achieving up to 13× speedup in training time and 50% lower server memory usage compared to our baseline.


💡 Research Summary

The paper introduces FuSeFL, a novel federated learning (FL) framework that simultaneously guarantees end‑to‑end confidentiality of both user data and the global model while achieving practical scalability. Existing privacy‑preserving FL approaches—homomorphic encryption (HE), differential privacy (DP), or secure multiparty computation (MPC)—typically protect only the data, leave the model exposed to clients, and impose heavy computational or memory burdens on the central server. Moreover, many solutions assume unrealistic trust relationships or require full off‑loading of datasets to third‑party servers, which contradicts the core principle of FL.

FuSeFL’s core idea is to decentralize the heavy cryptographic work to dynamically formed client pairs. In each training round, the server secret‑shares the current global model between two clients. Each client also secret‑shares its local dataset, and the two clients jointly perform a lightweight MPC protocol to train on the combined secret‑shared model and data. Throughout this process, no participant ever sees the model or data in plaintext. After local training, the pair produces a secret‑shared model update, which is sent to two aggregation servers. One server is assumed to be trusted, the other honest‑but‑curious; they do not collude with each other or with any client. Using MPC‑based secure aggregation, the servers combine the updates without learning their values and return a new secret‑shared global model to the next round.
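The secret-sharing flow described above can be illustrated with a minimal additive-secret-sharing sketch. This is not the paper's actual protocol (which operates on full model tensors with MPC-based training); the modulus, function names, and scalar updates here are illustrative assumptions, showing only why neither aggregation server learns an individual update while the sum of updates is still recoverable.

```python
import random

P = 2**61 - 1  # large prime modulus; an illustrative choice, not taken from the paper


def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares mod P; each share alone is uniformly random."""
    r = random.randrange(P)
    return r, (value - r) % P


def reconstruct(s0: int, s1: int) -> int:
    """Recombine two additive shares."""
    return (s0 + s1) % P


# Each client pair produces a secret-shared update (a single scalar here for brevity;
# in practice every model parameter would be shared this way).
updates = [7, 3, 12]
shares = [share(u) for u in updates]

# Server 0 and server 1 each sum only the shares they were sent...
agg0 = sum(s0 for s0, _ in shares) % P
agg1 = sum(s1 for _, s1 in shares) % P

# ...so only the aggregate, never any individual update, is ever reconstructed.
assert reconstruct(agg0, agg1) == sum(updates) % P
```

Because each share in isolation is uniformly random, a single non-colluding server learns nothing about any client pair's update, which matches the summary's non-collusion assumption between the two aggregation servers.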

The trusted server also handles client pairing and anonymous routing. By using metadata‑driven, risk‑aware grouping, it minimizes repeated pairings of the same clients across rounds, thereby reducing the chance of intra‑group collusion. Because each server stores only one secret‑shared update per client pair, memory usage is roughly halved compared with prior schemes such as AriaNN‑FL, which keep a full model copy per client. The computational load on the server is limited to lightweight aggregation and pairing, eliminating the bottleneck that plagues many secure FL systems.
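The pairing goal above can be sketched with a simple greedy heuristic that prefers client pairs that have met least often. This is only an illustration of the "minimize repeated pairings" objective; the paper's metadata-driven, risk-aware grouping is richer, and the `pair_clients` function and `history` structure here are assumptions of this sketch.

```python
import itertools


def pair_clients(clients: list[str], history: dict[frozenset, int]) -> list[tuple[str, str]]:
    """Greedily pair clients, preferring pairs that have been matched least often.

    `history` maps frozenset({a, b}) -> number of past rounds a and b were paired.
    Illustrative heuristic only, not the paper's grouping algorithm.
    """
    unpaired = set(clients)
    pairs = []
    # Visit candidate pairs least-used first (sorted() is stable, so ties keep order).
    for a, b in sorted(itertools.combinations(sorted(clients), 2),
                       key=lambda p: history.get(frozenset(p), 0)):
        if a in unpaired and b in unpaired:
            pairs.append((a, b))
            unpaired -= {a, b}
            history[frozenset((a, b))] = history.get(frozenset((a, b)), 0) + 1
    return pairs


history: dict[frozenset, int] = {}
round1 = pair_clients(["c1", "c2", "c3", "c4"], history)
round2 = pair_clients(["c1", "c2", "c3", "c4"], history)
# With four clients there is an alternative matching, so round2 repeats none of round1's pairs.
```

Avoiding repeat pairings limits how much any single colluding pair could accumulate about a specific partner across rounds, which is the intuition behind the risk-aware grouping described above.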

Security analysis shows that FuSeFL protects against gradient leakage, membership inference, and model inversion attacks. Data confidentiality is ensured by secret‑sharing of raw inputs; model confidentiality is guaranteed because the global model never appears in plaintext outside the trusted boundary; and update privacy is maintained through secret‑shared aggregation. The threat model assumes semi‑honest participants and that at least one aggregation server is trustworthy; under these assumptions, the protocol provides provable privacy guarantees. The paper explicitly acknowledges that Byzantine attacks, data poisoning, and side‑channel attacks are out of scope and suggests future work to incorporate robust aggregation or verifiable computation.

Experimental evaluation on standard benchmarks (e.g., MNIST, CIFAR‑10) demonstrates that FuSeFL achieves up to 13× speedup in training time and 9× speedup in aggregation compared with state‑of‑the‑art secure FL systems (AriaNN‑FL, WW‑FL). Server memory consumption is reduced by nearly 50%, and model accuracy is comparable to, or slightly better than, the baselines (up to a 1.1% improvement on MNIST). The results confirm that FuSeFL scales linearly with the number of clients, maintaining high throughput even with thousands of participants.

In summary, FuSeFL presents a practical, fully secure FL solution that protects both data and model assets while overcoming the scalability and performance limitations of prior cryptographic FL approaches. Its client‑pair MPC design, dual‑server secret‑shared aggregation, and lightweight server responsibilities make it a compelling candidate for deployment in regulated domains such as healthcare, finance, and other sectors where both privacy compliance and intellectual‑property protection are critical.

