Fedcompass: Federated Clustered and Periodic Aggregation Framework for Hybrid Classical-Quantum Models

Notice: This research summary and analysis were automatically generated using AI technology. For complete accuracy, please refer to the original arXiv source.

Federated learning enables collaborative model training across decentralized clients under privacy constraints. Quantum computing offers potential for alleviating computational and communication burdens in federated learning, yet hybrid classical-quantum federated learning remains susceptible to performance degradation under non-IID data. To address this, we propose FEDCOMPASS, a layered aggregation framework for hybrid classical-quantum federated learning. FEDCOMPASS employs spectral clustering to group clients by class-distribution similarity and performs cluster-wise aggregation for classical feature extractors. For quantum parameters, it uses circular-mean aggregation combined with adaptive optimization to ensure stable global updates. Experiments on three benchmark datasets show that FEDCOMPASS improves test accuracy by up to 10.22% and enhances convergence stability under non-IID settings, outperforming six strong federated learning baselines.


💡 Research Summary

FEDCOMPASS introduces a layered aggregation framework tailored for hybrid classical‑quantum federated learning under non‑IID data distributions. The authors first identify two fundamental challenges: (1) classical feature extractors suffer from client‑wise distribution drift, and (2) quantum circuit parameters are periodic, making naïve arithmetic averaging unstable. To mitigate the first issue, each client computes a C‑dimensional class‑proportion vector of its local data and sends it to the server. The server builds a similarity matrix that combines Jensen‑Shannon divergence of the class vectors with a term reflecting sample‑size disparity, weighted by hyperparameters λ₁ and λ₂. Spectral clustering on the normalized Laplacian followed by K‑means yields M client clusters that share similar data statistics. Within each cluster, classical parameters are aggregated using a sample‑size‑weighted mean, preserving intra‑cluster homogeneity while still allowing inter‑cluster diversity.
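The clustering-and-aggregation pipeline described above can be sketched as follows. Note that the summary does not give the paper's exact formulas, so the Gaussian-style kernel combining the Jensen-Shannon and sample-size terms, the farthest-point k-means initialization, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two class-proportion vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity_matrix(class_props, sample_counts, lam1=1.0, lam2=1.0):
    # Assumed kernel: JS divergence and normalized sample-size disparity
    # are folded into one exponential similarity, weighted by lam1 / lam2.
    counts = np.asarray(sample_counts, dtype=float)
    n = len(class_props)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d_js = js_divergence(class_props[i], class_props[j])
            d_sz = abs(counts[i] - counts[j]) / counts.max()
            S[i, j] = np.exp(-(lam1 * d_js + lam2 * d_sz))
    return S

def spectral_cluster(S, m, n_iter=50):
    # Normalized Laplacian: L_sym = I - D^{-1/2} S D^{-1/2}
    d = S.sum(axis=1)
    D_is = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D_is @ S @ D_is
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    U = vecs[:, :m]                      # embedding: m smallest eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    # Plain k-means on the embedding rows (farthest-point init, deterministic)
    centers = [U[0]]
    for _ in range(1, m):
        dist = np.min([((U - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(U[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(m):
            if np.any(labels == k):
                centers[k] = U[labels == k].mean(axis=0)
    return labels

def cluster_aggregate(client_params, sample_counts, labels):
    """Sample-size-weighted mean of classical parameters within each cluster."""
    agg = {}
    for k in sorted(set(labels.tolist())):
        idx = [i for i, lab in enumerate(labels) if lab == k]
        w = np.array([sample_counts[i] for i in idx], dtype=float)
        w /= w.sum()
        agg[k] = sum(wi * client_params[i] for wi, i in zip(w, idx))
    return agg
```

With two groups of clients whose class-proportion vectors have disjoint support, the affinity matrix is near-block-diagonal and the two smallest Laplacian eigenvectors separate the groups cleanly.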

For the quantum part, the framework treats each rotation angle as a point on the unit circle. Client weights ωᵢ (proportional to local sample counts) are applied to the sine and cosine components, and the circular mean is computed via atan2 of the summed components. This operation respects the periodic nature of quantum angles, avoiding the wrap‑around errors that plague ordinary averaging. The aggregated quantum vector is then refined with a FedAdam‑style adaptive optimizer: momentum and second‑moment estimates are updated, bias‑corrected, and used to adjust the global quantum parameters with a learning‑rate η and a small ε for numerical stability.
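A minimal sketch of the quantum-side aggregation, assuming a standard Adam update on the server: the circular mean is computed exactly as described (weighted sine/cosine sums fed to atan2), while the pseudo-gradient definition and the hyperparameter values (η, β₁, β₂, ε) are assumptions not stated in the summary.

```python
import numpy as np

def circular_mean(client_angles, weights):
    """Weighted circular mean of quantum rotation angles.
    client_angles: (n_clients, n_params) array; weights sum to 1."""
    w = np.asarray(weights, dtype=float)[:, None]
    s = (w * np.sin(client_angles)).sum(axis=0)
    c = (w * np.cos(client_angles)).sum(axis=0)
    return np.arctan2(s, c)  # respects the 2*pi periodicity of the angles

class FedAdamQuantum:
    """FedAdam-style server optimizer for the aggregated quantum vector.
    Default hyperparameters are conventional Adam values, not the paper's."""
    def __init__(self, dim, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)   # first-moment (momentum) estimate
        self.v = np.zeros(dim)   # second-moment estimate
        self.t = 0
        self.eta, self.b1, self.b2, self.eps = eta, beta1, beta2, eps

    def step(self, theta_global, theta_agg):
        # Pseudo-gradient: shortest angular difference toward the circular mean
        delta = np.arctan2(np.sin(theta_agg - theta_global),
                           np.cos(theta_agg - theta_global))
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * delta
        self.v = self.b2 * self.v + (1 - self.b2) * delta ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return theta_global + self.eta * m_hat / (np.sqrt(v_hat) + self.eps)
```

The wrap-around problem is easy to see: for two clients holding angles 0.1 and 2π − 0.1, an arithmetic average lands near π, on the opposite side of the circle, while the circular mean correctly returns a value near 0.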

Experimental evaluation uses three benchmark image datasets (MNIST, Fashion‑MNIST, CIFAR‑10) each reduced to a four‑class task. Data are partitioned with a Dirichlet distribution using α = 0.3 (high heterogeneity) and α = 0.7 (moderate heterogeneity). Ten simulated clients perform five local epochs per communication round, with a batch size of 32, and the system runs for five global rounds. Classical backbones are LeNet for MNIST and the first two layers of ResNet‑18 for the other two datasets; quantum backbones are parameterized quantum circuits implemented in PennyLane. Baselines include FedAvg, FedProx, FedBN, FedPer, FedNova, and Scaffold.
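The Dirichlet-based non-IID partition used in the setup above is a standard construction; a minimal sketch follows. The function name and the seeding scheme are illustrative, but the role of α matches the text: smaller α yields more heterogeneous client shards.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.3, seed=0):
    """Split sample indices across clients with a per-class Dirichlet prior.
    Smaller alpha -> more skewed (non-IID) class distributions per client."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(shard.tolist())
    return client_idx
```

Each class's indices are carved into client shards proportional to a Dirichlet draw, so every sample is assigned exactly once while per-client class mixes vary with α.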

Results show that FEDCOMPASS consistently outperforms all baselines. On MNIST it reaches near‑perfect accuracy (≈99.7 %). On CIFAR‑10, the method achieves 77.00 % accuracy for α = 0.3—a 10.22 % absolute gain over FedAvg—and 80.10 % for α = 0.7, a 3.80 % gain. Fashion‑MNIST also sees modest improvements. Ablation studies reveal that removing the clustering step dramatically degrades performance, especially in later rounds, while omitting the circular mean leads to highly unstable convergence and accuracy hovering around 25 %. Convergence curves illustrate faster early‑stage learning and reduced variance across communication rounds, confirming the stability benefits of both the clustering and circular‑mean mechanisms.

The paper contributes a practical solution for federated learning in heterogeneous environments where quantum acceleration is desired. By aligning classical aggregation with data‑driven client clusters and respecting the periodic geometry of quantum parameters, FEDCOMPASS bridges a gap between privacy‑preserving distributed training and quantum‑enhanced computation. Limitations include the need to pre‑specify the number of clusters, the lack of real‑hardware quantum communication cost analysis, and evaluation over a relatively small number of communication rounds. Future work could explore adaptive cluster number selection, deeper quantum circuit designs, integration with actual NISQ devices, and scaling to larger client populations and more communication rounds. Overall, FEDCOMPASS demonstrates that thoughtful, domain‑specific aggregation strategies can unlock the synergistic potential of classical and quantum models in federated settings.

