Toward Enhancing Representation Learning in Federated Multi-Task Settings


Federated multi-task learning (FMTL) seeks to collaboratively train customized models for users with different tasks while preserving data privacy. Most existing approaches assume model congruity (i.e., the use of fully or partially homogeneous models) across users, which limits their applicability in realistic settings. To overcome this limitation, we aim to learn a shared representation space across tasks rather than shared model parameters. To this end, we propose Muscle loss, a novel contrastive learning objective that simultaneously aligns representations from all participating models. Unlike existing multi-view or multi-model contrastive methods, which typically align models pairwise, Muscle loss can effectively capture dependencies across tasks because its minimization is equivalent to the maximization of mutual information among all the models’ representations. Building on this principle, we develop FedMuscle, a practical and communication-efficient FMTL algorithm that naturally handles both model and task heterogeneity. Experiments on diverse image and language tasks demonstrate that FedMuscle consistently outperforms state-of-the-art baselines, delivering substantial improvements and robust performance across heterogeneous settings.


💡 Research Summary

The paper tackles a fundamental limitation of existing federated multi‑task learning (FMTL) methods: they assume that all participants either share the same model architecture or at least a partially homogeneous one. In realistic scenarios, especially with the rise of foundation models, users may select vastly different architectures (e.g., CNNs, Vision Transformers, lightweight MLPs) and train them on heterogeneous tasks such as image classification, semantic segmentation, or text classification. The authors therefore reformulate the FMTL objective from “parameter sharing” to “learning a shared representation space” that can bridge both model and task heterogeneity.

To achieve this, they introduce Muscle loss, a novel contrastive learning objective that simultaneously aligns representations from all participating models. Unlike traditional InfoNCE-based pairwise alignment, Muscle loss treats an N-tuple of representations (one from each model) as a positive sample when all components correspond to the same public data instance, and any N-tuple containing at least one mismatched instance as a negative. Minimizing this objective is equivalent to maximizing the mutual information among all the models' representations, which is what lets it capture dependencies across all tasks jointly rather than pairwise.
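One plausible form of such an N-wise generalization of InfoNCE, consistent with this description, is the following sketch (the tuple score $h$, the temperature $\tau$, the batch size $M$, and the choice of summed pairwise cosine similarities are illustrative assumptions, not taken from the paper):

$$
\mathcal{L}_{\text{Muscle}} \;=\; -\frac{1}{M}\sum_{i=1}^{M} \log \frac{\exp\!\big(h(z_i^{1},\dots,z_i^{N})/\tau\big)}{\sum_{j_1,\dots,j_N=1}^{M} \exp\!\big(h(z_{j_1}^{1},\dots,z_{j_N}^{N})/\tau\big)},
\qquad
h(z^{1},\dots,z^{N}) \;=\; \sum_{k<l} \cos\!\big(z^{k}, z^{l}\big),
$$

where $z_i^{k}$ is model $k$'s representation of public instance $i$: the numerator scores the single matched N-tuple for instance $i$, while the denominator ranges over all $M^N$ tuples, including every mismatched (negative) combination.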


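As a concrete illustration, here is a minimal NumPy sketch of this kind of N-wise contrastive objective, assuming summed pairwise cosine similarity as the tuple score and brute-force enumeration of all M^N tuples; the function name, similarity choice, and temperature are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def muscle_loss_sketch(reps, tau=0.5):
    """Sketch of an N-wise contrastive loss in the spirit of Muscle loss.

    `reps` is a list of N arrays, each of shape (M, d), L2-normalized:
    reps[k][i] is model k's representation of public instance i. The
    N-tuple (z^1_i, ..., z^N_i) is the positive for instance i; every
    tuple with at least one mismatched index is a negative. The tuple
    score (sum of pairwise cosine similarities) and temperature are
    assumptions. Enumerates all M**N tuples, so only feasible for tiny
    M and N.
    """
    N, M = len(reps), reps[0].shape[0]

    def tuple_score(idx):
        # Sum of pairwise inner products between the N chosen vectors.
        return sum(reps[k][idx[k]] @ reps[l][idx[l]]
                   for k in range(N) for l in range(k + 1, N))

    # Logits of all M**N candidate tuples (the denominator is shared
    # by every instance i, since it ranges over every combination).
    logits = np.array([tuple_score(idx) / tau
                       for idx in np.ndindex(*([M] * N))])
    m = np.max(logits)
    lse = m + np.log(np.sum(np.exp(logits - m)))  # stable log-sum-exp

    # Average InfoNCE-style loss over the M positive (matched) tuples.
    return float(np.mean([lse - tuple_score((i,) * N) / tau
                          for i in range(M)]))
```

With orthonormal, perfectly aligned representations the matched tuple dominates the denominator, so the loss is lower than when one model's representations are shuffled across instances.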