MiLorE-SSL: Scaling Multilingual Capabilities in Self-Supervised Models without Forgetting

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Self-supervised learning (SSL) has greatly advanced speech representation learning, but multilingual SSL models remain constrained to the languages encountered during pretraining. Retraining from scratch to incorporate new languages is computationally expensive, while sequential training without mitigation strategies often leads to catastrophic forgetting. To address this, we propose MiLorE-SSL, a lightweight framework that combines LoRA modules with a soft mixture-of-experts (MoE) mechanism for efficient continual multilingual training. LoRA provides efficient low-rank adaptation, while soft MoE promotes flexible expert sharing across languages, reducing cross-lingual interference. To further mitigate forgetting, we introduce limited replay data from existing languages, avoiding reliance on large historical corpora. Experiments on ML-SUPERB demonstrate that MiLorE-SSL achieves strong performance on new languages and even improves performance on existing ones with only 2.14% trainable parameters.


💡 Research Summary

The paper addresses the challenge of extending self‑supervised speech models to new languages without retraining the entire network from scratch. Existing multilingual SSL models such as XLSR, mHuBERT‑147, and MMS require full‑scale pre‑training whenever additional languages appear, which is computationally prohibitive. Moreover, naïve sequential fine‑tuning leads to catastrophic forgetting of previously learned languages. To solve these problems, the authors propose MiLorE‑SSL, a lightweight continual‑learning framework that combines Low‑Rank Adaptation (LoRA) with a soft Mixture‑of‑Experts (MoE) routing mechanism, and augments this with a limited replay strategy.

Core Architecture
MiLorE‑SSL builds on a HuBERT‑based transformer. Each transformer block's feed‑forward network (FFN) is frozen and wrapped into a MiLorE module that replaces the original FFN sublayer. The MiLorE module consists of:

  1. The frozen FFN backbone (weights $W_0$).
  2. A set of $N$ LoRA experts $E_i$. Each expert is parameterized by two low‑rank matrices $A_i \in \mathbb{R}^{r \times d_{in}}$ and $B_i \in \mathbb{R}^{d_{out} \times r}$ with rank $r \ll \min(d_{in}, d_{out})$. The effective weight update for expert $i$ is $\Delta W_{E_i} = B_i A_i$.
  3. A trainable router that computes soft routing weights $p_i = \mathrm{softmax}(W_r h_{in})_i$ for a given hidden state $h_{in}$.

The output of a MiLorE block combines the frozen backbone with the softly weighted expert updates:

$$h_{out} = W_0 h_{in} + \sum_{i=1}^{N} p_i \, B_i A_i h_{in}$$
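The structure above can be sketched in PyTorch. This is a hypothetical minimal implementation, not the authors' code: the class name, dimensions, and initialization choices are assumptions; a plain `nn.Linear` stands in for the pretrained HuBERT FFN.

```python
# Minimal sketch of a MiLorE block (hypothetical, not the authors' code):
# a frozen FFN backbone plus N soft-routed LoRA experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiLorEBlock(nn.Module):
    def __init__(self, d_in: int, d_out: int, n_experts: int = 4, rank: int = 8):
        super().__init__()
        # Frozen FFN backbone W_0 (stands in for the pretrained weights).
        self.ffn = nn.Linear(d_in, d_out)
        for p in self.ffn.parameters():
            p.requires_grad = False
        # N LoRA experts: Delta W_i = B_i A_i with rank r << min(d_in, d_out).
        # B starts at zero so the block initially matches the frozen backbone.
        self.A = nn.Parameter(torch.randn(n_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        # Trainable router producing soft weights p = softmax(W_r h_in).
        self.router = nn.Linear(d_in, n_experts)

    def forward(self, h_in: torch.Tensor) -> torch.Tensor:
        p = F.softmax(self.router(h_in), dim=-1)              # (..., N)
        low = torch.einsum("nrd,...d->...nr", self.A, h_in)   # (..., N, r)
        exp = torch.einsum("nor,...nr->...no", self.B, low)   # (..., N, d_out)
        delta = (p.unsqueeze(-1) * exp).sum(dim=-2)           # (..., d_out)
        return self.ffn(h_in) + delta

block = MiLorEBlock(d_in=16, d_out=16)
x = torch.randn(2, 5, 16)  # (batch, time, d_in)
y = block(x)
```

Note that only the router and the LoRA matrices `A`/`B` carry gradients, which is how the framework keeps the trainable fraction small (the paper reports 2.14% at full model scale).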
