ABBA-Adapters: Efficient and Expressive Fine-Tuning of Foundation Models
Large Language Models have demonstrated strong performance across a wide range of tasks, but adapting them efficiently to new domains remains a key challenge. Parameter-Efficient Fine-Tuning (PEFT) methods address this by introducing lightweight, trainable modules while keeping most pre-trained weights fixed. The prevailing approach, LoRA, models updates using a low-rank decomposition, but its expressivity is inherently constrained by the rank. Recent methods like HiRA aim to increase expressivity by incorporating a Hadamard product with the frozen weights, but still rely on the structure of the pre-trained model. We introduce ABBA, a new PEFT architecture that reparameterizes the update as a Hadamard product of two independently learnable low-rank matrices. In contrast to prior work, ABBA fully decouples the update from the pre-trained weights, enabling both components to be optimized freely. This leads to significantly higher expressivity under the same parameter budget, a property we validate through matrix reconstruction experiments. Empirically, ABBA achieves state-of-the-art results on arithmetic and commonsense reasoning benchmarks, consistently outperforming existing PEFT methods by a significant margin across multiple models. Our code is publicly available at: https://github.com/CERT-Lab/abba.
💡 Research Summary
The paper introduces ABBA‑Adapters, a novel parameter‑efficient fine‑tuning (PEFT) architecture designed to overcome the expressivity limits of existing methods such as LoRA and HiRA. LoRA models the weight update ΔW as a low‑rank product B·A, which restricts the update to a subspace of rank r. HiRA lifts this restriction by element‑wise multiplying a low‑rank update with the frozen pretrained matrix W₀ (ΔW = W₀ ⊙ (B·A)), allowing a nominal rank up to r·rank(W₀) but still tying the update to the structure of W₀, which can hinder generalization.
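The rank behavior described above can be checked numerically. Below is a rough sketch (using NumPy, with hypothetical dimensions d = 64 and rank r = 4, not values from the paper) contrasting the LoRA and HiRA update forms:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4

# A random dense matrix stands in for the frozen pretrained weight W0.
W0 = rng.standard_normal((d, d))

# LoRA: delta_W = B @ A is confined to a rank-r subspace.
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, d))
delta_lora = B @ A
print(np.linalg.matrix_rank(delta_lora))  # 4

# HiRA: delta_W = W0 ⊙ (B @ A). The Hadamard product obeys
# rank(X ⊙ Y) <= rank(X) * rank(Y), so the update can greatly
# exceed rank r -- but its entries remain tied to W0's structure.
delta_hira = W0 * (B @ A)
print(np.linalg.matrix_rank(delta_hira))  # generically far above r
```

For a generic full-rank W₀, the HiRA update recovers high rank despite using the same trainable parameters as LoRA, which illustrates the nominal r·rank(W₀) bound mentioned above.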
ABBA re‑parameterizes the update as the Hadamard (element‑wise) product of two independently learnable low‑rank matrices: ΔW = (B₁·A₁) ⊙ (B₂·A₂). Because both low‑rank factor pairs are trained freely, the update is fully decoupled from W₀, and by the Hadamard rank bound its effective rank can reach r₁·r₂ under a parameter budget comparable to a LoRA of rank r₁ + r₂.
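A minimal sketch of the ABBA update form follows (NumPy, with hypothetical shapes chosen for illustration; the authors' actual implementation is in the linked repository):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r1, r2 = 64, 4, 4

# Two independently learnable low-rank pairs, neither tied to W0.
B1 = rng.standard_normal((d, r1))
A1 = rng.standard_normal((r1, d))
B2 = rng.standard_normal((d, r2))
A2 = rng.standard_normal((r2, d))

# ABBA: delta_W = (B1 @ A1) ⊙ (B2 @ A2).
delta_abba = (B1 @ A1) * (B2 @ A2)

# Parameter count matches a LoRA of rank r1 + r2 = 8, but the
# Hadamard product can realize rank up to r1 * r2 = 16.
print(np.linalg.matrix_rank(delta_abba))
```

For generic random factors, the measured rank exceeds what a LoRA update of the same parameter budget could express, which is the expressivity gain the matrix reconstruction experiments in the abstract are designed to validate.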