Context Parametrization with Compositional Adapters

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

Large language models (LLMs) often adapt seamlessly to new tasks through in-context learning (ICL) or supervised fine-tuning (SFT). However, ICL is inefficient when handling many demonstrations, and SFT incurs training overhead while sacrificing flexibility. Mapping instructions or demonstrations from context directly into adapter parameters offers an appealing alternative. While prior work has explored generating adapters from a single input context, it has overlooked the need to integrate multiple chunks of information. To address this gap, we introduce CompAs, a meta-learning framework that translates context into adapter parameters with a compositional structure that allows them to be merged algebraically. This approach yields three benefits: lower inference cost, improved stability under long contexts, and a principled way to handle inputs that exceed the model's context window. Furthermore, CompAs reversibly encodes information into adapter parameters, enabling recovery of the original input context and facilitating safety. Empirical results on diverse multiple-choice and extractive question answering tasks show that CompAs outperforms ICL and prior generator-based methods, especially when scaling to more inputs. Our work establishes composable adapter generation as a practical and efficient alternative for scaling LLM deployment.


💡 Research Summary

The paper introduces CompAs (Compositional Adapters), a meta‑learning framework that converts arbitrary textual contexts—such as instructions, demonstrations, or retrieved passages—into LoRA‑style adapter parameters, and then merges those adapters by simple addition in parameter space. The central idea is to replace long in‑context prompts with a set of compact adapters that encode the same information, allowing the model to answer queries using only the query tokens plus the summed adapters.
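The merge-by-addition idea can be sketched in plain NumPy. In the sketch below, each context chunk is assumed to yield a LoRA-style low-rank pair (A_i, B_i); the ranks, shapes, and random "generated" adapters are illustrative stand-ins, not the paper's actual hypernetwork outputs. The point is only that summing adapters in parameter space is equivalent to summing their individual contributions to the model's output.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and LoRA rank (illustrative values)

W0 = rng.normal(size=(d, d))  # frozen base weight of one linear layer

# Hypothetical per-chunk adapters: each context chunk i is mapped
# to a low-rank pair (A_i, B_i), so its weight delta is B_i @ A_i.
adapters = [
    (rng.normal(size=(r, d)), rng.normal(size=(d, r)))
    for _ in range(3)
]

# Compositional merge: adapters combine by simple addition in parameter space.
delta = sum(B @ A for A, B in adapters)
W_merged = W0 + delta

x = rng.normal(size=d)  # a query-token activation

# Applying the merged weight equals the base output plus the sum of
# each adapter's individual contribution (linearity of the update).
y_merged = W_merged @ x
y_sum = W0 @ x + sum(B @ (A @ x) for A, B in adapters)
assert np.allclose(y_merged, y_sum)
```

Because the update is linear in the deltas, merging n adapters costs one addition per weight matrix, and inference afterwards touches only the query tokens, with no long prompt to re-encode.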

The authors formalize a teacher‑student setting. The teacher model processes the concatenated input “

