Investigating Modality Contribution in Audio LLMs for Music
Audio Large Language Models (Audio LLMs) enable human-like conversation about music, yet recent benchmarks suggest it is unclear whether they truly listen to the audio or rely mainly on textual reasoning. This paper investigates this issue by quantifying the contribution of each modality to a model’s output. We adapt the MM-SHAP framework, a performance-agnostic score based on Shapley values that quantifies the relative contribution of each modality to a model’s prediction. We evaluate two models on the MuChoMusic benchmark and find that the model with higher accuracy relies more on text to answer questions. Further inspection, however, shows that even when the overall audio contribution is low, models can successfully localize key sound events, suggesting that audio is not entirely ignored. Our study is the first application of MM-SHAP to Audio LLMs, and we hope it will serve as a foundational step for future research in explainable AI and audio.
💡 Research Summary
This paper tackles the pressing question of whether audio large language models (Audio LLMs) truly “listen” to the audio input or rely predominantly on textual cues when answering music‑related queries. To move beyond accuracy‑only evaluations, the authors adapt the Multi‑Modal SHAP (MM‑SHAP) framework—originally designed for vision‑language models—to quantify the relative contribution of each modality (text vs. audio) to a model’s prediction using Shapley values.
Methodology
The authors treat each text token and each short segment of the raw waveform as individual features. By randomly permuting the feature order and masking subsets (text tokens replaced by a mask token, audio segments silenced), they estimate each feature's Shapley value, i.e., its average marginal effect on the model's output. Summing the absolute Shapley values within each modality then yields the relative share of the prediction attributable to text versus audio.
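The permutation procedure above can be sketched in a few lines. This is not the paper's implementation: `value_fn` stands in for a real Audio LLM scored on masked inputs, and feature names and the helper `mm_shap` are hypothetical. It is a minimal Monte Carlo estimator of Shapley values, followed by the MM-SHAP-style per-modality aggregation.

```python
import random

def shapley_values(features, value_fn, n_perms=200, seed=0):
    """Monte Carlo Shapley estimate: for each random ordering,
    add features one by one and credit each feature with its
    marginal change in the model score."""
    rng = random.Random(seed)
    phi = {f: 0.0 for f in features}
    for _ in range(n_perms):
        order = features[:]
        rng.shuffle(order)
        present = set()
        prev = value_fn(present)  # score with everything masked
        for f in order:
            present.add(f)  # unmask this feature
            cur = value_fn(present)
            phi[f] += cur - prev  # marginal contribution
            prev = cur
    return {f: v / n_perms for f, v in phi.items()}

def mm_shap(phi, text_feats):
    """Relative text contribution: sum of |phi| over text features
    divided by the sum of |phi| over all features."""
    total = sum(abs(v) for v in phi.values())
    text = sum(abs(phi[f]) for f in text_feats)
    return text / total if total else 0.0

# Toy additive "model": each text token adds 2.0, each audio
# segment adds 1.0 to the score (purely illustrative).
features = ["t1", "t2", "a1", "a2"]
def value_fn(present):
    return sum(2.0 if f.startswith("t") else 1.0 for f in present)

phi = shapley_values(features, value_fn)
text_share = mm_shap(phi, ["t1", "t2"])
```

Because the toy score is additive, the estimator recovers the exact weights (2.0 per text token, 1.0 per audio segment), giving a text share of 2/3; with a real Audio LLM the interactions between features make the Monte Carlo averaging over many permutations necessary.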