LLMs Explain't: A Post-Mortem on Semantic Interpretability in Transformer Models
Large Language Models (LLMs) are becoming increasingly popular in pervasive computing due to their versatility and strong performance. Despite their ubiquitous use, however, the exact mechanisms underlying this performance remain unclear. Various methods for LLM explainability exist, yet many of these methods are themselves not fully understood. We started from the question of how linguistic abstraction emerges in LLMs, aiming to detect it across different LLM modules (attention heads and input embeddings). For this, we used two methods well established in the literature: (1) probing for token-level relational structures, and (2) feature mapping that treats embeddings as carriers of human-interpretable properties. Both attempts failed, for different methodological reasons: attention-based explanations collapsed once we tested the core assumption that later-layer representations still correspond to tokens, and property-inference methods applied to embeddings failed because their high predictive scores were driven by methodological artifacts and dataset structure rather than meaningful semantic knowledge. These failures matter because both techniques are widely treated as evidence for what LLMs supposedly understand, yet our results show that such conclusions are unwarranted. The limitations are particularly relevant in pervasive and distributed computing settings, where LLMs are deployed as system components and interpretability methods are relied upon for debugging, compression, and model explanation.
💡 Research Summary
This paper conducts a rigorous post‑mortem on two of the most widely adopted interpretability pipelines for large language models (LLMs): attention‑based relational explanations and embedding‑based property inference. The authors begin by reproducing the standard experimental setups used in prior work, including extracting attention matrices from all layers and heads of transformer models (e.g., BERT) and training regression models (Partial Least Squares Regression and a single‑hidden‑layer feed‑forward network) to map type‑level embeddings onto human‑curated semantic feature norms (McRae, Buchanan, and Binder).
The central research question is whether the core assumptions behind these pipelines hold under systematic scrutiny. For attention‑based methods, the paper tests two implicit premises: (1) token continuity – that the hidden representation at a given position continues to correspond to the same lexical token across depth, and (2) attention interpretability – that attention weights directly reflect information flow or relational importance. By tracing representational flow through residual connections and MLP blocks, the authors demonstrate that token identity rapidly dissolves in deeper layers. Consequently, high‑attention edges (e.g., “dog → Labradoodle”) are shown to be artifacts of positional mixing rather than genuine linguistic relations. Moreover, visualizations still produce linguistically plausible patterns even for random inputs, confirming earlier reports of a “visualization fallacy.”
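The token-continuity premise can be checked directly by measuring how similar a position's hidden state stays to its layer-0 input embedding as depth increases. The sketch below uses random placeholder states; in practice the `hidden_states` array would come from a real model (e.g., a Hugging Face model called with `output_hidden_states=True`), and the function name and shapes are my own illustrative choices.

```python
# Hedged sketch of a token-continuity check: cosine similarity between
# the layer-0 embedding at a position and that position's hidden state
# at every later layer. Placeholder data; real states would come from
# a transformer run with hidden-state outputs enabled.
import numpy as np

def continuity_curve(hidden_states: np.ndarray, pos: int) -> np.ndarray:
    """Per-layer cosine similarity to the layer-0 state at `pos`.

    hidden_states: array of shape (n_layers, seq_len, dim),
    with index 0 holding the input embeddings.
    """
    base = hidden_states[0, pos]
    base = base / np.linalg.norm(base)
    sims = []
    for layer in hidden_states:
        v = layer[pos]
        sims.append(float(v @ base / np.linalg.norm(v)))
    return np.array(sims)

rng = np.random.default_rng(1)
hs = rng.normal(size=(13, 16, 64))   # 12 layers + embedding layer (placeholder)
curve = continuity_curve(hs, pos=3)
# A curve decaying toward ~0 with depth would indicate that the position
# no longer represents its original token, undermining attention edges
# read off deep layers as token-to-token relations.
```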
For embedding‑based property inference, the authors evaluate whether successful prediction of semantic attributes truly indicates that those attributes are encoded in the embedding space. They carefully control for dimensionality (k) and over‑fitting, then conduct a series of sanity‑check experiments: (a) upper‑bound mapping (embedding → itself) to gauge the methodological ceiling imposed by dataset sparsity, (b) random‑feature baselines, (c) shuffling of feature matrices, (d) taxonomic corruption, and (e) insertion of structured nonsense features (e.g., character‑length differences). Surprisingly, predictive performance remains high across many of these perturbations, and Spearman correlations for continuous norms are comparable to those obtained with genuine semantic features. The analysis reveals that the regressors are primarily exploiting geometric similarity among embeddings rather than any semantic overlap with the target attributes.
These findings have practical implications for pervasive and edge computing environments where LLMs are embedded as components of larger systems. In such settings, interpretability tools are often used to guide model pruning, quantization, knowledge distillation, debugging, and failure diagnosis. If the explanations are misleading—showing spurious token‑level relations or attributing semantic knowledge where none exists—deployment decisions may be suboptimal or even harmful.
The paper therefore argues that the prevailing “high performance = semantic understanding” narrative is unwarranted for both attention and embedding analyses. It calls for stricter sanity checks, explicit verification of token continuity, and careful accounting for dataset geometry when evaluating interpretability methods. Future work should explore alternative approaches that preserve token identity across depth, disentangle geometric effects from semantic content, and provide robust, theory‑grounded explanations suitable for resource‑constrained, distributed AI systems.