Instruction-Tuned, but Not More Verifiable Instruction-Following: A Cross-Task Diagnosis for LoRA Adapters
Adapters are often selected and deployed based on nominal labels (e.g., instruction-tuned) that implicitly suggest which capability improves after adaptation. We test whether nominal training objectives reliably align with realized cross-task capability gains by evaluating the same LoRA adapter across tasks. Our strongest evidence concerns strict, automatically verifiable instruction following as measured by IFEval: across multiple seeds, base models, and LoRA settings, nominal labels recurrently, though not universally, fail to predict improvements on this verifiable target, with clear configuration sensitivity including a near-zero or negative case. In our strongest controlled instruction-versus-numeric example, an instruction-tuned adapter substantially improves off-target numeric-match (NM) benchmark performance from 0.133 to 0.632 while failing to improve verifiable instruction following on IFEval (ILA: 0.313 to 0.271; PLA: 0.250 to 0.143; values rounded to three decimals). We refer to this nominal-versus-realized mismatch as capability drift, a descriptive label rather than a new formal metric. The mismatch is visible in the raw cross-task performance matrix; we use a drift score only as a compact summary in the same units as the underlying metrics. Evidence from broader instruction-following benchmarks is benchmark-dependent and mixed, reflecting heterogeneity in how instruction following is operationalized; we therefore do not treat cross-benchmark agreement as a premise. The practical takeaway is to perform routine cross-task evaluation before deployment and to avoid treating nominal labels as reliable capability proxies.
💡 Research Summary
The paper investigates a critical mismatch between the nominal training objectives assigned to LoRA adapters (e.g., “instruction‑tuned”, “numeric‑reasoning‑tuned”) and the actual capability gains observed when those adapters are evaluated across multiple downstream tasks. The authors focus on a strict, automatically verifiable notion of instruction following measured by IFEval, which provides two metrics: instruction‑level accuracy (ILA) and prompt‑level accuracy (PLA). PLA is taken as the primary target because it reflects end‑to‑end compliance with all constraints. Numeric reasoning ability is measured by a simple numeric match (NM) score.
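The two IFEval metrics can be sketched from per-prompt verification results. The boolean representation below is an assumption for illustration, not the IFEval harness's actual API:

```python
def ifeval_accuracies(results):
    """Compute ILA and PLA from verifier outputs.

    `results` is a list of per-prompt boolean lists, one boolean per
    instruction (True = constraint satisfied).
    """
    flat = [ok for checks in results for ok in checks]
    ila = sum(flat) / len(flat)                                   # fraction of instructions passed
    pla = sum(all(checks) for checks in results) / len(results)   # all constraints per prompt
    return ila, pla

# Four prompts with two constraints each: ILA can stay high while PLA is low.
print(ifeval_accuracies([[True, True], [True, False], [True, False], [True, False]]))
```

Because PLA counts a prompt only when every constraint passes, a single failed constraint zeroes out that prompt, which is why the summary treats PLA as the stricter end-to-end target.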
To quantify the mismatch, the authors introduce a “drift score”: TargetGain is the adapted model’s metric on the nominal target task minus the base model’s metric on that task; OffTargetGain is the adapted model’s metric on an off‑target task minus the base model’s metric on that task; DriftScore = OffTargetGain − TargetGain. A positive drift score indicates that the adapter improves more on the off‑target task than on its intended target, or that it gains elsewhere while failing to improve the target. This formulation keeps the diagnostic in the same units as the underlying metrics and avoids additional normalization choices.
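The definition reduces to a small helper. The usage below plugs in the Table 1 PLA and NM values purely for illustration; pairing PLA as the target metric here is our reading of the summary, not a claim about the paper's exact computation:

```python
def drift_score(base_target, adapted_target, base_off, adapted_off):
    """DriftScore = OffTargetGain - TargetGain, in the metric's own units."""
    target_gain = adapted_target - base_target      # gain on the nominal target task
    off_target_gain = adapted_off - base_off        # gain on an off-target task
    return off_target_gain - target_gain

# Instruction-tuned adapter: target = IFEval PLA, off-target = NM (Table 1).
score = drift_score(0.250, 0.143, 0.133, 0.632)
print(round(score, 3))  # 0.606: large positive drift
```

A large positive value like this captures exactly the mismatch pattern the paper describes: the off-target task gains while the nominal target degrades.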
Experiments are conducted on several base models (Qwen‑3‑8B, Qwen‑3‑14B, LLaMA‑3‑8B) with multiple LoRA configurations (rank‑16 and rank‑8, attention+MLP vs. attention‑only). For each configuration, five random seeds are used to assess variability. The same adapters are evaluated on three families of tasks: (i) numeric reasoning (NM), (ii) verifiable instruction following (IFEval ILA/PLA), and (iii) a domain QA set (reported in the appendix).
The central empirical finding is illustrated in Table 1. An “instruction‑tuned” adapter dramatically raises NM from 0.133 to 0.632—a large off‑target gain—while its IFEval PLA drops from 0.250 to 0.143 and its ILA declines from 0.313 to 0.271. Conversely, a “numeric‑reasoning‑tuned” adapter improves NM modestly (0.133 → 0.309) and shows no substantial change on IFEval. This concrete example demonstrates that the nominal label does not guarantee improvement on the task it suggests.
Robustness analysis (Table 2, Figure 2) shows that this drift phenomenon recurs across seeds, models, and LoRA settings. Most drift scores are positive, with means ranging from ~0.39 to ~0.67 and standard deviations around 0.1–0.2, indicating a consistent pattern rather than a seed‑specific artifact. However, one configuration (LLaMA‑3‑8B with an attention‑only LoRA) yields a slightly negative drift score (‑0.040), highlighting configuration sensitivity and the possibility of near‑zero drift.
The authors also evaluate additional verifiable instruction‑following benchmarks (FollowBench, IFBench). IFEval and IFBench, which both emphasize strict automatic verification, show no improvement for the instruction‑tuned adapter, whereas FollowBench, which operationalizes a looser notion of instruction following, does show gains. This heterogeneity underscores that “instruction following” is not a monolithic construct and that cross‑benchmark agreement should not be assumed.
A deeper dive into IFEval’s internal categories and types reveals selective shifts: language constraints and detectable_format categories suffer large drops, and the keyword‑existence type drops by 1.0, whereas some punctuation‑related types improve slightly. These findings suggest that the adapter’s learning is not uniformly distributed across all constraint types; rather, it may reinforce certain patterns while degrading others.
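This kind of category-level comparison amounts to a per-category delta over the harness's accuracy breakdown. The category names echo those in the text, but the numbers below are hypothetical, chosen only to show the shape of the analysis:

```python
def category_deltas(base_by_cat, adapted_by_cat):
    """Per-category accuracy change after adaptation.

    Negative values flag categories the adapter degraded; positive values,
    categories it improved. Keys are IFEval category names.
    """
    return {cat: round(adapted_by_cat[cat] - base_by_cat[cat], 3)
            for cat in base_by_cat}

# Hypothetical per-category accuracies, for illustration only.
base = {"language": 0.80, "detectable_format": 0.70, "punctuation": 0.50}
adapted = {"language": 0.20, "detectable_format": 0.35, "punctuation": 0.55}
print(category_deltas(base, adapted))
```

Aggregate ILA/PLA numbers hide exactly this structure; the delta view makes the selective reinforce-some, degrade-others pattern visible.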
The paper does not claim to provide a new universal metric; the drift score is presented solely as a compact summary of the observed mismatch, which is already evident in the raw cross‑task performance matrix. The contributions are: (1) framing a deployment‑relevant cross‑task diagnostic, (2) demonstrating robust evidence of capability drift across multiple experimental axes, and (3) providing a controlled counterexample where an instruction‑tuned adapter improves an off‑target numeric benchmark while degrading verifiable instruction following.
Practical implications are clear: practitioners should not rely on nominal adapter labels as proxies for capability. Before deploying an adapter, a low‑cost cross‑task evaluation should be performed to detect potential capability drift, especially when strict compliance (e.g., safety‑critical instruction following) is required. The study calls for more nuanced reporting of adapter capabilities and for broader adoption of verifiable benchmarks in the model‑selection pipeline.
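A minimal pre-deployment check in this spirit might look like the sketch below, assuming per-task scores for the base and adapted models are already collected; the task names and the 0.05 flagging threshold are hypothetical choices, not values from the paper:

```python
def flag_capability_drift(base_scores, adapted_scores, target_task, threshold=0.05):
    """Flag off-target tasks whose gain exceeds the target-task gain.

    base_scores / adapted_scores map task names to metric values; `threshold`
    is a tolerance in metric units before a task is flagged.
    """
    target_gain = adapted_scores[target_task] - base_scores[target_task]
    flags = {}
    for task, base in base_scores.items():
        if task == target_task:
            continue
        drift = (adapted_scores[task] - base) - target_gain
        if drift > threshold:
            flags[task] = round(drift, 3)
    return flags

# Table 1 values: the instruction-tuned adapter's nominal target is IFEval PLA.
base = {"ifeval_pla": 0.250, "nm": 0.133}
adapted = {"ifeval_pla": 0.143, "nm": 0.632}
print(flag_capability_drift(base, adapted, "ifeval_pla"))  # {'nm': 0.606}
```

Running such a check over the full cross-task matrix is cheap relative to deployment risk, which is the paper's core practical recommendation.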