Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence
Jailbreak attacks against large language models (LLMs) aim to induce harmful behaviors in LLMs through carefully crafted adversarial prompts. One way to mitigate such attacks is adversarial training (AT)-based alignment, i.e., training LLMs on some of the most adversarial prompts to help them learn how to behave safely under attack. During AT, the length of the adversarial prompts plays a critical role in the robustness of the aligned LLMs. While long adversarial prompts during AT may lead to strong LLM robustness, synthesizing them is highly resource-consuming, which may limit the practical application of LLM AT. This paper focuses on adversarial suffix jailbreak attacks and unveils that, to defend against a jailbreak attack with an adversarial suffix of length $Θ(M)$, it is enough to align LLMs on prompts with adversarial suffixes of length $Θ(\sqrt{M})$. Theoretically, we analyze the adversarial in-context learning of linear transformers on linear regression tasks and prove a robust generalization bound for trained transformers. The bound depends on the term $Θ(\sqrt{M_{\text{test}}}/M_{\text{train}})$, where $M_{\text{train}}$ and $M_{\text{test}}$ are the numbers of adversarially perturbed in-context samples during training and testing. Empirically, we conduct AT on popular open-source LLMs and evaluate their robustness against jailbreak attacks with adversarial suffixes of different lengths. Results confirm a positive correlation between the attack success rate and the ratio of the square root of the adversarial suffix length during jailbreaking to the suffix length during AT. Our findings show that it is practical to defend against “long-length” jailbreak attacks via efficient “short-length” AT. The code is available at https://github.com/fshp971/adv-icl.
💡 Research Summary
The paper tackles the problem of defending large language models (LLMs) against jailbreak attacks that use adversarial suffixes to coerce the model into producing harmful outputs. Existing work suggests that longer adversarial prompts during adversarial training (AT) yield stronger robustness, but synthesizing long prompts is computationally expensive because it requires solving high‑dimensional discrete optimization problems. The authors ask whether short‑length adversarial prompts can suffice to protect against long‑length jailbreak attempts.
To answer this, they focus on suffix‑based jailbreak attacks, where an attacker appends a token sequence (the suffix) to a harmful instruction. The length of the suffix is denoted by M. The key theoretical contribution is an analysis of in‑context learning (ICL) for linear transformers (linear self‑attention models) on linear regression tasks. They introduce a novel “suffix adversarial attack” in the ICL setting: after the clean in‑context training samples, an adversarial suffix of length M is concatenated, and each token in the suffix may be perturbed within an ℓ₂ ball of radius ε, mimicking the limited vocabulary space of real token attacks.
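The ICL attack setting above can be sketched numerically. The snippet below is a minimal illustration, not the paper's construction: it uses a one-layer linear self-attention predictor with identity key/value weights (a simplification of the trained transformer analyzed in the paper), appends M suffix tokens after N clean in-context samples, and runs projected gradient ascent to find ℓ₂-bounded perturbations of the suffix tokens that maximize the prediction error on a query. All dimensions, lengths, and the budget ε are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, M, eps = 8, 32, 4, 0.5  # feature dim, clean samples, suffix length, l2 budget (all illustrative)

w_star = rng.normal(size=d)          # ground-truth linear regression weights
X = rng.normal(size=(N + M, d))      # in-context inputs: N clean tokens + M suffix tokens
y = X @ w_star                       # noiseless labels
x_q = rng.normal(size=d)             # query input
y_q = x_q @ w_star                   # target the model should predict

def predict(X, y, x_q):
    # One-layer linear self-attention ICL predictor with identity key/value
    # weights -- a simplification of the paper's trained linear transformer.
    return (y * (X @ x_q)).sum() / len(y)

# Projected gradient ascent: perturb ONLY the M suffix tokens, each within an
# l2 ball of radius eps, to maximize the squared prediction error.
delta = np.zeros((M, d))
for _ in range(50):
    Xp = X.copy()
    Xp[N:] += delta
    err = predict(Xp, y, x_q) - y_q
    # The predictor is linear in delta, so the gradient has this closed form.
    grad = 2 * err * (y[N:, None] * x_q[None, :]) / (N + M)
    delta += 0.1 * grad
    norms = np.linalg.norm(delta, axis=1, keepdims=True)
    delta *= np.minimum(1.0, eps / np.maximum(norms, 1e-12))  # project back into the l2 ball

X_adv = X.copy()
X_adv[N:] += delta
print(abs(predict(X, y, x_q) - y_q), abs(predict(X_adv, y, x_q) - y_q))
```

Running this prints the clean and adversarial prediction errors; the attack only touches the suffix tokens, mirroring how a jailbreak suffix leaves the harmful instruction itself unchanged.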
The authors derive a robust generalization bound for a model trained with adversarial suffixes of length M_train and evaluated on attacks with suffix length M_test:
R_adv(θ, M_test) ≤ O(√M_test / M_train).
This bound shows that the error scales with the ratio of the square root of the test‑time suffix length to the training‑time suffix length. Consequently, if the training suffix length is on the order of √M_test, the model’s error remains bounded by a constant, implying that short‑length AT can defend against much longer attacks.
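A quick arithmetic sketch makes this concrete. Using hypothetical attack lengths, the ratio √M_test / M_train stays roughly constant when the training suffix length tracks √M_test, but grows without bound when the training length is held fixed:

```python
import math

# Hypothetical attack suffix lengths, chosen only to illustrate the
# Theta(sqrt(M_test) / M_train) scaling from the bound above.
for m_test in (20, 80, 320, 1280):
    m_train_sqrt = math.ceil(math.sqrt(m_test))  # training length ~ sqrt of attack length
    m_train_flat = 20                            # training length held fixed
    print(m_test,
          round(math.sqrt(m_test) / m_train_sqrt, 2),  # stays ~1: error bounded by a constant
          round(math.sqrt(m_test) / m_train_flat, 2))  # grows with the attack length
```

So defending against a 1280-token attack in this reading requires training suffixes of only ~36 tokens, which is the efficiency claim of the paper.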
Empirically, the authors conduct AT on five popular open‑source LLMs (including LLaMA‑7B, Vicuna‑13B, and Mistral‑7B) using the Greedy Coordinate Gradient (GCG) jailbreak method to generate adversarial suffixes of varying lengths (20, 40, 80, and 120 tokens). They train each model with suffix lengths of 20, 40, and 80 tokens and then evaluate attack success rates (ASR) against attacks with suffix lengths up to 120 tokens. The results confirm the theoretical prediction: ASR grows roughly linearly with √M_test / M_train. Notably, training with a 20‑token suffix already reduces the ASR of a 120‑token attack by at least 30% across all models, demonstrating a substantial practical benefit while avoiding the heavy cost of synthesizing long adversarial prompts.
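The reported correlation can be checked with an ordinary least-squares fit of ASR against the ratio √M_test / M_train. The snippet below uses synthetic ASR values generated from that relation plus noise (the coefficients 0.5 and 0.05 are invented for illustration, not the paper's measurements) and recovers a positive slope, which is the kind of analysis the experiments perform on real ASR data:

```python
import numpy as np

rng = np.random.default_rng(1)
train_lens = np.array([20, 40, 80])          # training suffix lengths used in the paper
test_lens = np.array([20, 40, 80, 120])      # attack suffix lengths used in the paper

# Ratio sqrt(M_test) / M_train over the full train/test grid.
ratio = (np.sqrt(test_lens)[None, :] / train_lens[:, None]).ravel()

# Synthetic ASR values from ASR ~ a * ratio + b plus noise.
# The coefficients are hypothetical -- NOT the paper's numbers.
asr = 0.5 * ratio + 0.05 + rng.normal(scale=0.02, size=ratio.size)

# Ordinary least-squares fit of ASR against the ratio.
A = np.column_stack([ratio, np.ones_like(ratio)])
(slope, intercept), *_ = np.linalg.lstsq(A, asr, rcond=None)
print(slope)  # positive slope: ASR rises with sqrt(M_test) / M_train
```

A positive fitted slope on the real ASR measurements is what the paper reports as the positive correlation.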
The paper’s contributions are threefold: (1) it provides a rigorous theoretical justification that short‑length adversarial training can yield robustness against long‑length jailbreak attacks; (2) it validates the theory with extensive experiments on real LLMs, showing that a modest increase in training suffix length yields diminishing returns, thus offering a cost‑effective defense strategy; and (3) it releases reproducible code, enabling the community to adopt short‑suffix AT in safety pipelines.
Limitations include the reliance on linear transformer models for the analysis, which may not capture all dynamics of deep, non‑linear LLMs, and the assumption of ℓ₂‑bounded perturbations that approximate but do not fully replicate discrete token constraints. Future work could extend the theory to non‑linear architectures, explore multi‑step or compositional jailbreak strategies, and integrate dynamic AT schedules that adapt suffix length based on observed attack patterns. Overall, the study offers a compelling argument that efficient, short‑suffix adversarial training is a viable and scalable approach to harden LLMs against sophisticated jailbreak threats.