A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models


Diffusion large language models (dLLMs) enable any-order generation, but this flexibility enlarges the attack surface: harmful spans may appear at arbitrary positions, and template-based prefilling attacks such as DIJA bypass response-level refusals. We introduce A2D (Any-Order, Any-Step Defense), a token-level alignment method that trains dLLMs to emit an [EOS] refusal signal whenever harmful content arises. By aligning safety directly at the token level under randomized masking, A2D achieves robustness to attacks at any decoding order and any prefilling step. It also enables real-time monitoring: a dLLM may begin a response but automatically terminates if an unsafe continuation emerges. On safety benchmarks, A2D consistently prevents the generation of harmful outputs, cutting DIJA success rates from over 80% to near zero (1.3% on LLaDA-8B-Instruct, 0.0% on Dream-v0-Instruct-7B), and thresholded [EOS] probabilities allow early rejection, yielding up to 19.3x faster safe termination.


💡 Research Summary

The paper “A2D: Any‑Order, Any‑Step Safety Alignment for Diffusion Language Models” addresses a critical safety gap in diffusion‑based large language models (dLLMs). Unlike autoregressive models, dLLMs generate text by iteratively denoising a fully masked sequence, allowing tokens to be filled in any order and at any step. This flexibility dramatically expands the attack surface: harmful content can appear at arbitrary positions and at late stages of generation, rendering traditional response‑level safety methods, which refuse only after a full response is produced, ineffective. Template‑based prefilling attacks such as DIJA exploit this by interleaving adversarial text with masked spans that the model is then induced to complete.
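The early-rejection idea described above, terminating generation once the model's [EOS] probability crosses a threshold, can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: the threshold value, the toy logits, and the `should_terminate` helper are all assumptions for demonstration.

```python
import math

EOS_THRESHOLD = 0.9  # assumption: illustrative cutoff, not a value from the paper


def should_terminate(token_logits, eos_id, threshold=EOS_THRESHOLD):
    """Return True if the softmax probability of [EOS] at any
    still-masked position exceeds the threshold at this denoising step."""
    for logits in token_logits:  # one logit vector per masked position
        m = max(logits)  # subtract max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        p_eos = exps[eos_id] / sum(exps)
        if p_eos >= threshold:
            return True
    return False


# Toy 3-token vocabulary where index 0 is [EOS]; two masked positions.
safe_step = [[0.1, 2.0, 1.5], [0.0, 1.0, 3.0]]    # [EOS] unlikely everywhere
unsafe_step = [[5.0, 0.2, 0.1], [0.0, 1.0, 3.0]]  # [EOS] dominant at position 0

print(should_terminate(safe_step, eos_id=0))    # False: decoding continues
print(should_terminate(unsafe_step, eos_id=0))  # True: terminate early
```

In a real dLLM decoding loop this check would run once per denoising step, so an unsafe continuation is cut off well before the full step budget is spent, which is the source of the speedup the paper reports.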

