Self-Aware Markov Models for Discrete Reasoning
Standard masked discrete diffusion models face limitations on reasoning tasks because they cannot correct their own mistakes along the masking path. Since they rely on a fixed number of denoising steps, they are unable to adapt their computation to the complexity of a given problem. To address these limitations, we introduce a method based on learning a Markov transition kernel that is trained on its own outputs. This design enables tokens to be remasked, allowing the model to correct its previous mistakes. Furthermore, we do not need a fixed time schedule but instead use a trained stopping criterion, which adapts the number of function evaluations to the difficulty of the reasoning problem. Our adaptation adds only two lightweight prediction heads, enabling reuse and fine-tuning of existing pretrained models. On the Sudoku-Extreme dataset we clearly outperform other flow-based methods with a validity of 95%. On Countdown-4 we need on average only 10 steps to solve almost 96% of the problems correctly, and many problems can be solved in as few as 2 steps.
💡 Research Summary
The paper tackles two fundamental shortcomings of standard masked discrete diffusion models when applied to reasoning tasks such as Sudoku and the Countdown number‑puzzle: (1) the inability to correct mistakes made during the masking‑to‑unmasking trajectory, and (2) the reliance on a fixed number of denoising steps regardless of problem difficulty. To overcome these issues, the authors propose a self‑aware Markov model that learns its forward transition kernel directly from its own outputs and incorporates a learned stopping criterion.
The core of the method is a conditional transition kernel (P_{\theta}) that, for each token position, outputs three quantities: (i) a categorical distribution over possible tokens (p_{\theta,i}^{1}(\cdot|y)), (ii) a confidence score (c_{\theta,i}(y) \in [0,1]) that determines whether the position should be remasked so the model can revisit it, and (iii) a signal for the learned stopping criterion that decides when sampling should halt.
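The sampling loop implied by this kernel can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: the `model` function is a random stand-in for the learned kernel (in the paper it would be a fine-tuned transformer with two extra prediction heads), and the thresholds, vocabulary size, and sequence length are hypothetical.

```python
import random

random.seed(0)
VOCAB, MASK, L = 10, -1, 8  # hypothetical vocab size, mask token id, sequence length

def model(y):
    """Stand-in for the learned kernel P_theta: returns per-position token
    probabilities, per-position confidence scores, and a halting probability.
    Here these are random; in the paper they come from prediction heads."""
    probs = [[random.random() for _ in range(VOCAB)] for _ in range(L)]
    conf = [random.uniform(0.5, 1.0) for _ in range(L)]  # confidence head c_theta,i(y)
    halt = 0.99 if all(t != MASK for t in y) else 0.0    # illustrative stopping head
    return probs, conf, halt

def sample(max_steps=32, conf_threshold=0.6, halt_threshold=0.9):
    y = [MASK] * L                                  # start fully masked
    step = 0
    for step in range(1, max_steps + 1):
        probs, conf, halt = model(y)
        if halt >= halt_threshold:                  # learned stopping criterion:
            break                                   # number of steps adapts to the input
        # propose the most likely token at every position ...
        y = [max(range(VOCAB), key=p.__getitem__) for p in probs]
        # ... then remask positions the confidence head is unsure about,
        # which is what lets the model correct its earlier mistakes
        y = [t if c >= conf_threshold else MASK for t, c in zip(y, conf)]
    return y, step

tokens, steps = sample()
```

The key difference from standard masked diffusion is visible in the loop body: unmasking is not monotone (low-confidence tokens return to the mask state), and the loop exits via the halting head rather than after a fixed step count.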