From Collapse to Improvement: Statistical Perspectives on the Evolutionary Dynamics of Iterative Training on Contaminated Sources
The problem of model collapse has presented new challenges for the iterative training of generative models, where repeatedly training on synthetic data leads to an overall degradation of performance. This paper examines the problem from a statistical viewpoint, showing that one can in fact hope for improvement when models are trained on data contaminated with synthetic samples, as long as some amount of fresh information from the true target distribution is present. In particular, we consider iterative training on samples drawn from a mixture of the true target and synthetic distributions. We analyze the entire iterative evolution in a next-token prediction language model, capturing how the interplay between the mixture weights and the sample size controls long-term performance. With a non-trivial mixture weight on the true distribution, even one that decays over time, simply training the model in a contamination-agnostic manner with appropriate sample sizes can avoid collapse and even recover the true target distribution under certain conditions. Simulation studies support our findings and suggest that this behavior extends to other classes of models.
💡 Research Summary
This paper tackles the increasingly pressing issue of model collapse that arises when large generative models, especially large language models (LLMs), are repeatedly fine‑tuned on data that includes a mixture of human‑generated (real) text and synthetic text produced by earlier versions of the model. While prior work has documented the phenomenon empirically, a rigorous statistical understanding of when and how iterative training can actually improve the model rather than degrade it has been lacking.
The authors formalize the iterative training process as a population-level mixture: at iteration \(t\) the training distribution is
\[
P_{t} = \alpha_{t}\, P^{*} + (1-\alpha_{t})\, \hat P_{t-1},
\]
where \(P^{*}\) is the true human language distribution, \(\hat P_{t-1}\) is the model fitted at the previous step, and \(\alpha_{t} \in [0,1]\) is the mixture weight placed on fresh samples from the true distribution, which may decay over iterations.
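The mixture recursion above can be exercised in a toy setting. The sketch below is my own minimal construction, not the paper's experiment: it replaces the language model with a one-dimensional Gaussian fitted by maximum likelihood, and at each iteration draws each training sample from the true distribution \(N(0,1)\) with probability \(\alpha\) and from the previously fitted model otherwise. The sample sizes, iteration counts, and the constant (non-decaying) \(\alpha\) are illustrative choices.

```python
import random
import statistics

def iterate_fit(alpha, n_samples=50, n_iters=500, seed=0):
    """Repeatedly fit a Gaussian on data drawn from the mixture
    alpha * P_true + (1 - alpha) * P_hat_{t-1}, with P_true = N(0, 1).
    This is a toy stand-in for the paper's language-model setting."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # initialize the model at the truth, for simplicity
    for _ in range(n_iters):
        data = [
            rng.gauss(0.0, 1.0) if rng.random() < alpha  # fresh real sample
            else rng.gauss(mu, sigma)                    # synthetic sample
            for _ in range(n_samples)
        ]
        mu = statistics.fmean(data)       # MLE of the mean
        sigma = statistics.pstdev(data)   # MLE of the standard deviation
    return mu, sigma

# alpha = 0: purely self-consuming training; the fitted variance drifts to 0.
_, sigma_collapse = iterate_fit(alpha=0.0)
# alpha = 0.3: a steady trickle of real data anchors the fit near N(0, 1).
_, sigma_mixed = iterate_fit(alpha=0.3)
print(f"collapse: sigma={sigma_collapse:.4f}  mixed: sigma={sigma_mixed:.4f}")
```

With `alpha = 0.0` the fitted variance shrinks toward zero across iterations (the classic collapse signature), while with `alpha = 0.3` it stays near the true value of 1, matching the paper's qualitative claim that a non-trivial weight on fresh real data prevents collapse.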