Learning from Synthetic Data: Limitations of ERM


The prevalence and low cost of LLMs have led to a rise in synthetic content. From review sites to court documents, "natural" content has been contaminated by data points that appear similar to natural data but are in fact LLM-generated. In this work we revisit fundamental learning theory questions in this now-ubiquitous setting. We model this scenario as a sequence of learning tasks where the input is a mix of natural and synthetic data, and the learning algorithms are oblivious to the origin of any individual example. We study the possibilities and limitations of ERM in this setting. For the problem of estimating the mean of an arbitrary $d$-dimensional distribution, we find that while ERM converges to the true mean, it is outperformed by an algorithm that assigns non-uniform weights to examples from different generations of data. For the PAC learning setting, the disparity is even more stark. We find that ERM does not always converge to the true concept, echoing the model collapse literature. However, we show there are algorithms capable of learning the correct hypothesis for arbitrary VC classes and arbitrary amounts of contamination.
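The mean-estimation setting described above can be illustrated with a minimal simulation sketch. This is not the paper's algorithm; it assumes 1-dimensional Gaussian data and a simple regeneration model in which each round's synthetic examples are drawn around the previous round's ERM estimate. All names (`simulate_rounds`, `mu`, `alpha`) are illustrative.

```python
import numpy as np

def simulate_rounds(mu=3.0, alpha=0.5, n=10_000, rounds=20, seed=0):
    """Toy simulation of the mixed natural/synthetic setting (an
    illustration of the problem setup, not the paper's method).

    Each round, an alpha fraction of the n samples is natural (drawn
    around the true mean mu); the rest is synthetic, generated from the
    previous round's estimate. The learner cannot tell them apart and
    simply averages everything (ERM with uniform weights)."""
    rng = np.random.default_rng(seed)
    est = rng.normal(mu, 1.0)  # arbitrary initial estimate
    for _ in range(rounds):
        n_nat = int(alpha * n)
        natural = rng.normal(mu, 1.0, size=n_nat)
        synthetic = rng.normal(est, 1.0, size=n - n_nat)  # regenerated data
        data = np.concatenate([natural, synthetic])       # origins hidden
        est = data.mean()                                 # uniform-weight ERM
    return est
```

Under this toy model, the uniform average still contracts toward `mu` each round, consistent with the abstract's claim that ERM converges for mean estimation; the paper's point is that non-uniform weighting across generations converges faster.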


💡 Research Summary

The paper “Learning from Synthetic Data: Limitations of ERM” studies a new learning scenario that has become ubiquitous as large language models (LLMs) generate massive amounts of text that subsequently appear in training corpora. In this setting the learner receives a mixture of natural (ground‑truth) examples and synthetic examples that are indistinguishable from the natural ones. The learner does not know the provenance of any individual sample. The authors formalize this as a sequence of learning rounds indexed by t, with a contamination parameter α∈

