Recursive Resemblance
Briefly

"As AI-generated content increasingly floods the Web, the concern is that these models will train on their own output, favoring similarities that are more likely to reappear at the expense of less common, more marginal elements in the original training set. The problem is not simply that the dominant content will begin to crowd out more diverse outliers, but that the recursive training on a finite sample size will introduce increasingly improbable sequences."
"Model collapse is the result of random probability distribution run amok in a kind of machinic death drive. Generative AI may be new—even if its novelty lies in vampiric schemes of intellectual property extraction and ecocide—but the underlying principle of representation grounded in conditions of probability is as old as the Western concept of representation itself."
Generative AI models trained on massive datasets face a critical threat called model collapse. As AI-generated content proliferates online, these models increasingly train on their own output rather than original data. This recursive training on finite samples favors dominant patterns while eliminating diverse, marginal elements. The result extends beyond simple degradation; it creates compounding approximation errors and statistically improbable sequences that degrade model quality. This phenomenon reflects deeper historical principles about representation grounded in probability, connecting modern AI challenges to ancient concepts of mimesis and eikōn that governed how humans understood imitation and likeness.
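The statistical mechanism the summary describes can be sketched with a toy simulation (an illustration of this note, not code from the article): repeatedly fit a Gaussian to a finite sample, then draw the next generation's "training data" only from that fit. The sample size `n`, the number of generations, and the Gaussian model are all assumptions chosen to make the effect visible; real model collapse involves far richer distributions, but the direction is the same. With each recursive pass the fitted spread tends to shrink, so rare tail values vanish first.

```python
import numpy as np

# Toy sketch of recursive training on model output (hypothetical setup):
# each "generation" fits a Gaussian to data sampled from the previous fit.
rng = np.random.default_rng(0)
n = 25                                   # finite sample size per generation
data = rng.normal(0.0, 1.0, size=n)      # stand-in for the original training set

stds = []
for generation in range(200):
    mu, sigma = data.mean(), data.std()  # fit the "model" (MLE Gaussian)
    stds.append(sigma)
    # next generation trains only on the previous model's output
    data = rng.normal(mu, sigma, size=n)

print(f"fitted std, generation 0:   {stds[0]:.3f}")
print(f"fitted std, generation 199: {stds[-1]:.3g}")
```

Because the maximum-likelihood variance estimate is biased low on finite samples, the fitted spread contracts in expectation each generation; the "less common, more marginal elements" in the tails are the first casualties.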
Read at Artforum