
"These indicators of generative AI use emerge because they are statistically overrepresented in training data, and LLMs tend to reproduce what they encounter most often. A feedback loop forms when humans, through repeated exposure, begin to adopt these same patterns and use them more frequently in online content. That content then feeds back into future training sets, reinforcing the cycle."
"The "delve" phenomenon reflects a feedback loop that begins inside model training and spills outward into human use. This month, I want to look at a different (and potentially more consequential) dynamic: one that does not originate in models at all, but takes shape within human-social media platform ecosystems before being folded back into future training, with far more troubling implications."
Certain words that appear frequently in AI outputs, such as 'delve', become overrepresented because models reproduce statistically common patterns from their training data. Humans exposed to AI-generated language begin to adopt those patterns in online content, creating a linguistic feedback loop that reinforces those terms in future datasets. A distinct, more consequential dynamic originates within social-media platform ecosystems, where optimization for engagement drives low-effort, high-stimulation formats that degrade attention and cognition. Such 'brain rot' content proliferates because platforms reward short, looping, sensational formats, and those patterns can be incorporated into later model training, amplifying social and epistemic costs.
Read at Apaonline