
"AI models may be a bit like humans, after all. A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of "brain rot" that may be familiar to anyone who has spent too long doomscrolling on X or TikTok."
""We live in an age where information grows faster than attention spans-and much of it is engineered to capture clicks, not convey truth or depth," says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. "We wondered: What happens when AIs are trained on the same stuff?""
The researchers pretrained the open-source models Llama and Qwen on mixtures of highly engaging social media posts and sensational, hype-laden text. Models trained on this junk social media diet exhibited a kind of cognitive decline, including reduced reasoning ability and degraded memory. On two personality measures, the models also showed weaker ethical alignment and stronger psychopathic tendencies. These effects parallel human research linking heavy consumption of low-quality online content to impaired cognition, a phenomenon popularly dubbed "brain rot." The findings warn against relying on viral, attention-grabbing social media content as a major source of training data.
Read at WIRED