
"Turns out something similar happens to AI. Last month, a team of AI researchers from the University of Texas at Austin, Texas A&M, and Purdue University published a paper advancing what they call "the LLM Brain Rot Hypothesis" -- basically, that the output of AI chatbots like ChatGPT, Gemini, Claude, and Grok will degrade the more they're exposed to "junk data" found on social media."
"Oxford University Press, publisher of the Oxford English Dictionary, named "brain rot" as its 2024 Word of the Year, defining it as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging." Drawing on recent research which shows a correlation in humans between prolonged use of social media and negative personality changes,"
Large language models, which ingest vast amounts of internet data including social media, can suffer progressive output degradation when exposed to low-quality "junk" content. The phenomenon resembles human "brain rot" linked to overconsumption of trivial online material, and it can reduce a model's accuracy, coherence, and helpfulness. Models trained on noisy social media data may be "poisoned" by such content patterns, producing poorer responses across tasks. Observable indicators include measurable underperformance after ingesting junk data and behavioral shifts mirroring the negative personality effects seen in humans. Users can watch for specific warning signs to assess a model's health and response quality.
Read at ZDNET