
"I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."
"To decode that a little, he's accusing humans of starting to sound like LLMs, even though LLMs - spearheaded by OpenAI - were literally invented to mimic human communication, right down to the em dash. And OpenAI's models definitely trained on Reddit, where Altman was a board member through 2022, and was disclosed as a large shareholder during the company's IPO last year."
Bots increasingly blur the line between human and automated social media posts, making authenticity hard to judge; popular communities praising new AI coding tools, for instance, show surges of near-identical migration announcements. At the same time, users have adopted LLM-like phrasing, and highly engaged online groups move in correlated ways that amplify hype and polarization. Platform optimization for engagement, together with creator monetization incentives, rewards extreme, attention-grabbing content, while corporate astroturfing and automated accounts further obscure genuine user signals. Large language models trained on social platforms reproduce human writing styles, and fandom dynamics among always-on users can tip group behavior toward volatility and hostility when venting participants dominate.
Read at TechCrunch