
"I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant."
"Something was required from my end as well: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn't shown to me as sponsored content and wasn't getting a cut of my check. I could have done deep research on my own to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction."
"Writer and tech critic Cory Doctorow calls that erosion "enshittification." His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow's pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark."
An AI recommendation produced an excellent restaurant suggestion in Rome, combining local reviews, press coverage, and culinary style with convenience. The user relied on trust that the model acted as an unbiased honest broker rather than promoting sponsored options. That reliance bypassed time-consuming independent research, which is a key benefit of AI assistants. The concern is whether dominant AI companies will later prioritize investor returns over user value, repeating a pattern in which platforms degrade their utility once they achieve market power. Doctorow's term "enshittification" names that pattern among major tech platforms, and it may predict similar risks for AI services.
Read at WIRED