Company apologizes after AI support agent invents policy that causes user uproar
Briefly

A developer using the AI code editor Cursor discovered a flaw: switching devices unexpectedly logged them out. When they contacted support, a bot confidently explained that this was a core feature, an explanation it had fabricated. The incident illustrates how AI confabulations can damage business relationships: users vented frustration and threatened to cancel subscriptions. It also underscores the danger of deploying AI without human oversight, since models tend to prioritize confident-sounding responses over accuracy, eroding customer trust and creating immediate consequences for companies.
The underlying flaw: logging into Cursor on one machine immediately invalidated the session on any other machine, a significant UX regression that users assumed was deliberate.
Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch.
The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.
For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and lost subscriptions.
Read at Ars Technica