An AI Customer Service Chatbot Made Up a Company Policy, and Created a Mess
Briefly

A recent incident involving Cursor, the AI-powered code editor, showed how AI confabulations (false information generated confidently by a model) can cause real damage. A user who switched between devices was unexpectedly logged out and contacted support, where an AI bot named 'Sam' gave a misleading response. Users came away believing the logouts were the result of a company policy, but no such policy existed, and the confusion ignited complaints and threats to cancel subscriptions. The episode highlights the risks businesses take when they deploy AI in customer support without human oversight.
As one user described the behavior: "Logging into Cursor on one machine immediately invalidates the session on any other machine. This is a significant UX regression."
Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch.
Read at WIRED