A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful
Briefly

The recent incident involving Cursor's A.I. support bot illustrates the ongoing challenges with A.I. systems. The bot told customers about a policy change that did not exist, prompting frustration and account cancellations. While A.I. tools like ChatGPT have advanced, they still struggle with accuracy, sometimes fabricating information, or "hallucinating." Newer A.I. systems are producing even higher hallucination rates: the problem lies in their inability to distinguish fact from fiction, even as their mathematical abilities improve. This highlights a critical flaw in A.I. technology that undermines user trust and reliability.
"Unfortunately, this is an incorrect response from a front-line A.I. support bot. More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks."
"Today's A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not - and cannot - decide what is true and what is false."
Read at www.nytimes.com