In Defense of AI Hallucinations
Briefly

"No one knows whether artificial intelligence will be a boon or curse in the far future. But right now, there's almost universal discomfort and contempt for one habit of these chatbots and agents: hallucinations, those made-up facts that appear in the outputs of large language models like ChatGPT."
"The nature of compression is that the fine details can get lost," [Vectara CTO Amin] Ahmad says. A model ends up primed with the most likely answers to queries from users but doesn't have the exact facts at its disposal.
Read at WIRED