
"The expression "AI hallucination" is well-known to anyone who's experienced ChatGPT or Gemini or Perplexity spouting obvious falsehoods, which is pretty much anyone who's ever used an AI chatbot. Only, it's an expression that's incorrect. The proper term for when a large language model or other generative AI program asserts falsehoods is not a hallucination but a "confabulation." AI doesn't hallucinate, it confabulates."
"The word confabulation is also from the psychology literature, just like hallucination, but they mean very different things. A hallucination is a conscious sensory perception that is at variance with the stimuli in the environment. A confabulation, on the other hand, is the making of assertions that are at variance with the facts, such as "the president of France is Francois Mitterrand," which is currently not the case."
Incorrect outputs from large language models are commonly labeled hallucinations, but the more accurate term is confabulation. A hallucination is a conscious sensory perception that conflicts with the stimuli in the environment, whereas a confabulation is an assertion that is at variance with the facts. Because generative AI produces inaccurate statements rather than conscious sensory experiences, confabulation better characterizes its errors. In humans, confabulations can occur with or without conscious awareness and can describe utterances that are simply factually wrong. Psychologists are urging a shift in terminology from hallucination to confabulation to reduce misconceptions about AI behavior.
Read at ZDNET