AI hallucinations: What are they?
Briefly

At a high level, hallucinations occur when an AI system presents factually incorrect or fabricated information as though it were true. They appear most often in generative AI systems, such as general-purpose chatbots, that have ingested vast amounts of data from across the internet. These generalist systems are built to always provide an answer, even when that means producing plausible-sounding but incorrect output.
Sometimes a model produces output that is not grounded in its training data and does not follow any identifiable pattern; in other words, it hallucinates the response.
In a recent study, researchers found that LLMs often give different answers each time they are asked the same question, even when the wording is identical, a behaviour known as 'confabulating'. This makes it difficult to tell when a model actually knows an answer and when it is simply making something up.
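To make that finding concrete, here is a minimal sketch of the idea (not the study's actual method): ask a model the same question several times at non-zero temperature and count how many distinct answers come back. It assumes the `openai` Python package, an API key in the environment, and an illustrative model name; heavy disagreement across samples is a crude signal of confabulation.

```python
from collections import Counter
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set in the environment

client = OpenAI()

def sample_answers(question: str, n: int = 10, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the same question n times at non-zero temperature and collect the answers."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,  # illustrative model name, not the one used in the study
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # sampling variability is what exposes confabulation
        )
        answers.append(resp.choices[0].message.content.strip().lower())
    return answers

question = "In what year did the first human land on Mars?"  # a question with no true answer
answers = sample_answers(question)
counts = Counter(answers)

# Many distinct, mutually inconsistent answers suggest the model is confabulating
# rather than recalling something it actually "knows".
print(f"{len(counts)} distinct answers out of {len(answers)} samples:")
for ans, k in counts.most_common():
    print(f"  {k}x  {ans[:80]}")
```

A consistent single answer does not prove the model is right, but wildly varying answers to an identically worded question are a strong hint it is guessing.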
AI hallucinations can have serious consequences, including contributing to the spread of misinformation. For instance, if hallucinating news bots respond to queries with fabricated details, those falsehoods can spread quickly.
Read at ITPro