Microsoft claims its new tool can correct AI hallucinations, but experts advise caution | TechCrunch
Briefly

"Correction is powered by a new process of utilizing small language models and large language models to align outputs with grounding documents," a Microsoft spokesperson told TechCrunch. "We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance."
"Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water," said Os Keyes, a Ph.D. candidate at the University of Washington who studies the ethical impact of emerging tech. "It's an essential component of how the technology works."
Google introduced a similar feature this summer in Vertex AI, its AI development platform, to let customers "ground" models by using data from third-party providers, their own datasets, or Google Search.
Text-generating models hallucinate because they don't actually "know" anything. They're statistical systems that identify patterns in sequences of words and predict which word is likely to come next, based on the countless examples they were trained on.
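A toy bigram model makes the idea concrete. The sketch below (illustrative only; real LLMs use learned neural weights over far longer contexts, not raw counts) counts which word follows which in a tiny training text and always emits the most frequent continuation — it reproduces patterns, with no notion of whether the result is true.

```python
# Toy next-word prediction as pure pattern statistics: count, for each
# word, how often each other word follows it, then always emit the most
# common continuation. Nothing here "knows" facts.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
)

follows: dict[str, Counter] = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def generate(word: str, length: int = 6) -> str:
    out = [word]
    for _ in range(length):
        if not follows[out[-1]]:
            break
        # Greedily pick the statistically most common next word.
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat"
```

Note the output: the model happily loops back into "the cat sat" because that continuation is frequent, not because it is meaningful — the same mechanism, at scale, is why fluent models can assert things that are false.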