Huawei executive says 'we need to embrace AI hallucinations'
Briefly

"Hallucinations have commonly been considered a problem for generative AI, with chatbots such as ChatGPT, Claude, or Gemini prone to producing 'confidently incorrect' answers in response to queries. This can pose a serious problem for users. There are several cases of lawyers, for example, citing non-existent cases as precedent or presenting the wrong conclusions and outcomes from cases that really do exist. Unfortunately for said lawyers, we only know about these instances because they're embarrassingly public, but it's an experience all users will have had at some point."
""AI hallucinations and the black box nature of AI make it challenging for businesses and enterprises, especially businesses from the manufacturing sector, to trust and control, raising new issues around predictability and explainability," he told delegates. "From my point of view, well first of all we need to embrace AI hallucinations," Tao added. "Without hallucinations, AI wouldn't be what it is today. But there's still a need to find effective ways to control and mitigate hallucinations.""
Hallucinations are common in generative AI, with chatbots prone to producing confidently incorrect answers. Such errors can cause real-world harm, including professionals citing non-existent precedents or misrepresenting case outcomes. Enterprises hope that models trained solely on proprietary data may hallucinate less, but manual verification remains necessary where accuracy is critical. Treating hallucinations as an inherent characteristic of generative AI can coexist with efforts to control and mitigate them. Manufacturers and other businesses face particular challenges around trust, control, predictability, explainability, and integrating AI with decades of existing digitalization and automation.
Read at IT Pro