Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said
Briefly

Experts warn that Whisper's tendency to produce false information is particularly alarming given its growing use in sensitive domains such as healthcare and media, where accuracy is paramount. Documented fabrications include invented medical treatments and violent rhetoric that no speaker ever uttered, raising ethical concerns about relying on AI for critical applications. Beyond a degraded user experience, the potential harm from the spread of inaccurate information underscores the urgent need for developers and institutions to put safeguards against these hallucinations in place.
While OpenAI promotes Whisper for its robustness, developers and researchers report a troubling aspect of the technology: hallucinations, in which the model generates text that was never spoken, ranging from minor errors to wholesale fabrications that undermine the credibility and safety of the output. Healthcare providers in particular face serious risk when integrating such tools without rigorous testing and validation, making thorough scrutiny of the model's limitations essential before broader adoption.
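For readers curious what basic validation might look like in practice, here is a minimal sketch using the open-source openai-whisper Python package. It transcribes a file and flags segments whose decoder statistics (average log-probability, no-speech probability, compression ratio) suggest the text may not reflect the audio. The file name visit_recording.wav is hypothetical, and the thresholds echo the library's own fallback defaults; this is an illustrative heuristic, not a clinical safeguard or a substitute for human review.

```python
import whisper  # the open-source openai-whisper package

# Thresholds mirror the library's own fallback defaults
# (logprob_threshold, no_speech_threshold, compression_ratio_threshold).
# Illustrative heuristics only, not a validated safety mechanism.
LOGPROB_FLOOR = -1.0
NO_SPEECH_CEILING = 0.6
COMPRESSION_CEILING = 2.4

def flag_suspect_segments(audio_path: str, model_name: str = "base"):
    """Transcribe a file and return segments whose decoder statistics
    suggest the transcribed text may not match the audio."""
    model = whisper.load_model(model_name)
    result = model.transcribe(audio_path)
    suspects = []
    for seg in result["segments"]:
        if (seg["avg_logprob"] < LOGPROB_FLOOR
                or seg["no_speech_prob"] > NO_SPEECH_CEILING
                or seg["compression_ratio"] > COMPRESSION_CEILING):
            suspects.append((seg["start"], seg["end"], seg["text"]))
    return suspects

if __name__ == "__main__":
    # "visit_recording.wav" is a placeholder path for illustration.
    for start, end, text in flag_suspect_segments("visit_recording.wav"):
        print(f"[{start:.1f}-{end:.1f}s] REVIEW: {text}")
```

A high no-speech probability is especially relevant here, since researchers found Whisper is prone to inventing text during silences and pauses; flagged segments would still need to be checked against the recording by a person.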
Read at www.independent.co.uk