Hospitals use a transcription tool powered by a hallucination-prone OpenAI model
Briefly

Researchers from Cornell University and the University of Washington found that OpenAI's Whisper can hallucinate in medical transcriptions, fabricating sentences that never appeared in the audio, including some with violent sentiments.
Researcher Allison Koenecke shared examples in which Whisper invented medical conditions or inserted phrases like 'Thank you for watching!' when transcribing silent stretches of recordings.
Whisper's use in medical settings has raised concerns; Nabla, whose transcription tool is built on the model, acknowledges the hallucination issue and says it is actively working to address it.
OpenAI says it is working to reduce Whisper's hallucinations and that its usage policies prohibit the model in high-stakes decision-making contexts.
Read at The Verge