OpenAI's Transcription Tool Hallucinates. Hospitals Are Using It Anyway
Briefly

An AP investigation found that OpenAI's Whisper transcription tool is prone to fabricating text, posing significant risks, especially in medical contexts.
Despite OpenAI's claims of "human-level robustness", researchers found that Whisper generated false text in 80% of the public meeting transcripts they analyzed, indicating profound limitations.
Over 30,000 medical professionals use Whisper-based transcription tools, risking serious inaccuracies in patient records. The implications for patient safety are particularly severe for deaf patients, who have no way to check a transcript against the original audio.
Researchers found that Whisper doesn't just hallucinate benign text; it can insert violent language and racial commentary into otherwise neutral dialogue.
Read at WIRED