OpenAI's Whisper transcription tool fabricates text, yet it is being deployed in medical settings despite the company's warnings against its use in high-risk domains, posing significant accuracy risks in healthcare.
In interviews, researchers reported that Whisper invented content in roughly eight of every ten public meeting transcripts they examined, and a separate developer found fabrications in nearly all of 26,000 test transcriptions he created.
The Mankato Clinic and Children's Hospital Los Angeles are among 40 health systems using Whisper-powered tools, raising alarms about the tool's unreliability in sensitive contexts.
Nabla, the company providing the Whisper-based service, acknowledges the confabulation issue but erases the original audio recordings for data safety reasons, leaving clinicians unable to verify transcripts against the source audio.