AI Model Used By Hospitals Caught Making Up Details About Patients, Inventing Nonexistent Medications and Sexual Acts
Briefly

Experts found that Whisper, an AI transcription tool, frequently hallucinates fabricated content, a flaw that poses particular risks in healthcare settings.
Despite OpenAI’s warnings against using Whisper in high-risk domains, over 30,000 medical workers and numerous health systems rely on Nabla, a Whisper-based tool, for patient transcription.
Alondra Nelson stated, 'Nobody wants a misdiagnosis. There should be a higher bar,' underscoring the dangers of deploying unreliable AI in critical fields.
Researchers reported that Whisper can generate severe inaccuracies, including invented racial identities, nonexistent medications, and commentary with no basis in the audio input.
Read at Futurism