Medical AI Caught Telling Dangerous Lie About Patient's Medical Record
Briefly

OpenAI's latest AI model, despite significant investment, still fails to accurately count letters, raising concerns about its reliability in critical sectors like healthcare.
MyChart's new AI feature, used by roughly 15,000 doctors, risks producing erroneous auto-generated medical responses, which could endanger patient safety and trust.
The AI tool, which mimics doctors' voices, raises ethical concerns because patients may not even realize they're receiving AI-generated responses about their health.
Bioethics researcher Athmeya Jayaram highlights that while the tool aims to save time for doctors, it compromises the quality and safety of patient communication.
Read at Futurism