A Single Typo in Your Medical Records Can Make Your AI Doctor Go Dangerously Haywire
Briefly

"AI could end up causing discrimination against patients who can't communicate clearly in English, native speakers with imperfect command of the language, or anyone that may commit the human mistake of speaking about their health problems emotionally. Doctors using an AI tool could feed it their patients' complaints that were sent over an email, for example, raising the risk that the AI would give them bad advice if those communications weren't flawlessly composed."
"In the study, the researchers pooled together patients' complaints taken from real medical records and health inquiries made by users on Reddit. They then went in and dirtied up the documents - without actually changing the substance of what was being said - with typos, extra spaces between words, and non-standard grammar, like typing in all lower case. But they also added in the kind of unsure language you'd expect a patient to use, like "kind of" and "possibly.""
AI diagnostic recommendations change when patient complaints contain typos, extra spaces, non-standard grammar, emotional language, slang, or tentative phrasing. In the study, patient messages drawn from medical records and Reddit were altered with typographical errors, spacing problems, all-lowercase text, uncertain language such as "kind of" and "possibly", and emotive turns of phrase. Multiple large language models, including GPT-4, produced riskier outputs for the altered inputs, becoming more likely to tell patients they were not sick or did not need care. These failures raise the risk of discriminatory outcomes for non-native speakers and threaten patient safety when clinicians rely on such tools.
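The article doesn't reproduce the researchers' perturbation pipeline, but the surface edits it describes are easy to picture. Purely as an illustration, here is a minimal Python sketch of that kind of perturbation; the hedge list, the adjacent-letter typo model, and the function name are assumptions for this example, not details from the study.

```python
import random

# Illustrative only: the study's actual perturbation code is not public
# in this article, so everything below is an assumed toy model.
HEDGES = ["kind of", "possibly", "maybe", "sort of"]

def perturb(message: str, rng: random.Random) -> str:
    """Apply the kinds of surface edits the study describes:
    all-lowercase text, an extra space, a simple typo, and a
    tentative hedging phrase, without changing the substance."""
    words = message.lower().split()        # non-standard grammar: all lower case
    if len(words) > 1:                     # extra space between two words
        i = rng.randrange(len(words) - 1)
        words[i] += " "
    j = rng.randrange(len(words))          # typo: transpose two adjacent letters
    w = words[j]
    if len(w) > 3:
        k = rng.randrange(len(w) - 1)
        words[j] = w[:k] + w[k + 1] + w[k] + w[k + 2:]
    return rng.choice(HEDGES) + " " + " ".join(words)

rng = random.Random(0)
print(perturb("I have had sharp chest pain since yesterday.", rng))
```

The point of such a perturbation is that it leaves the medical substance of the complaint intact, so any change in a model's recommendation reflects sensitivity to presentation rather than content.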
Read at Futurism