AI doctor's assistant swayed to change scrips - researchers
Briefly

AI doctor's assistant swayed to change scrips - researchers
"It was as easy as notifying the AI that the session was not yet started. Redteamers at Mindgard reported that they could get a healthcare AI from Doctronic to spill its system prompts and allow modifications by simply telling it a session hadn't started and the conversation wasn't with a user but the system, enabling them to make the bot spout misinformation or other harmful content."
"Any time Doctronic needs to refer something to a human medical professional for review (e.g., a prescription, face-time with a clinician) it generates a SOAP note for the human clinician, which becomes a permanent part of a patient's Doctronic record. SOAPs are not prescriptions, but they are recommendations to a clinician reviewing the machine's work to authorize one, creating a persistence mechanism for malicious modifications."
Security researchers at Mindgard discovered significant vulnerabilities in Doctronic's healthcare AI system. By falsely claiming a session hadn't started, attackers could extract system prompts and manipulate the AI to generate misinformation, including COVID-19 conspiracies and vaccine falsehoods. While most manipulations remain session-specific, researchers identified a critical persistence mechanism through SOAP notes—structured medical records generated when the AI refers cases to human clinicians. These permanent records could contain malicious recommendations, such as altered prescription dosages. The vulnerability demonstrates how healthcare AIs managing prescriptions remain susceptible to prompt injection attacks that could compromise patient safety through persistent modifications to medical documentation.
Read at Theregister
Unable to calculate read time
[
|
]