Agentic AI Could Push Healthcare Into a Legal Gray Area, Attorney Says - MedCity News
"AI agents - autonomous, task-specific systems designed to perform functions with little or no human intervention - are gaining traction in the healthcare world. The industry is under massive pressure to lower costs without compromising care quality, and health tech experts believe agentic AI could be a scalable solution that can help with this arduous goal. However, this AI category comes with greater risk than that of its AI predecessors, according to one cybersecurity and data privacy attorney."
"Lily Li, founder of law firm Metaverse Law, noted that agentic AI systems, by definition, are designed to handle actions on a consumer or organization's behalf - and this takes the human out of the loop for potentially important decisions or tasks. "If there are hallucinations or errors in the output, or bias in training data, this error will have a real-world impact," she declared."
Agentic AI systems, which are autonomous, task-specific systems designed to perform functions with minimal human intervention, are being deployed to reduce healthcare costs while maintaining care quality. Because these systems remove humans from decision loops, hallucinations, output errors, or biased training data are more likely to cause real-world harm. Potential failures include incorrect prescription refills or mismanaged emergency triage, either of which could result in injury or death. When licensed clinicians are not involved in a decision, it also becomes unclear who bears responsibility and whether malpractice coverage applies. In addition, cybercriminals may exploit agentic AI systems as new attack vectors. Healthcare organizations should therefore incorporate agentic AI-specific risks into their risk assessments and mitigation planning.
Read at MedCity News