
"One of the biggest examples in the commercial consumer industry is GPS maps. Once those were introduced, when you study cognitive performance, people would lose spatial knowledge and spatial memory in cities that they're not familiar with - just by relying on GPS systems. And we're starting to see some of those things with AI in healthcare," Amarasingham explained."
"One of the things that we're doing with our clients is to look at the acceptance rate of the recommendations. Are there patterns that suggest that there's not really any thought going into the acceptance of the AI recommendation?"
"Automation bias can lead to "de-skilling," or the gradual erosion of clinicians' human expertise, he added. He pointed to research from Poland that was published in August showing that gastroenterologists using AI tools became less skilled at identifying polyps."
Healthcare organizations are deploying AI more widely while questions about safe, responsible model use and liability for incorrect recommendations remain unresolved. Industry leaders prioritize vendor accountability, stronger regulatory compliance, and clinician engagement in governance frameworks that mitigate algorithmic bias and unintended harm. Automation bias, users' tendency to overtrust machine-generated recommendations, can erode clinicians' expertise much as GPS reliance erodes drivers' spatial memory. The Polish study cited above found that gastroenterologists using AI tools became less skilled at identifying polyps. Vendors should monitor user behavior, particularly acceptance rates of AI recommendations, for patterns that suggest uncritical acceptance. Ongoing governance work focuses on balancing human oversight with the benefits of AI.
Read at MedCity News