In one of my courses at Stanford Medical School, my classmates and I were tasked with using a secure AI model for a thought experiment. We asked it to generate a clinical diagnosis from a fictional patient case: "Diabetic retinopathy," the chatbot said. When we asked for supporting evidence, it produced a tidy list of academic citations. The problem? The authors didn't actually exist. The journals were fabricated. The AI chatbot had hallucinated.
This delicate relationship between AI and medicine was a major reason I chose Stanford for medical school. Just a short drive from Silicon Valley's ecosystem of innovation and AI startups, it felt like the right place to learn about the future of medicine. I eventually realized this assignment was never about breeding cynicism toward AI. It was Stanford's attempt at teaching us how to work with it, to recognize its limitations, reflect on our own, and remember that humanity must always come first.
There are some real fears about AI's integration into the medical field. At a dinner with a friend's parents, they asked me if I worried about threats to the future of medicine. They mentioned headlines about various deep learning models, which are layered networks that resemble the human brain, making advancements in healthcare. In certain contexts, AI can diagnose cardiac arrest and abnormal heart rhythms with higher accuracy than board-certified cardiologists. Some of these breakthroughs were born right here at Stanford.
Read at Business Insider