In one of my courses at Stanford Medical School, my classmates and I were tasked with using a secure AI model for a thought experiment. We asked it to generate a clinical diagnosis from a fictional patient case: "Diabetic retinopathy," the chatbot said. When we asked for supporting evidence, it produced a tidy list of academic citations. The problem? The authors didn't actually exist. The journals were fabricated. The AI chatbot had hallucinated.
Last week, Google AI pioneer Jad Tarifi sparked controversy when he told Business Insider that it no longer makes sense to get a medical degree, since, in his telling, artificial intelligence will render such an education obsolete by the time you're a practicing doctor. Companies have long touted the technology as a way to free up overworked doctors' time and even to assist them with specialized skills, such as scanning medical images for tumors. Hospitals have already been rolling out AI tools to help with administrative work.
According to Google leaders Greg Corrado and Joëlle Barral, Google's Med-Gemini showcases advanced multimodal capabilities that could enhance workflows for clinicians, researchers, and patients.
Uncontrolled deployment of artificial intelligence in medicine could contaminate electronic health records, distorting the data and steadily degrading the models that interpret it.
My time with Doctors Without Borders, along with my ongoing humanitarian and anti-war advocacy, deeply informs this mission. Whether I am designing decision support or evaluating a system's impact, I constantly reflect on how technology can serve, not replace, human care, especially in fragile or underserved contexts.