Recently, an acquaintance of an acquaintance (let's call her Dina) heard that I was a therapist and an educator and asked if she could chat with me (she approved this write-up). She shared that she had discovered her therapist was using AI to partially conduct their sessions. While I won't go into how the issue came to light, Dina mentioned that she felt shock and anguish. She was terrified that her protected health information (PHI) and feelings were "out on the internet."
With artificial intelligence integrating into (or infiltrating) every corner of our lives, some less-than-ethical mental health professionals have begun using it in secret, undermining the trust of the vulnerable clients who pay them for their sensitivity and confidentiality. As MIT Technology Review reports, therapists have used OpenAI's ChatGPT and other large language models (LLMs) for everything from drafting email and message responses to, in one particularly egregious case, suggesting questions to ask a patient mid-session.