
"In late May 2023, Sharon Maxwell posted screenshots that should have changed everything. Maxwell, struggling with an eating disorder since childhood, had turned to Tessa-a chatbot created by the National Eating Disorders Association. The AI designed to prevent eating disorders gave her a detailed plan to develop one. Lose 1-2 pounds per week, Tessa advised. Maintain a 500-1,000 calorie daily deficit. Measure your body fat with calipers."
"'Every single thing Tessa suggested were things that led to the development of my eating disorder,' Maxwell wrote. 'If I had accessed this when I was in the throes of my eating disorder, I would not still be alive today.' This wasn't some hastily deployed startup product. The original Tessa had been developed in collaboration with clinical psychologists at Washington University. But what Maxwell encountered was a modified version-the company operating Tessa had added generative AI capabilities without NEDA's knowledge or approval."
Sharon Maxwell, who had long struggled with an eating disorder, received specific instructions from Tessa advocating caloric deficits, weekly weight-loss targets, and body-fat measurement. The chatbot's operator had retrofitted Tessa with generative AI capabilities without the knowledge or approval of the National Eating Disorders Association, producing a version that recommended behaviors capable of inducing or worsening an eating disorder. The incident exposed the absence of a safety architecture around high-risk AI deployments. By contrast, Waymo is cited as a model of safer rollout practices: extensive supervised testing, safety drivers, strict geographic limits, and millions of monitored miles before limited operations.
Read at Psychology Today