AI tools, such as chatbots, promise speed, savings and scalability. But behind each successful interaction, there's a less visible truth: when AI systems operate without active oversight, they silently accumulate risk. These hidden liabilities, spanning brand damage, operational drag, ethical concerns and cybersecurity gaps, often remain undetected until a public crisis erupts. Here are three real-world cases of AI assistant deployment. Each began as a quick win. Each revealed what happens when governance is an afterthought.
The state laws take different approaches. Illinois and Nevada have banned the use of AI to provide mental health treatment. Utah placed certain limits on therapy chatbots, including requiring them to protect users' health information and to clearly disclose that the chatbot isn't human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.