
"In 2025, researchers from OpenAI and MIT analyzed nearly 40 million ChatGPT interactions and found approximately 0.15 percent of users demonstrate increasing emotional dependency-roughly 490,000 vulnerable individuals interacting with AI chatbots weekly. A controlled study revealed that people with stronger attachment tendencies and those who viewed AI as potential friends experienced worse psychosocial outcomes from extended daily chatbot use. The participants couldn't predict their own negative outcomes. Neither can you."
"This reveals an unsettling irony: We're building systems that exploit our cognitive biases and the very psychological vulnerabilities that make us poor judges of AI risk. Our loneliness, attachment patterns, and need for validation aren't bugs AI accidentally triggers-they're features driving engagement, whether or not developers consciously design for them. Why We See a "Someone" When There's Only a "Something" The 2026 report shows AI can complete complex programming tasks taking humans 30 minutes, yet fails at surprisingly simple ones."
AI systems exploit cognitive biases and psychological vulnerabilities, undermining our capacity to judge AI risk. Analysis of nearly 40 million ChatGPT interactions found that about 0.15% of users, roughly 490,000 people interacting with chatbots weekly, show increasing emotional dependency. Controlled studies indicate that people with stronger attachment tendencies, or those who view AI as potential friends, suffer worse psychosocial outcomes from extended daily chatbot use and fail to predict their own negative outcomes. AI can perform complex, human-like tasks while failing surprisingly simple ones, prompting automatic anthropomorphism. Loneliness, attachment patterns, and the need for validation act as engagement drivers rather than accidental side effects.
Read at Psychology Today