
"When the Danish psychiatrist Søren Dinesen Østergaard published his ominous warning about AI's effects on mental health back in 2023, the tech giants fervently building AI chatbots didn't listen. Since that time, numerous people have lost their lives after being drawn into suicide or killed by lethal drugs after obsessive interactions with AI chatbots. More still have fallen down dangerous mental health rabbit holes brought on by intense fixations on AI models like ChatGPT."
"Now, Østergaard is out with a new warning: that the world's intellectual heavyweights are accruing a "cognitive debt" when they use AI. In a new letter to the editor published in the journal Acta Psychiatrica Scandinavica and flagged by PsyPost, Østergaard asserts that AI is eroding the writing and research abilities of scientists who use it. "Although some people are naturally gifted, scientific reasoning (and reasoning in general) is not an inborn ability, but is learned through upbringing, education and by practicing," Østergaard explained. Though AI's ability to automate a wide variety of scholarly tasks is "fascinating indeed," it's not without "negative consequences for the user," the scientist explains."
"As an example of the kinds of long-term consequences he's worried about, the scholar cites the AI researchers Demis Hassabis and John Jumper, who won the 2024 Nobel Prize in chemistry when they "most impressively demonstrated" the potential for AI to assist in scientific discovery. Using AlphaFold2, an AI system developed by Google DeepMind, Hassabis and Jumper were able to accurately predict the three-dimensional structures of virtually all known proteins - a major scientific achievement."
An earlier warning about AI's effects on mental health preceded numerous deaths and dangerous mental-health fixations tied to obsessive chatbot interactions. Østergaard's latest warning holds that leading intellectuals accrue a "cognitive debt" when they use AI, eroding their writing and research abilities. Scientific reasoning is learned through upbringing, education, and deliberate practice, so automating scholarly tasks with AI can carry negative consequences for users. The AlphaFold2 breakthrough illustrates AI-assisted discovery, but it depended on decades of intense scientific training. Heavy reliance on AI risks degrading the foundational skills and long-term scientific reasoning needed for future breakthroughs.
Read at Futurism