
""dangerous""
""struggle to get useful advice from it""
""People share information gradually", "They leave things out, they don't mention everything. So, in our study, when the AI listed three possible conditions, people were left to guess which of those can fit. "This is exactly when things would fall apart.""
University of Oxford researchers tested 1,300 people on realistic medical scenarios, splitting participants into groups that either used AI assistance or did not. Those using AI often did not know what questions to ask and received different answers depending on how they phrased their queries. Chatbot responses mixed useful information with misleading or incomplete advice, making it hard for users to judge the appropriate next step, such as whether to see a GP or go to A&E. Many users also left out details when interacting with the AI, increasing the risk of misinterpretation and potential harm. Polling from November 2025 found that more than one in three UK residents use AI for mental health or wellbeing support.
Read at www.bbc.com