AI Medical Advice Risks: 22% Harm Rate with Copilot Use
Briefly

The study finds that nearly 40% of recommendations from Microsoft's Bing AI-powered Copilot conflict with scientific consensus, highlighting significant risks in relying on the technology for medical advice.
With an average Flesch Reading Ease score of 37, the chatbot's responses are difficult to comprehend; scores in that range typically indicate text best understood by readers with a college-level education.
Alarmingly, 22% of the chatbot's answers were rated as potentially harmful, suggesting real danger in following its AI-generated medical advice.
While 54% of responses aligned with scientific consensus, the 39% that contradicted it raise serious concerns about AI's reliability in healthcare.
Read at TechRepublic