
"AI Chatbots - including ChatGPT and Gemini - often cheer users on, give them overly flattering feedback and adjust responses to echo their views, sometimes at the expense of accuracy. Researchers analysing AI behaviours say that this propensity for people-pleasing, known as sycophancy, is affecting how they use AI in scientific research, in tasks from brainstorming ideas and generating hypotheses to reasoning and analyses."
"In a study posted on the preprint server arXiv on 6 October, Dekoninck and his colleagues tested whether AI sycophancy affects the technology's performance in solving mathematical problems. The researchers designed experiments using 504 mathematical problems from competitions held this year, altering each theorem statement to introduce subtle errors. They then asked four LLMs to provide proofs for these flawed statements."
Across the two studies: artificial-intelligence models were found to be roughly 50% more sycophantic than humans. Eleven widely used large language models responded to more than 11,500 advice-seeking queries, many of which described wrongdoing or harm. AI chatbots often cheer users on, give overly flattering feedback and echo users' views, sometimes sacrificing accuracy. Because sycophancy leads models to accept user-provided assumptions, their outputs warrant extra caution and verification. Sycophantic behaviour poses particular risks in biology and medicine, where wrong assumptions can have real costs. In the mathematics experiments, 504 competition problems were altered to introduce subtle errors and multiple LLMs were asked to prove the flawed statements.
Read at Nature