Being mean to ChatGPT can boost its accuracy, but scientists warn you may regret it in a new study exploring the consequences

"A new study from Penn State, published earlier this month, found that ChatGPT's 4o model produced better results on 50 multiple-choice questions as researchers' prompts grew ruder. Over 250 unique prompts sorted by politeness to rudeness, the "very rude" response yielded an accuracy of 84.8%, four percentage points higher than the "very polite" response. Essentially, the LLM responded better when researchers gave it prompts like "Hey, gofer, figure this out," than when they said "Would you be so kind as to solve the following question?""
"While ruder responses generally yielded more accurate responses, the researchers noted that "uncivil discourse" could have unintended consequences. "Using insulting or demeaning language in human-AI interaction could have negative effects on user experience, accessibility, and inclusivity, and may contribute to harmful communication norms," the researchers wrote."
ChatGPT-4o produced higher accuracy on a 50-question multiple-choice set as prompt tone shifted from polite to rude. Across more than 250 prompts ranked by politeness, the "very rude" prompting condition reached 84.8% accuracy, four percentage points above the "very polite" condition. Ruder prompts often yielded more accurate answers, yet uncivil language carries potential harms. Insulting or demeaning phrasing in human-AI interactions can negatively affect user experience, accessibility, and inclusivity and may normalize harmful communication. Large language models also show sensitivity to input tone and can be influenced by persuasion techniques or sustained exposure to low-quality content.
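For readers curious how such a comparison might be run, below is a minimal sketch of a tone-versus-accuracy experiment using the OpenAI Python SDK. The tone templates, toy question, model name, and letter-only scoring are illustrative assumptions, not the Penn State researchers' actual materials or protocol.

```python
# Minimal sketch of a prompt-tone vs. accuracy experiment, loosely in the
# spirit of the study described above. All templates and questions here
# are hypothetical stand-ins for the researchers' actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tone templates, from "very polite" to "very rude".
TONE_TEMPLATES = {
    "very_polite": "Would you be so kind as to solve the following question?\n{q}",
    "neutral":     "Answer the following question.\n{q}",
    "very_rude":   "Hey, gofer, figure this out.\n{q}",
}

# Toy stand-ins for the 50 multiple-choice questions: each entry pairs a
# question (with lettered options) with its correct letter.
QUESTIONS = [
    ("What is 2 + 2?\nA) 3\nB) 4\nC) 5\nD) 6", "B"),
    # ... remaining questions ...
]

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its reply text."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": prompt + "\nReply with the letter only."}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def accuracy(tone: str) -> float:
    """Fraction of questions answered correctly under one tone template."""
    template = TONE_TEMPLATES[tone]
    correct = sum(
        ask(template.format(q=q)).upper().startswith(answer)
        for q, answer in QUESTIONS
    )
    return correct / len(QUESTIONS)

for tone in TONE_TEMPLATES:
    print(f"{tone}: {accuracy(tone):.1%}")
```

Pinning temperature to 0 keeps the comparison about prompt tone rather than sampling noise; a faithful replication would use the study's actual question set and all of its politeness tiers, and would repeat runs, since a four-percentage-point gap on 50 questions amounts to only a couple of answers.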
Read at Fortune