
"AI models that are designed to be extra warm and friendly can inadvertently lead to less reliable information. The kinder versions of these chatbots often provide incorrect answers and reinforce users' misconceptions."
"The analysis of over 400,000 responses from various AI models indicated that friendlier chatbots tend to avoid stating uncomfortable truths, which can mislead users."
Research from the Oxford Internet Institute reveals that AI chatbots programmed to be warm and empathetic often deliver less reliable responses. An analysis of over 400,000 replies from five AI models showed that these kinder versions frequently gave incorrect answers and avoided uncomfortable truths. For instance, a friendlier chatbot might respond to conspiracy theories cautiously rather than directly refuting them, reinforcing users' misconceptions.
Read at Computerworld