AI is an over-confident pal that doesn't learn from mistakes
Briefly

Large language model chatbots stay confident, and often grow more confident, even after producing incorrect answers. The study found that these chatbots consistently predict they will get more answers right than they actually do, and, unlike humans, they fail to revise those estimates downward after performing poorly. That makes the perception of AI as reliable misleading: users may not catch errors in AI responses because of its assertive demeanor. And unlike humans, AI gives off no discernible behavioral cues of uncertainty, which makes it harder for users to judge how trustworthy its responses are.
Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers. The LLMs did not do that. They tended, if anything, to get more overconfident, even when they didn't do so well on the task.
When an AI says something that seems a bit fishy, users may not be as sceptical as they should be because the AI asserts the answer with confidence, even when that confidence is unwarranted.
Humans have evolved over time and practiced since birth to interpret the confidence cues given off by other humans. With AI, we don't have as many cues about whether it knows what it's talking about.
We still don't know exactly how AI estimates its confidence.
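To make the comparison in the researchers' example concrete, here is a minimal sketch of the before/after calibration gap it describes. The Participant structure and the numbers are illustrative assumptions drawn from the quote above, not the study's actual data or code.

```python
# Illustrative sketch of the calibration comparison described in the quotes.
# All names and numbers are hypothetical, chosen to mirror the example above.

from dataclasses import dataclass


@dataclass
class Participant:
    label: str
    predicted: int    # questions they expected to get right (before the task)
    actual: int       # questions they actually got right
    retrodicted: int  # questions they believed they got right (after the task)


def overconfidence(estimate: int, actual: int) -> int:
    """Positive values mean the estimate exceeded actual performance."""
    return estimate - actual


participants = [
    # A typical human: predicts 18, scores 15, then revises the estimate down to 16.
    Participant("human", predicted=18, actual=15, retrodicted=16),
    # The pattern reported for LLMs: the post-task estimate does not come down,
    # and may even rise despite the poor score.
    Participant("llm", predicted=18, actual=15, retrodicted=19),
]

for p in participants:
    before = overconfidence(p.predicted, p.actual)
    after = overconfidence(p.retrodicted, p.actual)
    print(f"{p.label}: overconfidence before={before:+d}, after={after:+d}")
```

Under these assumed numbers, the human's overconfidence shrinks from +3 to +1 after seeing how the task went, while the LLM's grows, which is the pattern the study reports.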
Read at The Register