The case for the uncertain AI: Why chatbots should say "I'm not sure"
Briefly

"Something I've been noticing a lot lately is that the confidence of AI chatbots is getting in the way of the communication between human and machine. Chatbots spit out false information with such confidence that it conveys the idea that the information is true, even though the chatbot has little to no evidence for it - yet that fact is never communicated."
"TL;DR: AI chatbots are often "confidently wrong" because they might've been trained to prioritize helpfulness over honesty ( RLHF) and are limited by the cost of deep reasoning ( Tokens). To build true trust, we may need an AI that mimics human intellectual humility - one that can admit its limitations, cite its sources ( Attribution), or communicate uncertainty through better UI/UX design."
AI chatbots frequently present incorrect or unsupported information with high confidence, obscuring the lack of evidence behind claims. Reinforcement learning from human feedback (RLHF) can push models to prioritize perceived helpfulness over strict honesty, producing plausible but potentially false outputs. Token costs and computational limits reduce the ability to perform deep, multi-step reasoning and to include detailed evidence or citations. Improved trust requires models that express uncertainty, acknowledge limitations, and provide attribution or sources for claims. Better UI/UX can surface confidence levels and guide verification, reducing harmful overconfidence and improving human–AI collaboration.
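To make the UI/UX point concrete, here is a minimal sketch in Python of how a chat interface could surface confidence instead of hiding it. Everything here is hypothetical and not from the article: the `ModelAnswer` type, the `render_answer` function, and the 0.7 threshold are illustrative assumptions, and real systems would first need a calibrated confidence signal (e.g. derived from token log-probabilities or a separate verifier), which most chat APIs do not expose directly today.

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    """A chatbot answer paired with the model's own confidence estimate.

    Hypothetical structure for illustration: assumes the model can produce
    a calibrated confidence score in [0.0, 1.0] alongside its text.
    """
    text: str
    confidence: float
    sources: list[str] | None = None  # attribution, if available

# Arbitrary illustrative threshold; a real product would tune this.
UNCERTAIN_BELOW = 0.7

def render_answer(answer: ModelAnswer) -> str:
    """Format an answer so low confidence is visible, not hidden."""
    lines = []
    if answer.confidence < UNCERTAIN_BELOW:
        # Surface the uncertainty instead of asserting the claim flatly.
        lines.append("I'm not sure, but here is my best guess:")
    lines.append(answer.text)
    if answer.sources:
        lines.append("Sources: " + ", ".join(answer.sources))
    else:
        lines.append("(No sources available; please verify independently.)")
    return "\n".join(lines)

if __name__ == "__main__":
    confident = ModelAnswer(
        "Water boils at 100 °C at sea level.", 0.98,
        sources=["https://en.wikipedia.org/wiki/Boiling_point"],
    )
    shaky = ModelAnswer("The library was founded in 1923.", 0.41)
    print(render_answer(confident))
    print()
    print(render_answer(shaky))
```

The point is not the particular threshold but the design choice: the interface treats confidence and attribution as first-class output, so a "confidently wrong" answer reads as visibly tentative and prompts the user to verify rather than trust it blindly.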
Read at Medium