The case for the uncertain AI: Why chatbots should say "I'm not sure"
Briefly

"Something I've been noticing a lot lately is that the confidence of AI chatbots is getting in the way of the communication between human and machine. Chatbots spit out false information with such confidence that it conveys the idea that the information is true, even though the chatbot has little to no evidence for it - yet that fact is never communicated."
"TL;DR: AI chatbots are often "confidently wrong" because they might've been trained to prioritize helpfulness over honesty ( RLHF) and are limited by the cost of deep reasoning ( Tokens). To build true trust, we may need an AI that mimics human intellectual humility - one that can admit its limitations, cite its sources ( Attribution), or communicate uncertainty through better UI/UX design."
AI chatbots frequently present incorrect or unsupported answers with high confidence, misleading users. Rewarding helpfulness through RLHF can prioritize fluent, assertive outputs over honesty. Token-budget constraints make deep, costly chain-of-thought reasoning impractical, pushing models toward concise but unsupported claims. Improving trust requires models that admit limitations, provide source attribution, and expose calibrated uncertainty to users. Interface and design changes can communicate confidence levels and support better decision-making. Balancing helpfulness with transparency and epistemic responsibility demands engineering, evaluation, and policy adjustments to reduce overconfidence and improve accountability.
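To make the "expose calibrated uncertainty" idea concrete, here is a minimal sketch, assuming the model (or a separate calibration layer) already produces a probability that its answer is correct. Everything in it is hypothetical illustration, not from the article: the names (`AnswerWithUncertainty`, `render_answer`, `CONFIDENCE_FLOOR`) and the 0.6 threshold are assumptions chosen for the example.

```python
# Sketch: surfacing calibrated confidence and source attribution in a
# chatbot answer, instead of bare assertive text. All names and the
# threshold below are hypothetical, for illustration only.
from dataclasses import dataclass, field

# Assumed cutoff: below this, the UI leads with "I'm not sure".
CONFIDENCE_FLOOR = 0.6

@dataclass
class AnswerWithUncertainty:
    text: str          # the model's answer
    confidence: float  # calibrated probability the answer is correct, in [0, 1]
    sources: list = field(default_factory=list)  # attribution for the claim

def render_answer(answer: AnswerWithUncertainty) -> str:
    """Turn a scored answer into user-facing text that exposes uncertainty."""
    if answer.confidence < CONFIDENCE_FLOOR:
        return "I'm not sure, but my best guess is: " + answer.text
    cited = f" (sources: {', '.join(answer.sources)})" if answer.sources else ""
    return f"{answer.text}{cited} [confidence: {answer.confidence:.0%}]"

if __name__ == "__main__":
    confident = AnswerWithUncertainty(
        text="Water boils at 100 °C at sea level.",
        confidence=0.97,
        sources=["physics textbook"],
    )
    shaky = AnswerWithUncertainty(
        text="The meeting was probably moved to Thursday.",
        confidence=0.35,
    )
    print(render_answer(confident))  # asserts, with citation and confidence
    print(render_answer(shaky))      # hedges, matching the article's proposal
```

The design choice here mirrors the article's thesis: rather than suppressing low-confidence answers entirely, the interface keeps them but reframes them as guesses, preserving helpfulness while restoring honesty.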
Read at Medium