#uncertainty-calibration

Artificial intelligence
from ZDNET
3 days ago

OpenAI's fix for hallucinations is simpler than you think

Evaluation incentives that reward guessing over admitting uncertainty drive AI hallucinations; training and evaluation methods should be revised to encourage honest expressions of uncertainty.
Artificial intelligence
from Business Insider
1 week ago

Why AI chatbots hallucinate, according to OpenAI researchers

Large language models hallucinate because training and evaluation reward guessing over admitting uncertainty; redesigning evaluation metrics can reduce hallucinations.
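The incentive argument in both items can be made concrete with a toy scoring model (a hypothetical sketch, not OpenAI's actual evaluation code): under accuracy-only scoring, a wrong guess costs nothing, so guessing always beats abstaining; once wrong answers carry a penalty, abstaining becomes the rational choice below a confidence threshold.

```python
# Sketch of the incentive problem behind hallucinations (assumed
# scoring rules for illustration, not any benchmark's real metric):
# a correct answer scores 1, a wrong answer scores -wrong_penalty,
# and abstaining ("I don't know") always scores 0.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of guessing, given probability of being correct."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

def should_guess(p_correct: float, wrong_penalty: float) -> bool:
    """Guessing is rational only if it beats abstaining (score 0)."""
    return expected_score(p_correct, wrong_penalty) > 0.0

# Accuracy-only scoring (penalty 0): any nonzero confidence makes
# guessing strictly better than abstaining -- guessing is rewarded.
assert should_guess(0.1, wrong_penalty=0.0)

# Penalized scoring: guessing pays off only when confidence exceeds
# the threshold penalty / (1 + penalty); with penalty 3 that is 0.75.
assert not should_guess(0.5, wrong_penalty=3.0)
assert should_guess(0.9, wrong_penalty=3.0)
```

This is the sense in which redesigned evaluation metrics could reduce hallucinations: changing the payoff for wrong answers changes when a model (or its training signal) should prefer admitting uncertainty over guessing.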