#uncertainty-calibration

Artificial intelligence
from InfoQ
1 day ago

OpenAI Study Investigates the Causes of LLM Hallucinations and Potential Solutions

LLM hallucinations largely result from statistical patterns picked up during pretraining and from evaluation metrics that reward guessing; penalizing confident errors and rewarding expressions of uncertainty can reduce them.
Artificial intelligence
from ZDNET
1 month ago

OpenAI's fix for hallucinations is simpler than you think

Evaluation incentives that reward guessing instead of admitting uncertainty drive AI hallucinations; training and evaluation methods should be revised to encourage honest uncertainty.
Artificial intelligence
from Business Insider
1 month ago

Why AI chatbots hallucinate, according to OpenAI researchers

Large language models hallucinate because training and evaluation reward guessing over admitting uncertainty; redesigning evaluation metrics can reduce hallucinations.
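As a rough illustration of the evaluation change these pieces describe, the sketch below scores graded answers so that a confident wrong answer is penalized while an explicit "I don't know" is left neutral rather than punished. The scoring function, the penalty value, and the ABSTAIN sentinel are illustrative assumptions, not details taken from the OpenAI study.

```python
from dataclasses import dataclass

# Minimal sketch of an evaluation rule that rewards honest uncertainty:
# correct answers earn +1, abstentions earn 0, and confident wrong answers
# are penalized instead of being treated the same as abstaining.
# WRONG_ANSWER_PENALTY and ABSTAIN are hypothetical, for illustration only.

ABSTAIN = "I don't know"
WRONG_ANSWER_PENALTY = 1.0  # hypothetical cost of a confident miss


@dataclass
class Response:
    answer: str     # model's answer text, or ABSTAIN
    reference: str  # gold answer used for grading


def score(response: Response) -> float:
    """Score one graded response under the uncertainty-aware rule."""
    if response.answer == ABSTAIN:
        return 0.0                       # abstaining is neutral, not punished
    if response.answer.strip().lower() == response.reference.strip().lower():
        return 1.0                       # correct answer earns full credit
    return -WRONG_ANSWER_PENALTY         # a wrong guess costs more than abstaining


if __name__ == "__main__":
    graded = [
        Response("Paris", "Paris"),   # correct -> +1.0
        Response(ABSTAIN, "1867"),    # honest uncertainty -> 0.0
        Response("1869", "1867"),     # confident guess, wrong -> -1.0
    ]
    total = sum(score(r) for r in graded)
    print(f"total score: {total:+.1f}")
```

Under plain accuracy the wrong guess and the abstention would score the same, so a model maximizing the benchmark has nothing to lose by guessing; the asymmetric penalty above is what makes admitting uncertainty the better strategy.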