AI hallucinations can't be stopped - but these techniques can limit their damage

AI chatbots frequently provide incorrect references, posing significant misinformation risks in scholarly communication.