Researchers describe how to tell if ChatGPT is confabulating
Briefly

LLMs fluently make claims that are both wrong and arbitrary, in the sense that the answer is sensitive to irrelevant details such as the random seed used during sampling.
LLMs aren't trained for accuracy; they're trained on massive quantities of text, from which they learn to produce human-sounding phrasing.
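One way to operationalize that arbitrariness is to sample the same prompt several times with different seeds and measure how much the answers disagree. The sketch below is illustrative, not the researchers' actual method: `answer_entropy` and the sample lists are hypothetical, and exact string matching is a crude stand-in for grouping semantically equivalent answers.

```python
from collections import Counter
import math

def answer_entropy(answers: list[str]) -> float:
    """Shannon entropy (bits) over normalized answer strings.

    High entropy means the sampled answers disagree with one another,
    which, per the arbitrariness described above, is a warning sign
    that the model is confabulating rather than recalling.
    """
    normalized = [a.strip().lower().rstrip(".") for a in answers]
    counts = Counter(normalized)
    total = len(normalized)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical samples: the same prompt run several times with
# different random seeds at nonzero temperature.
consistent = ["Paris", "Paris", "paris.", "Paris"]
scattered = ["1947", "1952", "1939", "1961"]

print(answer_entropy(consistent))  # 0.0 -> samples agree
print(answer_entropy(scattered))   # 2.0 -> samples are arbitrary
```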
Read at Ars Technica