Can a technology called RAG keep AI models from making stuff up?
Briefly

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called "hallucination"): a creative gap-filling behavior in which an AI language model produces plausible-sounding but false output when asked about something not covered by its training data.
Relying on confabulating AI models has gotten people and companies in trouble, as when lawyers cited court cases that didn't exist or chatbots invented policies and regulations on the spot.
"RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process" to help LLMs stick to the facts.
Read at Ars Technica