Large language models (LLMs) in generative AI applications can produce faulty outputs, known as hallucinations, that are not grounded in the input data.
Retrieval-augmented generation (RAG) is a technique for grounding LLMs by supplying them with facts drawn from third-party datasets; new Vertex AI features include dynamic retrieval, which improves cost efficiency by retrieving only when a query actually needs grounding.
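To make the grounding idea concrete, below is a minimal, self-contained sketch of the RAG pattern with a dynamic-retrieval-style gate. It is not the Vertex AI API: `FACTS`, `retrieve_facts`, `call_llm`, and `grounded_answer` are hypothetical stand-ins, the retriever is a toy keyword search over an in-memory list, and the threshold check only illustrates the idea of retrieving (and paying for retrieval) only when the query seems to need it.

```python
# Illustrative RAG sketch with a dynamic-retrieval gate.
# All names here are hypothetical stand-ins, not the Vertex AI SDK.

FACTS = [
    "The 2024 product launch is scheduled for October 15.",
    "Support tickets must be answered within 24 hours.",
]

def retrieve_facts(query: str) -> tuple[list[str], float]:
    """Toy retriever: keyword overlap over a stand-in third-party dataset.

    Returns matching passages plus a crude relevance score in [0, 1].
    """
    terms = set(query.lower().split())
    hits = [f for f in FACTS if terms & set(f.lower().split())]
    relevance = len(hits) / len(FACTS) if FACTS else 0.0
    return hits, relevance

def call_llm(prompt: str) -> str:
    """Stub for the actual model call (e.g. a generate-content request)."""
    return f"<model answer for: {prompt[:60]}...>"

def grounded_answer(query: str, dynamic_threshold: float = 0.5) -> str:
    """Ground the prompt with retrieved facts only when retrieval looks useful."""
    passages, relevance = retrieve_facts(query)
    if relevance >= dynamic_threshold:
        # Grounding path: supply retrieved facts so the answer stays tied to real data.
        context = "\n".join(passages)
        prompt = f"Answer using only these facts:\n{context}\n\nQuestion: {query}"
    else:
        # Dynamic-retrieval idea: skip the grounding context (and its cost)
        # for queries the model can likely answer on its own.
        prompt = query
    return call_llm(prompt)

print(grounded_answer("When is the product launch?"))
```

In a real deployment, the toy retriever would be replaced by a vector store or search backend over your own data, and the relevance gate by whatever signal your platform exposes for deciding when grounding is worth the extra retrieval cost.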