From The Register
Researchers poison stolen data to make AI results wrong
Large language models (LLMs) base their predictions on their training data and cannot respond effectively to queries that fall outside it. The AI industry has dealt with that limitation through a process called retrieval-augmented generation (RAG), which gives LLMs access to external datasets. Google's AI Overviews in Search, for example, use RAG to provide the underlying Gemini model with current, though not necessarily accurate, web data.
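In outline, a RAG system retrieves the documents most relevant to a query from an external store and prepends them to the prompt, so the model answers from supplied context rather than from training data alone. The sketch below illustrates that pattern with a toy bag-of-words retriever; the embedding function, the document list, and the prompt format are illustrative assumptions, not any production system's API.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG systems use neural embedding models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A stand-in external dataset the base model was never trained on.
documents = [
    "Gemini powers Google's AI Overviews in Search.",
    "Retrieval-augmented generation grounds model output in external data.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def rag_prompt(query: str) -> str:
    # The retrieved passages are prepended to the user's question, so the
    # LLM is asked to answer from the supplied context.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(rag_prompt("What grounds model output in external data?"))
```

This separation between the model and the retrieved store is also what the poisoning research targets: corrupt the external data, and the model's grounded answers go wrong with it.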