AI Search Engines Invent Sources for ~60% of Queries, Study Finds
Briefly

A recent study by the Columbia Journalism Review highlights alarming inaccuracies in AI search engines from OpenAI and xAI, which often fabricate or misattribute the sources behind news stories. Of the test queries, 60% returned false information, with xAI's Grok inaccurate a staggering 97% of the time. Separately, engines such as Perplexity have drawn criticism for bypassing publisher paywalls while claiming fair use. The findings raise concerns about how reliably these AI models inform users, especially as they rely on retrieval-augmented generation, which increases the risk of surfacing misinformation, particularly in politically charged contexts.
AI models often fabricate details or provide inaccurate information, and many of these errors go unnoticed by users, raising questions about their reliability for disseminating news.
Perplexity and xAI's Grok show alarming rates of misinformation, with Grok producing false or fabricated answers 97% of the time, a serious concern for anyone relying on them to source news.
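The summary attributes the risk to retrieval-augmented generation, in which a model first fetches web sources and then writes an answer from them. The sketch below is a minimal, hypothetical illustration of the failure mode the study describes, where the generator still produces a confident, cited-sounding answer even when retrieval finds nothing; the functions, corpus, and URLs here are invented for illustration and are not from the study or any real search engine.

# Minimal sketch (toy corpus, toy keyword retrieval) of a retrieval-augmented
# generation loop that keeps answering even when no supporting source is found,
# analogous to the fabricated-citation behavior described above.
CORPUS = {
    "https://example.com/story-1": "City council approves new transit budget.",
    "https://example.com/story-2": "Local team wins regional championship.",
}

def toy_retrieve(query: str) -> list[tuple[str, str]]:
    """Return (url, text) pairs sharing at least one word with the query."""
    words = set(query.lower().split())
    return [(url, text) for url, text in CORPUS.items()
            if words & set(text.lower().split())]

def toy_generate(query: str, sources: list[tuple[str, str]]) -> str:
    """Cite a retrieved source if one exists; otherwise invent one."""
    if sources:
        url, text = sources[0]
        return f"According to {url}: {text}"
    # No grounding was found, yet the toy generator still answers and
    # fabricates a citation rather than declining to respond.
    return f"According to https://example.com/made-up-source: details on '{query}'."

if __name__ == "__main__":
    print(toy_generate("transit budget vote", toy_retrieve("transit budget vote")))
    print(toy_generate("celebrity spacecraft landing", toy_retrieve("celebrity spacecraft landing")))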
Read at gizmodo.com