A Columbia Journalism Review study has found that AI search tools are frequently erroneous, with more than 60% of AI-generated answers to test queries being incorrect. Of the eight AI models tested, even the most accurate, Perplexity, was wrong 37% of the time. The study criticizes generative search tools for bypassing original content sources and potentially misinforming users. Whereas traditional search engines guide users to reliable content, the alarming inaccuracy rate of AI models raises concerns about their adoption as search solutions.
Conducted by researchers at the Tow Center for Digital Journalism, the analysis probed eight AI models including OpenAI's ChatGPT search and Google's Gemini, finding that overall, they gave an incorrect answer to more than 60 percent of queries.
While traditional search engines typically operate as an intermediary, guiding users to news websites and other quality content, generative search tools parse and repackage information themselves, cutting off traffic flow to original sources.