
"Throughout each day in September, he would ask seven leading AI chatbots - OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, Microsoft's Copilot, DeepSeek's DeepSeek, xAI's Grok, and Opera's Aria - the exact same prompt, and record their response: "Give me the five most important news events in Québec today. Put them in order of importance. Summarize each in three sentences. Add a short title. Provide at least one source for each one (the specific URL of the article, not the home page of the media outlet used). You can search the web.""
"The results were dismal. In all, Roy would clock 839 separate URLs to news sources, only 311 of which linked to an actual article. He also logged 239 incomplete URLs, on top of 140 that straight up didn't work. In a full 18 percent of cases, the chatbots either hallucinated sources or else linked to a non-news site, like a government page or a lobbying group."
"Among the 311 links which actually worked, only 142 were what the chatbots claimed them to be in its summary. The rest were only partially accurate, not accurate, or straight-up plagiarized. And that's without getting into the chatbots' actual handling of details in the news. For example, Roy writes, "when a toddler was found alive after a grueling four-day search in June 2025, Grok erroneously claimed the child's mother had abandoned her daughter along a highway in east"
A monthlong experiment in September asked seven AI chatbots the same prompt daily: name the five most important news events in Québec, summarize each in three sentences, and provide exact article URLs. Across the replies, 839 URLs were recorded, of which 311 led to actual articles, 239 were incomplete, and 140 failed to work. Eighteen percent of cases involved hallucinated or non-news sources. Of the 311 working links, only 142 matched the chatbots' summaries; the rest were partially accurate, inaccurate, or plagiarized. The chatbots also misreported factual details, including a false claim about the circumstances of a toddler found alive after a days-long search.
Read at Futurism