AI models misrepresent news events nearly half the time, study says

"Overall, 45 percent of responses had at least one significant issue, according to the research. Sourcing was the most common problem, with 31 percent of responses including information not supported by the cited source, or incorrect or unverifiable attribution, among other issues. A lack of accuracy was the next biggest contributor to faulty answers, affecting 20 percent of responses, followed by the absence of appropriate context, with 14 percent."
"Gemini had the most significant issues, mainly to do with sourcing, with 76 percent of responses affected, according to the study. All the AI models studied made basic factual errors, according to the research. The cited errors include Perplexity claiming that surrogacy is illegal in Czechia and ChatGPT naming Pope Francis as the sitting pontiff months after his death. OpenAI, Google, Microsoft and Perplexity did not immediately respond to requests for comment."
The European Broadcasting Union and the BBC assessed the accuracy of more than 2,700 responses from OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot, and Perplexity. Twenty-two public media outlets across 18 countries, working in 14 languages, posed a common set of questions between late May and early June. Overall, 45 percent of responses contained at least one significant issue: sourcing problems affected 31 percent of responses, accuracy errors 20 percent, and missing context 14 percent. Gemini had the highest rate of significant issues at 76 percent, mainly due to sourcing. All of the models produced basic factual errors, prompting calls for tech firms to improve transparency and reduce errors.
Read at www.aljazeera.com