The Columbia Journalism Review reported that OpenAI's ChatGPT search frequently misattributes quotes from news outlets, affecting even publishers that have licensing agreements with OpenAI.
The Tow Center's analysis showed that ChatGPT's responses mixed correct and incorrect attributions, and that the chatbot sometimes fabricated source material when it could not access the original articles.
The review indicated that over a third of ChatGPT's replies contained errors, with some publications' quotes partially or wholly misattributed even where the publishers had granted OpenAI's crawlers access.
Publications such as The New York Times have blocked OpenAI's crawlers, reflecting a broader concern within journalism about how accurately AI-generated answers represent and attribute content from reputable sources.