ChatGPT's search results for news are 'unpredictable' and frequently inaccurate
Briefly

Columbia's Tow Center found that OpenAI's ChatGPT search tool often misidentifies quotes from articles, indicating significant issues with its accuracy despite promises of timely answers.
In testing, ChatGPT returned partially or entirely incorrect responses 153 times while acknowledging uncertainty only seven times, highlighting the tool's overconfidence in its outputs.
In one example of misattribution, a quote from the Orlando Sentinel was incorrectly credited to Time, illustrating the bot's struggle with source verification.
OpenAI said it is difficult to address misattribution without the methodology the Tow Center withheld, and suggested the study's approach may not reflect typical use cases.
Read at The Verge