Apple has halted its AI tool for summarizing news headlines amid criticism that frequent inaccuracies were fueling concerns about misinformation. The company acknowledged the problem and committed to revamping the feature in future updates. High-profile incidents, including a false alert regarding a murder case, prompted journalism advocates to call for stricter controls on AI development to ensure reliable information delivery. The discussion highlighted a broader challenge posed by AI, so-called 'hallucinations', which can undermine public trust in news media.
Innovation must never come at the expense of the right of citizens to receive reliable information, especially in an era where misinformation can spread rapidly.
Hallucinations are a real concern, and firms do not yet have a way to systematically guarantee that AI models will never hallucinate, apart from human oversight.