
"As AI use becomes increasingly common in media professions, humans need to remain in the process to ensure AI output is useful, appropriate - and correct. Accuracy is crucial to maintaining credibility, and the sharp eyes and fact-checking skills of editors are crucial to accuracy. AI so commonly generates wrong or fake information that the phenomenon has a name: hallucination. Generated outputs confidently assert "facts" that are fiction, sources that don't exist and events that never happened."
"Without someone checking behind AI output, hallucinations get published and end up before a mass audience. A dedicated editor investing time and effort into fact-checking and close reading can root out incorrect, incomplete or out-of-date information, as well as flagging inappropriate, insensitive or just plain awkward content. What happens when editors are left out? A few recent examples: Wired's retraction statement illustrates the problem at the heart:"
Generative AI is increasingly used in work, teaching, and media, but it frequently produces incorrect or fabricated information known as hallucinations. Human editors are necessary to verify facts, ensure appropriateness, and preserve credibility. Editors apply close reading and fact-checking to catch wrong, incomplete, outdated, or insensitive content before publication. Omitting editorial oversight leads to published errors, retractions, and breaches of audience trust. Recent journalistic and media examples demonstrate how AI-driven mistakes can go viral and damage reputation. Increasing hallucination rates in newer AI models make sustained human involvement in editorial processes essential.
Read at Poynter