The article discusses the nascent stage of generative AI and its alarming flaws, illustrated by the case of Arve Hjalmar Holmen. After querying ChatGPT about himself, Holmen received disturbing falsehoods claiming he had committed crimes, prompting the data rights group Noyb to file a complaint against OpenAI over the defamatory output. The incident underscores the risks of AI's rapid integration into society, where performance is often prioritized over accuracy, leading to harm and misinformation across daily life.
Holmen's fake murder ordeal highlights the rapid pace at which generative AI is being imposed on the world, consequences be damned.
Noyb is asking the agency to 'order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results' - a nearly impossible task.
Despite these flaws, AI has quickly wormed its way into just about every part of our lives, from the internet to journalism to insurance - even into the food we eat.
Though we've seen tons of AI hype, even the most advanced models are still prone to wild hallucinations, like lying about medical records or writing research reports based on rumors.