OpenAI continues to grapple with ChatGPT's hallucinations, in which the chatbot presents false information as fact. Austrian advocacy group Noyb has filed a complaint against OpenAI over an incident in which ChatGPT falsely described a Norwegian man as a murderer. The complaint raises concerns about compliance with the GDPR, which requires personal data to be accurate, and Noyb argues that merely showing a disclaimer that the AI can make mistakes is inadequate. Other documented instances of similar misinformation compound the challenges OpenAI faces in ensuring the reliability of its chatbot.
Noyb claims that this response puts OpenAI in violation of the GDPR. "The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth," Noyb data protection lawyer Joakim Söderberg stated.
"Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."
OpenAI had rebuffed the complainant's request to erase or update their birthdate, claiming that it couldn't change information already in the system and could only block its use in response to certain prompts.
Other notable instances of ChatGPT's hallucinations include falsely accusing one man of fraud and embezzlement, a court reporter of child abuse, and a law professor of sexual harassment.