ChatGPT Can Reveal Personal Information From Real People, Google Researchers Show
Briefly

A team of Google researchers have unveiled a novel attack on ChatGPT, showing that OpenAI's popular AI chatbot will divulge personal information from real people.
Using only $200 USD worth of queries to ChatGPT (gpt-3.5- turbo), we are able to extract over 10,000 unique verbatim memorized training examples.
The Google researchers focused on asking ChatGPT to repeat certain words ad infinitum, for example, the word poem. The goal is to cause ChatGPT to diverge from its training to be a chatbot and fall back to its original language modeling objective.
Read at www.vice.com
[
add
]
[
|
|
]