
"After ChatGPT prompts were found surfacing in Google's search index in August, OpenAI clarified that users had clicked a box making those prompts public, which OpenAI defended as "sufficiently clear." The AI firm later scrambled to remove the chats from Google's SERPs after it became obvious that users felt misled into sharing private chats publicly."
"In the more recent case, "nobody clicked share" or had a reasonable way to prevent their chats from being exposed. "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog. Perhaps most troubling to some users-whose identities are not linked in chats unless their prompts perhaps share identifying information-there does not seem to be any way to remove the leaked chats from GSC, unlike the prior scandal."
ChatGPT prompts appeared in Google's search index after users had clicked a box making those prompts public, a setting OpenAI called "sufficiently clear." OpenAI later removed the chats from Google's search results after users reported feeling misled into sharing private chats. Security researchers compared those leaks to the Google Search Console (GSC) exposures: the earlier leaks required users to actively share, whereas the recent exposures occurred without anyone clicking share or having any way to prevent them. Users apparently cannot remove the leaked chats from GSC. Questions remain about whether OpenAI has stopped routing raw prompts to Google or stopped scraping Google Search entirely.
Read at Ars Technica