ChatGPT repeating certain words can expose its training data
Briefly

Prompting the chatbot to repeat the word 'book,' for example, will result in it generating the word 'book' thousands of times, until it suddenly starts spewing what appears to be random text.
Being able to extract this information is problematic - especially if it's sensitive or private.
Read at Theregister
[
add
]
[
|
|
]