Large language models and data protection
Briefly

- LLMs have been trained on indiscriminately scraped data, and AI companies generally take a maximalist approach to data collection.
- 'Regurgitation' can lead to LLMs spitting out personal data, which may have been captured through scraping or through text entered by users of LLMs.
- Because of how LLMs work, AI companies cannot properly comply with existing data rights.
Read at Privacy International