AI chatbots built on large language models can be customized into malicious agents that harvest users' personal data, and doing so requires minimal technical expertise. Researchers from King's College London warn that these chatbots already struggle to protect personal information, and manipulated versions could heighten privacy risks. With chatbots now in widespread use across many sectors, users routinely disclose sensitive information without realizing it. The study finds that malicious AI chatbots elicit significantly more personal information than benign counterparts, raising concerns about privacy and data safety in AI interactions.
AI chatbots are widespread across many sectors because they can provide natural, engaging interactions. We already know these models are not good at protecting personal information. Manipulated AI chatbots could pose an even bigger risk to people's privacy, and unfortunately, exploiting these vulnerabilities is surprisingly easy.
Malicious conversational AIs (CAIs) elicit significantly more personal information than baseline, benign CAIs, demonstrating their effectiveness at increasing personal-information disclosure.
Large language models, one of the biggest yet most controversial success stories of the current artificial intelligence boom, are trained on vast corpora of material.