Surfshark Reveals How AI Chatbots Exploit Your Personal Data
Briefly

A study by Surfshark found that nearly one-third of popular AI chatbot applications share user data with third parties. This raises serious privacy and data security concerns: as AI chatbots become more integral to daily life, they gather significant amounts of sensitive data, including browsing history, geolocation, and contact details. With these apps collecting an average of 11 out of 35 potential data types, greater transparency in data handling is needed, and users must be informed of the risks so their personal information is not misused.
A recent study by Surfshark reveals that nearly one-third of AI chatbot applications share user data with third parties, prompting urgent privacy concerns and calls for transparency.
AI chatbots collect an average of 11 out of 35 possible data types, and because this often includes sensitive information such as browsing history and geolocation, it poses real risks to user privacy.
Data sharing often occurs for targeted advertising and lacks transparency, leaving users unaware of how their information is being collected and used.
The global nature of AI chatbots requires clearer international standards to address regulatory challenges and safeguard user data more effectively.
Read at Geeky Gadgets