Nearly 40% of AI chatbot applications share user data with third parties, raising significant privacy and security concerns.
AI chatbots collect an average of 11 out of 35 possible data types, including sensitive information like geolocation, browsing history, and contact details.
Third-party data sharing, often for targeted advertising, lacks transparency; users are left unaware of how their information is handled.
Data breaches, such as the DeepSeek incident, highlight the risks of extensive data collection and the need for stronger cybersecurity measures.
Because AI chatbots operate across borders, regulatory oversight is difficult, underscoring the need for clearer international standards and for users to stay vigilant about protecting their personal data.