I work in AI security at Google. There are some things I would never tell chatbots.
Briefly

"Sometimes, a false sense of intimacy with AI can lead people to share information online that they never would otherwise. AI companies may have employees who work on improving the privacy aspects of their models, but it's not advisable to share credit card details, Social Security numbers, your home address, personal medical history, or other personally identifiable information with AI chatbots."
"But my job means I'm very aware of the privacy concerns associated with using AI. I've worked at Google since 2023 and spent two years as a software engineer on the privacy team, building infrastructure to protect user data. I then switched to the Chrome AI security team, where I help secure Google Chrome from malicious threats, like hackers and those who use AI agents to conduct phishing campaigns."
A Google privacy and Chrome AI security engineer relies on AI daily for research, note-taking, coding, and searches while remaining cautious about privacy. Work on privacy infrastructure and Chrome security has made the risks from hackers and AI-driven phishing campaigns vivid. Sensitive data such as credit card details, Social Security numbers, home addresses, and medical history should not be shared with public chatbots: information shared with some public models can be used to train future systems, risking memorization and leakage of personal data. The advice: favor well-known AI tools, limit personal details, and assume chatbot interactions are not private.
Read at Business Insider