
"Employees could be opening up to OpenAI in ways that put sensitive data at risk. According to a study by security biz LayerX, a large number of corporate users paste Personally Identifiable Information (PII) or Payment Card Industry (PCI) numbers right into ChatGPT, even if they're using the bot without permission. In its Enterprise AI and SaaS Data Security Report 2025, LayerX blames the growing, largely uncontrolled usage of generative AI tools for exfiltrating personal and payment data from enterprise environments."
"With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI. "With 82 percent of pastes coming from unmanaged personal accounts, enterprises have little to no visibility into what data is being shared, creating a massive blind spot for data leakage and compliance risks," the report says."
In summary: 45 percent of enterprise employees use generative AI tools, 77 percent of those users copy and paste data into chatbot queries, and 22 percent of those copy-and-paste operations include PII or PCI numbers. Eighty-two percent of pastes originate from unmanaged personal accounts, leaving enterprises with little to no visibility into what is being shared and creating a blind spot for data leakage and compliance risk. About 40 percent of file uploads to generative AI sites include PII/PCI, and 39 percent of those uploads come from non-corporate accounts. The study's monitoring relies on browser-based enterprise extensions, which capture web interactions but not API calls. The resulting leakage exposes organizations to geopolitical, regulatory, compliance, and model-training risks; past corporate incidents have already prompted temporary bans on chatbot use.
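Detecting PCI data in pasted text, as the browser extensions described above would need to do, typically combines pattern matching with a checksum. As an illustrative sketch only (not LayerX's actual implementation), a minimal check for payment card numbers might pair a digit-run regex with the standard Luhn validation:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_pci(text: str) -> bool:
    """Flag text containing a digit run that validates as a card number."""
    return any(luhn_valid(m.group()) for m in CARD_RE.finditer(text))
```

The Luhn step matters: flagging every 13-16 digit run alone would drown a monitor in false positives from order IDs and phone numbers, so production DLP tools layer checksums and context rules on top of raw patterns.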
Read the full story at The Register.