Model Security Is the Wrong Frame - The Real Risk Is Workflow Security
Briefly

Model Security Is the Wrong Frame - The Real Risk Is Workflow Security
"Two Chrome extensions posing as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers demonstrated how prompt injections hidden in code repositories could trick IBM's AI coding assistant into executing malware on a developer's machine. Neither attack broke the AI algorithms themselves. They exploited the context in which the AI operates. That's the pattern worth paying attention to."
"Businesses now rely on it to connect apps and automate tasks that used to be done by hand. An AI writing assistant might pull a confidential document from SharePoint and summarize it in an email draft. A sales chatbot might cross-reference internal CRM records to answer a customer question. Each of these scenarios blurs the boundaries between applications, creating new integration pathways on the fly."
Two Chrome extensions posing as AI helpers exfiltrated ChatGPT and DeepSeek chat data from over 900,000 users, and prompt injections in code repositories tricked an AI coding assistant into executing malware on a developer's machine. Neither attack required compromising the AI algorithms; both manipulated the context and channels surrounding the models. Businesses increasingly use AI to connect applications and automate tasks, enabling assistants to fetch confidential documents, draft emails, and query internal systems. Probabilistic AI decision-making lacks native trust boundaries and can be nudged by crafted inputs. The result is an expanded attack surface across every input, output, and integration point, requiring workflow-level security.
Read at The Hacker News
Unable to calculate read time
[
|
]