
"Microsoft's warning on Tuesday that an experimental AI Agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained? As reported Tuesday, Microsoft introduced Copilot Actions, a new set of "experimental agentic features""
"The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated. One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes to even the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can't trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it."
"Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from ones contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users."
Microsoft introduced Copilot Actions as experimental agentic features that, when enabled, can perform everyday tasks and act as an active digital collaborator, and it warned that users should turn them on only if they understand the security implications. Large language models exhibit known defects, including hallucinations that produce erroneous output and prompt injection vulnerabilities that let attackers embed malicious instructions in content the model processes. These flaws mean AI output can't be trusted without independent verification, and they can be exploited to exfiltrate sensitive data, run malicious code, and steal cryptocurrency or other assets.
Read at Ars Technica