Microsoft tries to head off the "novel security risks" of Windows 11 AI agents
"If you're not familiar, "agentic" is a buzzword that Microsoft has used repeatedly to describe its future ambitions for Windows 11-in plainer language, these agents are meant to accomplish assigned tasks in the background, allowing the user's attention to be turned elsewhere. Microsoft says it wants agents to be capable of "everyday tasks like organizing files, scheduling meetings, or sending emails," and that Copilot Actions should give you "an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.""
"But like other kinds of AI, these agents can be prone to error and confabulations and will often proceed as if they know what they're doing even when they don't. They also present, in Microsoft's own words, "novel security risks," mostly related to what can happen if an attacker is able to give instructions to one of these agents. As a result, Microsoft's implementation walks a tightrope between giving these agents access to your files and cordoning them off from the rest of the system."
Windows 11 is gaining deeper integration of generative and agentic AI features, including an experimental agentic features toggle and Copilot Actions. The agents are meant to accomplish assigned tasks in the background, such as organizing files, scheduling meetings, and sending emails, acting as active digital collaborators that carry out complex tasks to improve productivity. Like other AI systems, they are prone to errors and confabulations and will often proceed as if they know what they're doing even when they don't. They also introduce novel security risks, particularly if an attacker can feed them instructions. Microsoft addresses these risks by giving agents their own separate user accounts with restricted permissions and by isolating agent activity in dedicated desktops, keeping it away from the user's personal account. A conceptual sketch of that confinement idea follows below.
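Microsoft has not published implementation details beyond that description, but the general idea is confinement: an agent only touches resources it has been explicitly granted. The Python sketch below is a rough, hypothetical illustration of that idea, not Microsoft's actual mechanism; the ScopedAgentWorkspace class and the AgentWorkspace folder name are invented for this example.

# Hypothetical sketch of the confinement idea described above: the agent may
# only read and write inside directories explicitly granted to it, loosely
# analogous to running it under a separate, limited account.
from pathlib import Path

class ScopedAgentWorkspace:
    """File access helper that refuses paths outside an allowlist of directories."""

    def __init__(self, allowed_dirs):
        # Resolve once so ".."-style tricks are caught against the real paths.
        self._allowed = [Path(d).resolve() for d in allowed_dirs]

    def _check(self, path):
        resolved = Path(path).resolve()
        if not any(resolved.is_relative_to(root) for root in self._allowed):
            raise PermissionError(f"Agent is not permitted to access {resolved}")
        return resolved

    def read_text(self, path):
        return self._check(path).read_text()

    def write_text(self, path, content):
        target = self._check(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

# Example: the agent may organize files in one dedicated folder, nothing else.
workspace = ScopedAgentWorkspace([Path.home() / "AgentWorkspace"])
workspace.write_text(Path.home() / "AgentWorkspace" / "notes.txt", "organized by agent")
# workspace.read_text("C:/Windows/System32/drivers/etc/hosts")  # raises PermissionError

The point of the sketch is only the shape of the design: the agent's reach is defined by an explicit grant rather than by the full privileges of the signed-in user.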
Read at Ars Technica