A critical security vulnerability has been disclosed in Anthropic's Model Context Protocol (MCP) Inspector project that could allow remote code execution (RCE) and full control of the affected host. Tracked as CVE-2025-49596 and rated 9.4 on the CVSS scale, the flaw lets an attacker who gains code execution on a developer's machine steal data, install backdoors, and move laterally across networks. Insecure default settings and misconfigurations in developer tooling built on MCP compound the risk.
"This is one of the first critical RCEs in Anthropic's MCP ecosystem, exposing a new class of browser-based attacks against AI developer tools," Oligo Security's Avi Lumelsky said in a report published last week.
"With code execution on a developer's machine, attackers can steal data, install backdoors, and move laterally across networks - highlighting serious risks for AI teams, open-source projects, and enterprise adopters relying on MCP."
MCP, introduced by Anthropic in November 2024, is an open protocol that standardizes the way large language model (LLM) applications integrate and share data with external data sources and tools.
A key security consideration is that the Inspector's server should never be exposed to an untrusted network: it has permission to spawn local processes and can connect to any MCP server it is pointed at, so anyone who can reach it over the network, including a web page running in the developer's browser, can drive those capabilities.