MCP 'design flaw' puts 200k servers at risk: Researcher
Briefly

"The Ox research team says they 'repeatedly' asked Anthropic to patch the root issue, and were repeatedly told the protocol works just fine, thank you, despite 10 (so far) high- and critical-severity CVEs issued for individual open source tools and AI agents that use MCP."
"Anthropic declined to modify the protocol's architecture, citing the behavior as 'expected,' according to Ox researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar."
"A week after their initial report to Anthropic, the AI vendor quietly released an updated security policy — as seems to be the pattern when faced with AI bugs."
"According to the security sleuths, the root issue lies in MCP, an open source protocol originally developed by Anthropic that LLMs, AI applications, and agents use to connect to external data, systems, and one another."
Anthropic's Model Context Protocol (MCP) has a design flaw that puts an estimated 200,000 servers at risk, according to security researchers at Ox. The Ox research team says it repeatedly asked Anthropic to patch the root issue but was told the protocol works as intended. Ten high- and critical-severity CVEs have so far been issued for individual open source tools and AI agents that use MCP, and the researchers argue a fix at the protocol level could have mitigated risks across more than 150 million downloads. Anthropic declined to change the protocol's architecture, labeling the behavior as expected, and instead released an updated security policy that did not resolve the underlying issues.
Read at The Register