Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration
Briefly

Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration
"The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables - executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories."
"If a user started Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would issue API requests before showing the trust prompt, including potentially leaking the user's API keys."
"A code injection vulnerability stemming from a user consent bypass when starting Claude Code in a new directory that could result in arbitrary code execution without additional confirmation via untrusted project hooks defined in .claude/settings.json."
Cybersecurity researchers discovered three significant security vulnerabilities in Anthropic's Claude Code AI-powered coding assistant. These vulnerabilities exploit configuration mechanisms including Hooks, Model Context Protocol servers, and environment variables to execute arbitrary shell commands and steal Anthropic API keys. The vulnerabilities include code injection flaws that bypass user consent during initialization and allow arbitrary code execution without confirmation, automatic shell command execution when starting Claude Code in untrusted directories, and information disclosure vulnerabilities enabling malicious repositories to exfiltrate API keys. Simply opening a crafted repository is sufficient to compromise developer credentials and redirect authenticated API traffic to attacker-controlled endpoints. Anthropic has released patches addressing all three vulnerabilities across multiple versions.
Read at The Hacker News
Unable to calculate read time
[
|
]