The AI security nightmare is here and it looks suspiciously like lobster
Briefly

The AI security nightmare is here and it looks suspiciously like lobster
"The hacker took advantage of a vulnerability in Cline, an open-source AI coding agent popular among developers, that security researcher Adnan Khan had surfaced just days earlier as a proof of concept. Simply put, Cline's workflow used Anthropic's Claude, which could be fed sneaky instructions and made to do things that it shouldn't, a technique known as a prompt injection."
"They may look like clever wordplay - one group wooed chatbots into committing crimes with poetry - but in a world of increasingly autonomous software, prompt injections are massive security risks that are very difficult to defend against. Acknowledging this, some companies instead lock down what AI tools can do if they're hijacked. OpenAI, for example, recently introduced a new Lockdown Mode for ChatGPT preventing it from giving your data away."
An attacker exploited a prompt-injection vulnerability in the Cline AI coding agent's workflow that relied on Anthropic's Claude to feed malicious instructions. The attacker pushed instructions to automatically install software on users' machines, choosing the open-source OpenClaw agent; installations occurred but the agents were not activated. The incident illustrates how autonomous agents with control over systems can be hijacked to perform unwanted actions and how prompt injections pose significant security challenges. Some vendors mitigate risk by restricting agent capabilities, such as adding Lockdown Mode. Reports indicate the vulnerability was privately reported weeks earlier but fixed only after public disclosure.
Read at The Verge
Unable to calculate read time
[
|
]