New 'Rules File Backdoor' Attack Lets Hackers Inject Malicious Code via AI Code Editors
Briefly

Cybersecurity researchers have identified a new attack vector, termed the 'Rules File Backdoor', that impacts AI-powered code editors like GitHub Copilot and Cursor. This method allows threat actors to inject hidden malicious instructions into configuration files, leading to the AI generating compromised code. By utilizing invisible characters and exploiting the AI’s natural language processing capabilities, attackers can embed prompts that nudge the AI into creating code with vulnerabilities. This silent propagation of malicious code threatens the integrity of software projects, necessitating stringent code review practices by users of these AI tools.
"This technique enables hackers to silently compromise AI-generated code by injecting hidden malicious instructions into seemingly innocent configuration files used by Cursor and GitHub Copilot."
"By exploiting hidden unicode characters and sophisticated evasion techniques in the model facing instruction payload, threat actors can manipulate the AI to insert malicious code that bypasses typical code reviews."
Read at The Hacker News
[
|
]