AI in CI/CD pipelines can be tricked into behaving badly
Briefly

AI in CI/CD pipelines can be tricked into behaving badly
"Researchers at Aikido Security have traced the problem back to workflows that pair GitHub Actions or GitLab CI/CD with AI tools such as Gemini CLI, Claude Code Actions, OpenAI Codex Actions or GitHub AI Inference. They found that unsupervised user-supplied strings such as issue bodies, pull request descriptions, or commit messages, could be fed straight into prompts for AI agents in an attack they are calling PromptPwnd."
"AI agents embedded in CI/CD pipelines can be tricked into executing high-privilege commands hidden in crafted GitHub issues or pull request texts. Researchers at Aikido Security have traced the problem back to workflows that pair GitHub Actions or GitLab CI/CD with AI tools such as Gemini CLI, Claude Code Actions, OpenAI Codex Actions or GitHub AI Inference. They found that unsupervised user-supplied strings such as issue bodies, pull request descriptions, or commit messages, could be fed straight into prompts for AI agents in an attack they are calling PromptPwnd. Depending on what the workflow lets the AI do, this can lead to unintended edits to repository content, disclosure of secrets, or other high-impact actions."
The vulnerability arises when CI/CD workflows feed unsanitized user-supplied strings directly into prompts consumed by AI tools. Workflows that pair GitHub Actions or GitLab CI/CD with AI tools such as Gemini CLI, Claude Code Actions, OpenAI Codex Actions, or GitHub AI Inference are affected. Maliciously crafted issue bodies, pull request descriptions, or commit messages can contain hidden high-privilege commands that the AI executes. The attack, labeled PromptPwnd, can cause unintended repository edits, leak secrets, or trigger other high-impact actions depending on the permissions granted to the AI agent.
Read at InfoWorld
Unable to calculate read time
[
|
]