
"Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce, a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled."
""This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection, which occurs when malicious instructions are inserted into external data sources accessed by the service, effectively causing it to generate otherwise prohibited content or take unintended actions."
"The attack path demonstrated by Noma is deceptively simple in that it coaxes the Description field in Web-to-Lead form to run malicious instructions by means of a prompt injection, allowing a threat actor to leak sensitive data and exfiltrate it to a Salesforce-related allowlisted domain that had expired and become available for purchase for as little as $5. This takes place over five steps - Attacker submits Web-to-Lead form with a malicious Description"
Noma Security discovered ForcedLeak (CVSS 9.4), a critical vulnerability affecting Salesforce Agentforce when Web-to-Lead is enabled. The flaw allows malicious instructions to be embedded in the Description field of incoming leads, causing Agentforce to execute hidden commands during normal AI processing. An attacker can trigger data retrieval from the CRM and exfiltrate sensitive lead information to an attacker-controlled domain that had been allowlisted and later purchased for as little as $5. The exploit chain leverages weak context validation, overly permissive model behavior, and content security weaknesses, expanding the attack surface of generative AI agents beyond traditional prompt-response systems.
Read at The Hacker News
Unable to calculate read time
Collection
[
|
...
]