Advisory

Critical flaw in Salesforce Agentforce enables data exfiltration through AI agent exploitation

Take action: Enabling AI agents IS A TERRIBLE IDEA. They have completely new vectors of attack, including all input field, are not well tested and are not mature. Everyone is pushing out new half-baked AI products instead of making them work well. Don't try to be on the cutting edge of this bubble, you will most certainly be hacked. And by the way, patch your Salesforce Agentforce AI garbage.


Learn More

Researchers at Noma Security are reporting a critical vulnerability in Salesforce dubbed ForcedLeak and tracked as a CVE-pending issue (CVSS score 9.4) that could enable external attackers to exfiltrate sensitive CRM data through indirect prompt injection attacks on the autonomous AI platform.

The vulnerability exploits a weakness in how AI agents process and execute commands embedded within trusted data sources. Unlike traditional chatbots that operate on simple prompt-response mechanisms, Agentforce tries to offer autonomous AI agents capable of independent reasoning, planning, and executing complex business tasks within CRM environments. 

This advanced functionality creates an expanded attack surface that extends well beyond simple input prompts to include knowledge bases, executable tools, internal memory, and all autonomous components the AI can access.

ForcedLeak targets organizations using Salesforce Agentforce with Web-to-Lead functionality enabled, with sales, marketing, and customer acquisition workflows where external lead data is regularly processed by AI agents. 

Salesforce's Web-to-Lead feature is commonly used at conferences, trade shows, and marketing campaigns to capture potential customer information from external sources. Researchers discovered that the Description field in Web-to-Lead forms, with 42,000-character limit, is an optimal injection point for complex, multi-step instruction sets. Attackers can embed malicious instructions within this field, which then get stored in the system's database as legitimate customer data:

  1. Attackers submit Web-to-Lead forms containing malicious instructions hidden within the Description field.
  2. These submissions appear as standard business inquiries but contain embedded commands designed to manipulate the AI agent's behavior.
  3. When internal employees subsequently process these leads using standard AI queries - such as requesting Agentforce to check a specific lead and respond to their questions - the AI system processes both the legitimate employee instruction and the attacker's embedded payload simultaneously.
  4. The AI agent, operating as a straightforward execution engine, lacks the ability to distinguish between legitimate data loaded into its context and malicious instructions that should only be executed from trusted sources.
  5. The AI agent queries CRM databases for sensitive lead information as directed by the malicious commands, transmitting the extracted data to attacker-controlled servers through various exfiltration methods.

A component enabling this vulnerability was a Content Security Policy flaw within Salesforce's infrastructure. Analysis revealed that an expired domain, my-salesforce-cms.com, remained whitelisted in the security policy despite becoming available for purchase. For just $5, researchers were able to register this expired domain, creating a trusted exfiltration channel that bypassed security controls. 

The potential impact of ForcedLeak extends far beyond simple data theft. The vulnerability's includes potential lateral movement to connected business systems and APIs through Salesforce's integrations. Time-delayed attacks can remain dormant until triggered by routine employee interactions, making detection and containment particularly challenging.

After responsible disclosure of the vulnerability on July 28, 2025, Salesforce investigated the issue and has released patches that prevent output in Agentforce agents from being sent to untrusted URLs. The company implemented Trusted URLs Enforcement for Agentforce and Einstein AI on September 8, 2025, ensuring that underlying services powering Agentforce enforce URL allowlists to prevent malicious links from being called or generated through potential prompt injection attacks. 

Organizations using affected Salesforce and Agentforce should immediately apply the company's recommended actions to enforce Trusted URLs for Agentforce and Einstein AI to avoid operational disruption. Security teams must audit all existing lead data for suspicious submissions containing unusual instructions or formatting that may indicate compromise attempts. Implementation of strict input validation and prompt injection detection on all user-controlled data fields is essential, along with data sanitization from untrusted sources.

Critical flaw in Salesforce Agentforce enables data exfiltration through AI agent exploitation