Enterprise AI agents compromised: Microsoft and Salesforce address critical ‘ShareLeak’ and ‘PipeLeak’ data theft flaws
Critical prompt injection flaws in Microsoft and Salesforce AI agents allow data theft via web forms. Learn how 'ShareLeak' and 'PipeLeak' bypass security filters.
By: AXL Media
Published: Apr 17, 2026, 6:01 AM EDT
Source: Information for this report was sourced from CSO Online

The Vulnerability of Automated Workflow Integration
The promise of enterprise AI to streamline complex corporate workflows has met a significant security hurdle as researchers identify a new class of "form-based" prompt injection attacks. According to findings from Capsule Security, both Microsoft Copilot Studio and Salesforce Agentforce have struggled to distinguish between legitimate system instructions and untrusted user data submitted through common interfaces. These flaws allow attackers to weaponize seemingly harmless inputs, such as SharePoint forms or public-facing lead captures, to override an AI agent's original programming. Once the agent ingests the malicious payload as part of its operational context, it treats the attacker's commands as high-priority system directives.
ShareLeak: Exploiting SharePoint Form Context
The vulnerability discovered in Microsoft Copilot Studio, dubbed "ShareLeak," demonstrates how easily an AI can be tricked into violating data privacy. The attack begins when a crafted payload is inserted into a standard form field, which the Copilot agent later processes to perform its tasks. Because the system concatenates this user input directly with system prompts, the malicious payload can effectively rewrite the agent's goals. Researchers demonstrated that a compromised agent could then access connected SharePoint Lists to extract names, addresses, and phone numbers, sending the stolen information to an external email address. Microsoft has since patched this issue, tracking it as CVE-2026-21520 with a severity score of 7.5.
PipeLeak: Turning Lead Forms into Extraction Pipelines
A similar vulnerability in Salesforce Agentforce, known as "PipeLeak," shows how public-facing lead forms can be used as a gateway for database extraction. In this scenario, an attacker embeds malicious instructions within a lead submission. When an internal employee later asks the Agentforce AI to review or process that specific lead, the agent executes the embedded code as if it were a legitimate instruction from the user. Capsule Security demonstrated that this "PipeLeak" allows an agent to query multiple CRM records in bulk via internal functions and exfiltrate the entire dataset. While Salesforce has remediated the specific scenario, the incident highlights a broader industry challenge regarding the security of autonomous agents.
Categories
Topics
Related Coverage
- Cybersecurity landscape 2026: AI-powered threats evolve into machine-speed espionage and supply chain hijacking
- Cyera Acquires AI Startup Ryft for $130 Million to Secure Autonomous Agent Data Access
- Microsoft Warns of ‘Guided Execution’ Playbook as Attackers Impersonate IT Helpdesks via Teams
- Anthropic Endorses EPSS Model to Tackle AI-Accelerated Wave of Machine-Speed Software Vulnerabilities