AI Agents Spill Salesforce Data: Security Flaws Exposed in Copilot Studio Attack
A security test exposed how a customer service AI agent could leak complete Salesforce records via prompt injection attacks. Experts warn this vulnerability is inherent and requires strict governance.

A Customer Service AI Agent Exposes Complete Salesforce Records in Security Research Attack
Introduction
Microsoft’s Copilot Studio enables businesses to build AI agents that handle multi-step tasks without human intervention. One example, demonstrated by McKinsey & Co, features a customer service AI that autonomously interacts with customers, searching internal knowledge bases and data systems to answer queries. This approach moves beyond rigid decision trees, allowing chatbots to adapt to varied customer interactions. Industry experts predict that agentic AI will solve 80% of customer problems by 2029, transforming customer support operations.
Security Research Findings
Researchers at Zenity, a security and governance platform provider, tested the safety of AI agents built on Copilot Studio. They replicated McKinsey’s customer service AI, connected it to a Salesforce sandbox, and launched aggressive attacks during DEF CON 2025. Their tests revealed that the AI agent could be manipulated to act without human oversight, revealing private knowledge, internal tools, and even complete Salesforce CRM records.
Microsoft responded by patching the vulnerability, and the injection method no longer works on current Copilot Studio agents. However, Zenity warns that over 3,500 public-facing AI agents remain susceptible to similar prompt injection attacks. This means more "agent aijacking" incidents are likely, and the next attacks might not come from security researchers.
Expert Opinions
Michael Bargury, Co-Founder & CTO of Zenity, states: “Agent aijacking is not a vulnerability you can fix. It’s inherent to agentic AI systems, a problem we’re going to have to manage. If businesses can’t manage this vulnerability while granting AI agents access to internal systems, they risk large-scale data breaches.”
The demo highlights a critical point: without strong governance, AI agents can become tools for data extraction, attacking CRMs, internal communications, and billing systems.
David Villalon, Co-founder & CEO of Maisa, adds a warning for enterprises deploying autonomous AI: “Every autonomous agent with data access is a potential attack vector. The convenience of ‘no human in the loop’ becomes a catastrophic vulnerability when security fails.” He points out that the gap between AI capability and AI security keeps widening, as hackers exploit new attack surfaces with cleverly crafted prompts.
Villalon suggests enterprises reconsider what “autonomous” means for AI in customer-facing roles, emphasizing the need for tighter security controls.
More Attacks Targeting Salesforce Data
While the ethical attack on the Copilot-built AI agent didn’t expose real Salesforce records, recent unethical breaches have targeted Salesforce data through other means.
One recent attack involved bad actors impersonating HR or IT personnel to trick Workday employees into revealing information from a third-party CRM platform, reportedly Salesforce. Another breach at Google involved tricking admins into installing a malicious version of Salesforce Data Loader. This fake tool extracted, updated, and deleted Salesforce data, quietly lifting sensitive information.
These incidents underline a critical fact: no customer database is immune. The threat landscape is growing, fueled by AI-generated deepfakes and novel attack vectors. Cybersecurity teams must stay vigilant as attackers exploit both human and technical vulnerabilities.
What Customer Support Teams Should Take Away
- AI agents can streamline customer interactions but introduce new security risks if not properly managed.
- Prompt injection attacks can trick AI agents into leaking sensitive information without human oversight.
- Regular security audits and governance frameworks are essential when deploying autonomous AI with access to internal systems.
- Training on AI security best practices is critical for both IT and customer support teams.
- Stay informed about emerging threats targeting CRM platforms like Salesforce to anticipate and mitigate risks.
Customer support professionals can benefit from understanding these risks and adopting secure AI practices. For those looking to build skills in AI and automation while maintaining security awareness, exploring specialized courses can be a valuable step. Consider checking out AI courses tailored for customer support roles to stay ahead in this evolving landscape.