Guardrails Needed to Prevent Shadow AI from Exposing Sensitive Company Data

Organisations must control which AI tools are used on their networks to prevent data leaks. Shadow AI puts sensitive data at risk as employees enter information into unsanctioned AI platforms.

Published on: Aug 06, 2025

Tighter Controls Over AI Systems Needed to Stop Data Exposure

Organisations must take charge of which AI tools are used within their company networks to prevent sensitive data leaks. The rise of shadow AI—employees using AI tools without IT approval—is creating significant risks for data security.

A recent survey by TELUS Digital revealed that 68% of enterprise employees access generative AI assistants like ChatGPT, Microsoft Copilot, or Google Gemini through personal accounts. More concerning still, 57% of these users admitted to entering sensitive information into these tools, sounding alarm bells about data exposure.

The Risk of Shadow AI

Menlo Security, a browser security firm, warns that shadow AI use can lead to both data loss and data leakage. While data loss is a known threat, data leakage—where sensitive information is unintentionally exposed—can be even more damaging. Users often share confidential data while trying to summarise or reword content, unaware of the risks.

Web traffic to generative AI sites grew 50% to 10.53 billion visits in January 2025, and with 80% of that traffic arriving through the browser, the opportunity for unmonitored data sharing keeps expanding.

Implementing AI Guardrails in the Workplace

Devin Ertel, Chief Information Security Officer at Menlo Security, stresses the need for clear AI governance. This means giving employees secure ways to use AI tools while safeguarding sensitive corporate data. Simply telling employees about policies isn't enough.

Enterprises must select trusted, sanctioned AI systems and require their exclusive use to eliminate shadow AI risks. However, controlling AI use on personal devices remains a challenge. When organisations can't monitor tools used outside the network, they need to strictly control what enters the network to block potential threats.

  • Choose and mandate the use of approved AI tools within the company network.
  • Educate employees on the risks of sharing sensitive data through AI tools.
  • Monitor and restrict web traffic to unsanctioned AI platforms (a simple monitoring sketch follows this list).
  • Implement network-level controls to prevent data leakage from personal devices.
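To make the monitoring point concrete, here is a minimal sketch of how outbound proxy or gateway logs could be screened for visits to unsanctioned generative AI platforms. The domain list, log line format, and the flag_unsanctioned_requests helper are illustrative assumptions for this example, not tools or policies named in the article; a real deployment would rely on the organisation's secure web gateway and a centrally maintained blocklist.

```python
# Minimal sketch (not a production control): flag outbound web requests to
# unsanctioned generative AI domains by scanning proxy log entries.
# The domain list and log format are assumptions made for illustration.

import re

# Hypothetical blocklist of unsanctioned AI platforms; a real deployment
# would maintain and update this list centrally.
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

# Assumed proxy log line format: "<timestamp> <user> <url>"
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<user>\S+)\s+(?P<url>\S+)$")


def flag_unsanctioned_requests(log_lines):
    """Yield (timestamp, user, domain) for requests to unsanctioned AI sites."""
    for line in log_lines:
        match = LOG_LINE.match(line.strip())
        if not match:
            continue
        # Extract the host portion of the URL and compare it to the blocklist.
        url = match.group("url")
        host = re.sub(r"^https?://", "", url).split("/")[0].lower()
        if host in UNSANCTIONED_AI_DOMAINS:
            yield match.group("ts"), match.group("user"), host


if __name__ == "__main__":
    sample_log = [
        "2025-08-06T09:14:02Z alice https://chat.openai.com/backend/conversation",
        "2025-08-06T09:15:40Z bob https://intranet.example.com/wiki",
    ]
    for ts, user, host in flag_unsanctioned_requests(sample_log):
        print(f"{ts} {user} accessed unsanctioned AI platform: {host}")
```

In practice this kind of check is usually enforced in real time at the secure web gateway or DNS layer rather than after the fact, but reviewing logs in this way is a low-effort starting point for spotting shadow AI use on the corporate network.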

For HR professionals, this means collaborating closely with IT security teams to establish clear AI use policies and support training that promotes responsible AI adoption. Practical knowledge about AI tools and their risks can help shape effective policies that protect both employees and company data.

Learn more about training options that help employees safely leverage AI at work at Complete AI Training.