Your Prompts Are Leaking: Credentials, Contracts, and Trust at Risk

Sensitive data is slipping into AI prompts as workers paste contracts, client names, and even passwords. In a new survey, 26% of professionals said they had shared sensitive company data and 19% had pasted login credentials.

Published on: Sep 29, 2025

Sensitive Data Is Slipping Into AI Prompts - And Most Teams Don't See It

Rushed deadlines lead to shortcuts. Copy a contract into a chatbot. Paste a username and password to "see what it does." That's how sensitive data leaves your environment - through everyday prompts that feel harmless.

A new Smallpdf survey of 1,000 U.S. professionals shows the scale of the problem. AI is embedded in daily work, but guardrails aren't. The result: leaks, policy violations, and a growing trust problem with clients and stakeholders.

What's Actually Being Shared

  • 26% have entered sensitive company information into a generative AI tool.
  • 19% have entered actual login credentials (personal/work email, cloud storage, even financial accounts).
  • 38% have shared proprietary product details or internal financials.
  • 17% don't remove or anonymize sensitive details before prompting.
  • Nearly 1 in 10 have misled their employer about how they use AI at work.

Many assume prompts are private. In reality, provider settings, retention policies, and third-party integrations affect where your data goes. Unless you've enforced enterprise controls, prompts can be stored, reviewed, or surfaced in ways you don't expect.

Prompts Are the New Leak Surface

Workers trust chat boxes, browser extensions, and built-in copilots. That trust is now an attack surface. Contracts, client names, and credentials get pasted into systems outside your security boundary. Convenience becomes the breach.

  • 24% believe prompts are private.
  • 75% would still use AI even if every prompt were permanently stored.

Traditional data loss prevention (DLP) tooling rarely inspects prompts in real time. Policies lag. Training is missing. And copy-paste is still the default workflow for sensitive documents.

Prompt Hygiene: The Achilles' Heel

  • 19% entered credentials; half of those shared email logins, a quarter shared cloud-storage logins, and some shared financial accounts.
  • 17% skip redaction or anonymization entirely.
  • 70% report no formal training on safe AI use; 44% say there's no AI policy.

That's not a tooling problem alone. It's process and behavior. Without guardrails, people do what's fast.

The Readiness Gap

Awareness is up. Preparation isn't. Many employees aren't confident they can use AI without breaking rules. Some have already been warned or disciplined. The culture sends mixed signals: "move faster" but "don't mess up." That confusion drives risky improvisation.

What Good Looks Like: Governed, Privacy-First AI

Treat every prompt as data in motion. Build controls around it before it leaves your environment.

  • Enterprise over consumer: Use enterprise AI with SSO, tenant isolation, audit logs, and retention off by default.
  • Real-time guardrails: Block credentials, client names, and financial fields at entry. Auto-redact PII/IP before text reaches external models (see the sketch after this list).
  • DLP + secret scanning: Scan prompts and uploads for PII, contract terms, source code secrets, and tokens. Quarantine risky submissions.
  • Context-aware approvals: Flag actions like "summarize contract" or "analyze internal financials" for manager or legal review.
  • Routing: Keep sensitive workloads on approved models/endpoints; restrict external calls for high-risk content.
  • Minimum retention: Turn off training/retention where possible; log metadata for audit, not payloads.
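
To make the "real-time guardrails" idea concrete, here is a minimal Python sketch of an entry-point check. The patterns and the check_prompt helper are illustrative assumptions, not a production ruleset; a real guardrail would pair maintained detectors with context-aware rules.

```python
import re

# Illustrative patterns only - a real guardrail would use a maintained
# ruleset plus context-aware detection, not a handful of regexes.
BLOCK_PATTERNS = {
    "password_field": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    "api_key": re.compile(r"(?i)\b(api[_-]?key|secret[_-]?key)\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Runs before the text leaves your environment."""
    reasons = [name for name, pat in BLOCK_PATTERNS.items() if pat.search(text)]
    return (len(reasons) == 0, reasons)

allowed, reasons = check_prompt("login: alice password: hunter2")
if not allowed:
    print(f"Prompt blocked: {', '.join(reasons)}")  # Prompt blocked: password_field
```

The point of checking at entry is that the decision happens on your side of the boundary: the text is inspected before any network call is made.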

Implementation Blueprint (IT, Security, Engineering)

  • Access: Enforce SSO, MFA, least privilege. Isolate tenants by team or data sensitivity.
  • Network: Proxy all AI traffic through a model gateway that applies policy, redaction, and logging (a minimal gateway sketch follows this list).
  • Secrets: Integrate secret scanners for prompts and attachments. Block known patterns (tokens, keys, passwords) at the edge.
  • DLP: Extend DLP to chat prompts and file uploads. Use pattern + context rules (names + contract terms, financial tables, code blocks).
  • Data controls: Mask identifiers by default. Tokenize where needed. Keep raw data on trusted systems; send summaries/snippets only.
  • Model governance: Maintain an approved model list. Separate dev/test from production. Version prompts and system instructions.
  • Audit: Log who prompted what, when, and why (without storing raw payloads unless required).
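
Here is a minimal sketch combining the Network, Secrets, and Audit items above. Everything in it is an assumption for illustration: forward_to_model is a hypothetical stand-in for an approved endpoint, and the regexes are a toy ruleset, not a real secret scanner.

```python
import hashlib
import json
import re
import time

# Illustrative secret patterns - real deployments would use a maintained
# secret scanner rather than ad-hoc regexes.
SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)\bbearer\s+[a-z0-9._\-]{20,}"),    # bearer-style tokens
]

def forward_to_model(prompt: str) -> str:
    # Hypothetical stand-in for the approved model endpoint behind the gateway.
    return f"(model response to {len(prompt)} chars)"

def gateway_submit(user: str, prompt: str) -> str:
    """Scan, write a metadata-only audit record, then quarantine or forward."""
    hit = any(pat.search(prompt) for pat in SECRET_PATTERNS)
    # The audit record stores a hash of the prompt, never the raw payload.
    audit = {
        "user": user,
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "decision": "quarantine" if hit else "forward",
    }
    print(json.dumps(audit))  # in practice: append to a tamper-evident log
    if hit:
        return "Submission quarantined pending security review."
    return forward_to_model(prompt)

print(gateway_submit("alice", "Summarize the attached contract terms."))
```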

Useful references: OWASP Top 10 for LLM Applications, NIST AI Risk Management Framework.

Practical Checklist for Everyone

  • Never paste secrets: passwords, API keys, tokens, recovery codes, 2FA backups.
  • Strip identifiers: replace names, emails, account numbers, addresses with placeholders (see the sketch after this checklist).
  • Summarize first: share a non-sensitive summary instead of full contracts or financials.
  • Use enterprise chat: only in approved tools with retention disabled and audit enabled.
  • Ask before you upload: if a file contains client data, legal or security should approve the workflow.
  • Assume prompts persist: if it would be a problem on a public forum, don't paste it into a chatbot.
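
For the "strip identifiers" habit, a small helper can handle the obvious formats before anything is pasted. This sketch is illustrative only; the patterns are assumptions that catch common shapes (emails, long digit runs), not names or free-form identifiers, which still need manual review.

```python
import re

# Illustrative patterns - they catch obvious formats only.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b\d{9,18}\b"), "<ACCOUNT_NUMBER>"),
]

def strip_identifiers(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(strip_identifiers("Contact jane.doe@client.com re account 123456789."))
# Contact <EMAIL> re account <ACCOUNT_NUMBER>.
```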

Culture, Trust, and Reputation

The damage isn't just technical. Clients expect discretion. If their data shows up in an external system, trust takes a hit that's hard to repair. Clear policies, visible controls, and honest training protect more than compliance - they protect relationships.

Why Many Orgs Stay Exposed

  • Policy vacuum + training deficit: people fill gaps with guesswork.
  • Misplaced trust: prompts feel private; habit wins over caution.
  • Fragmented ownership: AI spreads faster than governance; documents live outside DLP.

These blockers are solvable. But they require budget, mandate, and shared ownership across security, legal, IT, and the business.

Looking Ahead

The future of AI at work depends on this shift: from casual prompting to governed workflows. Treat prompts like any sensitive data flow - subject to redaction, routing, and audit. Resource it properly and set expectations that stick.

If your team needs structured upskilling on safe prompting and AI workflows, explore training options: Prompt courses and AI courses by job.

Methodology

This analysis draws on a September 2025 survey commissioned by Smallpdf of 1,000 full-time U.S. professionals across industries, job levels, and demographics. Questions covered anonymization habits, credential sharing, policy awareness, training, and tool usage frequency to map risk inside everyday AI-assisted tasks.