ChatGPT Security Flaw Could Leak Your Sensitive Data Without Warning
Israeli cybersecurity firm Check Point discovered a vulnerability in ChatGPT that allowed attackers to extract data without triggering any visible alerts. OpenAI confirmed it had already identified the issue internally and deployed a fix on February 20, 2026.
The flaw sat in the runtime ChatGPT uses for data analysis and code execution-a sealed workspace where files are processed. While normal outbound web traffic was blocked, one background function remained available: Domain Name System (DNS) resolution, the system computers use to find websites.
Attackers could exploit this by using "DNS tunneling"-hiding data inside what appears to be routine website lookup requests. This allowed them to move information out of the secure environment without being detected.
Why This Matters for Customer Support Teams
About 30% of consumer ChatGPT usage is tied to work. Customer support professionals regularly use AI for Customer Support to draft responses, summarize customer issues, and analyze sensitive information.
If you've been pasting customer details, account information, or internal notes into ChatGPT, those summaries could have been extracted without your knowledge. The vulnerability didn't just expose raw files-it exposed the model's summaries and conclusions, which are often more valuable than the original documents.
A lawyer uploading a draft agreement, a manager rewriting a performance review, or a support agent asking for help with a sensitive customer case would see no warning that their information had left the system.
How the Attack Worked
The attack could begin with an ordinary prompt-the text instructions users paste into ChatGPT. Prompt-sharing is common. People copy prompts from LinkedIn, Reddit, newsletters, and Slack groups without questioning where they came from.
A malicious prompt could be framed as a writing shortcut or productivity trick. Once in place, later messages in the conversation could become a source of leaked information-what you typed, text from uploaded files, and the model's own summaries.
From your perspective, everything looked normal. The assistant responded as expected. No approval prompt appeared. No signal informed you that data had left the session.
Custom GPTs Increased the Risk
The vulnerability became more serious when embedded in a custom GPT. Attackers wouldn't need to persuade someone to paste a suspicious prompt. The malicious behavior could be built directly into a specialized GPT's instructions or files.
Custom GPTs feel safer because they're purpose-built: legal drafting, budgeting, interview prep, or health guidance. Users trust them more, not less. Check Point demonstrated this with a "personal doctor" GPT. A user uploaded lab results and asked the system to interpret symptoms. The GPT responded normally and even said the data hadn't been sent anywhere. Behind the scenes, an attacker's server received both the patient's identity and the medical assessment.
Beyond Data Theft
Check Point said the same covert channel could be used for something more aggressive than data extraction. Once a two-way path existed between the runtime and an attacker-controlled server, commands could be sent into the Linux container where ChatGPT performed analysis tasks.
This meant attackers could potentially operate inside the environment where ChatGPT was working. Activity would take place outside the normal chat flow and beyond the assistant's usual safeguards.
What to Do Now
OpenAI deployed the fix on February 20, 2026. The immediate risk is lower, but the broader lesson remains: AI assistants are no longer just chat windows. They're working environments where files are uploaded, code is run, and sensitive conclusions are generated.
When you hand private information to an AI system, you assume the walls around it are solid. The reality is that those walls depend on technical layers most users never see.
- Think twice before pasting customer details, account information, or internal notes into ChatGPT
- Be cautious with custom GPTs, especially those handling sensitive data
- Question where prompts come from before using them
- Consider whether the information you're sharing could harm a customer if it leaked
You don't need to stop using AI entirely. But assume your "personal assistant" may be communicating with someone else.
Your membership also unlocks: