Government warns employees: Avoid ChatGPT and AI tools for official use
21 Dec 2025 · 12:19 IST
The Government of India has issued strict directives instructing all central government officials not to use AI platforms such as ChatGPT for any official or sensitive information. The concern is clear: data leaks, national security threats, and misuse of personal information.
- Ban on sharing official data via AI platforms
- Risk of national data exposure to foreign entities highlighted
- Personal information sharing discouraged due to long-term misuse risks
Why this matters
Information entered into public AI tools can be stored, processed, and used to improve those systems. That creates a direct risk of sensitive data reaching foreign servers or third parties. For government work, this is a security issue, not a convenience issue.
What the directive requires
- Do not upload, paste, or reference official, confidential, or internal data in any AI tool.
- Do not share personal details, photos, or videos with AI platforms.
- Assume anything entered into a public AI system can be retained or learned from.
- Strict compliance is expected across departments, units, and agencies.
Use approved channels instead
- Rely on department-sanctioned systems for communication, file handling, and analysis.
- Use official email, secure document repositories, and tools vetted by your IT/security cell.
- For translations, summaries, or drafting, use internal tools cleared by your department, not public AI platforms.
- When in doubt, check with your nodal security officer before using any third-party software.
Handling personal data
Do not upload personal identifiers, images, or videos to AI tools. Cybersecurity experts warn that such data can be retained and later misused. Keep personal and professional data off public AI platforms, especially anything that could reveal identity, location, or official role.
Immediate actions for departments
- Circulate this directive and record acknowledgement from staff.
- Update SOPs to explicitly ban AI tools for official or sensitive work.
- Coordinate with IT to block access to public AI platforms on official networks and devices, where feasible (an illustrative sketch follows this list).
- Run quick awareness briefings for all teams, including contractors and vendors with access.
- Define an escalation path for suspected data exposure and near-misses.
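How a department might start on the blocking step is sketched below. This is a minimal, illustrative example only: the domain list is an assumption (not exhaustive, not an official blocklist), and actual enforcement should follow the firewall or DNS policy approved by your IT/security cell.

```python
# Illustrative sketch: generate hosts-file style sinkhole entries from a
# department-maintained list of public AI domains. The domains below are
# example entries, not an official or complete blocklist.
BLOCKED_AI_DOMAINS = [
    "chat.openai.com",       # example entry
    "chatgpt.com",           # example entry
    "gemini.google.com",     # example entry
    "copilot.microsoft.com", # example entry
]

def hosts_entries(domains, sinkhole_ip="0.0.0.0"):
    """Return hosts-file lines that point each blocked domain at a sinkhole address."""
    return [f"{sinkhole_ip} {domain}" for domain in domains]

if __name__ == "__main__":
    for line in hosts_entries(BLOCKED_AI_DOMAINS):
        print(line)
```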
Day-to-day safeguards for officials
- Treat prompts as data disclosures: if it's not public, don't type it.
- Sanitize documents before sharing anywhere: remove names, identifiers, and internal references (see the sketch after this list).
- Keep work on official devices and accounts; avoid shadow IT and unapproved apps.
- If a task seems to require AI assistance, request an approved alternative through your IT/security cell.
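A minimal sketch of what sanitisation can look like in practice is shown below. The regular expressions here (e-mail addresses and plain 10-digit phone numbers) are assumptions for illustration, not an approved redaction standard; manual review is still required before anything leaves official systems.

```python
# Minimal sanitisation sketch: mask obvious personal identifiers before a
# document is shared. Patterns are illustrative, not a complete redaction rule.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b\d{10}\b")  # plain 10-digit mobile numbers

def sanitize(text: str) -> str:
    """Replace e-mail addresses and phone numbers with redaction markers."""
    text = EMAIL_RE.sub("[REDACTED-EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED-PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact the desk officer at officer@example.gov.in or 9876543210."
    print(sanitize(sample))
```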
Incident reporting
- If any official or personal data was entered into an AI tool, report it immediately to your department's IT/security team.
- Preserve evidence (timestamps, screenshots, text entered) and avoid further interaction with the tool.
- Follow your department's incident response process and coordinate with the nodal officer.
Authoritative references
- Indian Computer Emergency Response Team (CERT-In)
- Ministry of Electronics and Information Technology (MeitY)
Bottom line: Do not use public AI tools for any official, confidential, or personal data. Follow approved channels, keep information sealed within government systems, and report incidents without delay.