China curbs OpenClaw AI in government over data security concerns

China is curbing OpenClaw AI in government over data and network security risks. It is not a blanket ban: some agencies allow case-by-case use while ordering removals from official networks.

Published on: Mar 13, 2026

China tightens use of OpenClaw AI in government over cyber risks

Chinese authorities have restricted OpenClaw AI applications inside government agencies and state-owned enterprises, citing data security risks. Notices circulated across departments warn employees not to install the software on office systems, or on personal phones that connect to government networks.

This is not a public nationwide ban. Some institutions require prior approval, and in several cases employees were told to report existing installs for security checks or remove the app from devices on official networks. The Ministry of Industry and Information Technology and the State-owned Assets Supervision and Administration Commission have not commented.

What changed

State-affiliated banks and government institutions instructed staff not to install OpenClaw on work devices. In a few cases, the restrictions extended to personal devices used on government networks, and even to families of military personnel.

The message is clear: limit exposure from AI agents that connect externally and touch sensitive data. Agencies are moving first with internal controls while leaving room for case-by-case approvals.

What is OpenClaw

OpenClaw is an agentic AI platform that can execute tasks across digital tools. Instead of only answering prompts, it can manage messages, organize files, make reservations, and interact with workplace software, often with persistent access and autonomy.

It gained quick traction after its late-2025 launch, as developers showcased small but useful automations that stack up into real workflow changes.

Why agencies are cautious

AI agents blend three sensitive elements: broad data access, external communications, and exposure to untrusted content. That combination raises clear risks for public networks and regulated data.

  • Data exposure: agents often request wide permissions across email, files, and internal apps.
  • Outbound connections: integrations and API calls can create exfiltration paths.
  • Content risks: prompts, links, and files from the open internet increase the chance of malicious inputs.
  • Supply chain: frequent model updates, plugins, and third-party connectors expand the attack surface.

China's approach aligns with its tighter controls on data security and critical infrastructure. Tools that bridge internal systems with external services will draw closer scrutiny.

How Chinese organizations are responding

Despite the restrictions on official networks, major tech firms such as Tencent and JD.com are testing OpenClaw-style applications. Some local governments are offering incentives to startups building on the platform.

Market interest remains high. AI developer MiniMax, which launched an agent system called MaxClaw, has seen its value climb since listing, though gains cooled after reports of tighter controls.

Practical guidance for government teams

  • Establish an approval gate: define who can request, review, and authorize agent use; document permissible use cases.
  • Segment and sandbox: run agents in isolated environments; restrict access to only the data needed per task.
  • Control egress: enforce outbound network rules, proxy inspection, and data loss prevention on agent traffic.
  • Tighten identity: require service accounts with least-privilege scopes and short-lived tokens; enable MFA for admins.
  • Classify data: block access to classified or sensitive datasets unless a formal risk assessment is completed.
  • Audit everything: log prompts, actions, API calls, and file accesses; review anomalies weekly.
  • Vet plugins and connectors: maintain an approved list; forbid unreviewed third-party integrations.
  • BYOD and MDM: if personal devices connect to government networks, enforce mobile device management and app controls.
  • Start small: pilot with low-risk workflows and synthetic data; expand only after red-team testing.
  • Update policy: align procurement, privacy, and cybersecurity policies to explicitly cover agentic AI behavior.
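Several of the controls above (the approval gate, egress allowlist, data classification check, and audit log) can be combined into a single mediation layer that sits between an agent and the outside world. The sketch below is illustrative only: `AgentPolicy`, `AuditedGateway`, and the hostnames are hypothetical names, not part of any real OpenClaw API.

```python
import fnmatch
import time
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical per-task policy: the approved use case, an egress
    allowlist of host patterns, and data classes the task may never send."""
    approved_use_case: str
    allowed_hosts: list = field(default_factory=list)
    blocked_data_classes: set = field(default_factory=lambda: {"classified", "sensitive"})

class PolicyViolation(Exception):
    pass

class AuditedGateway:
    """Mediates every outbound call an agent makes: checks the egress
    allowlist and the payload's data classification, and writes an
    audit record for each attempt, allowed or not."""

    def __init__(self, policy: AgentPolicy):
        self.policy = policy
        self.audit_log = []  # in production, ship these records to a SIEM

    def _record(self, action: str, target: str, allowed: bool) -> None:
        self.audit_log.append({
            "ts": time.time(),
            "use_case": self.policy.approved_use_case,
            "action": action,
            "target": target,
            "allowed": allowed,
        })

    def request(self, host: str, payload: dict):
        # Egress control: only allowlisted host patterns may be contacted.
        if not any(fnmatch.fnmatch(host, pat) for pat in self.policy.allowed_hosts):
            self._record("http_request", host, False)
            raise PolicyViolation(f"egress to {host} not allowlisted")
        # Data classification gate: refuse payloads labelled as blocked classes.
        if payload.get("data_class") in self.policy.blocked_data_classes:
            self._record("http_request", host, False)
            raise PolicyViolation("payload carries a blocked data class")
        self._record("http_request", host, True)
        return {"status": "sent", "host": host}  # stand-in for a real HTTP call

policy = AgentPolicy(
    approved_use_case="translate-public-notices",
    allowed_hosts=["api.internal.example.gov", "*.approved-vendor.example"],
)
gw = AuditedGateway(policy)
print(gw.request("api.internal.example.gov", {"data_class": "public"}))
try:
    gw.request("unknown-host.example.com", {"data_class": "public"})
except PolicyViolation as e:
    print("blocked:", e)
```

Because every attempt, successful or denied, lands in the audit log, the weekly anomaly review recommended above becomes a query over structured records rather than a hunt through raw traffic.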

What to watch next

Expect tighter oversight frameworks before agents are cleared for broad use in official networks. Global standards and regulations are evolving, and they will influence procurement and deployment decisions.

For reference, see the U.S. NIST AI Risk Management Framework for control design and evaluation, and China's 2023 interim measures on generative AI services, which preview regulatory themes around safety, data, and accountability.

Related resource

For structured guidance on adoption, governance, and risk controls in the public sector, explore AI for Government.

