Data Visibility and Identity Controls Are Now Critical to Managing AI Security Risks
Seven out of 10 organisations across Asia Pacific cite AI as their top data security risk. The problem is not the technology itself, but how it has been deployed: AI systems are being granted access to enterprise data with far fewer controls than those applied to human users.
This structural vulnerability sits at the heart of what security leaders now face. AI systems authenticate, access data, and make decisions at speeds and scales no human team can match. Yet most organisations lack clear visibility into where their data lives or governance frameworks that match the pace of AI adoption.
The Insider Risk Has Changed Shape
For years, security teams focused on human users as the main insider threat. In 2026, that picture has shifted fundamentally. AI systems now operate with substantial autonomy inside corporate environments, but they do so without the control mechanisms designed for human employees.
Identity infrastructure has become the primary attack surface across the region. Seven out of 10 organisations report that credential theft is their leading attack technique against cloud infrastructure. At the same time, the average organisation manages 89 SaaS applications, creating dozens of integration points where attackers can gain entry.
Encryption and Access Controls Remain Largely Absent
Nearly half of sensitive cloud data in Asia Pacific remains unencrypted, leaving a significant exposure point that requires urgent attention.
Only one third of organisations across the region know where all their data resides. In an environment where AI agents continuously ingest and act on data, that gap becomes critical. Without data classification, organisations cannot make informed decisions about what an AI system should access.
The Investment Gap Is Widening
Only about one third of organisations in the region have dedicated budgets for AI security. The majority are still trying to cover AI risks using security programs designed for a fundamentally different operating model: one built around human users and perimeter defenses.
Singapore and Hong Kong sit above the Asia Pacific average for dedicated AI security budgets, suggesting awareness exists. The challenge is translating that awareness into action fast enough to match the exposure.
Tool Sprawl Creates Its Own Risks
Three quarters of Asia Pacific organisations now run five or more data protection and monitoring tools simultaneously. Only about one third say they have high confidence in understanding the tools they already have.
Adding more monitoring layers without addressing underlying complexity creates coverage gaps and increases the burden on already stretched security teams. Monitoring alone does not constitute governance. The most effective approach combines monitoring with clear data governance frameworks and consolidated tooling.
What Organisations Must Do Before Deploying AI
Data classification must be foundational: an organisation that does not know what data it holds, or how sensitive that data is, cannot decide what an AI system should be allowed to access.
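As a minimal sketch of that first step, a classifier can assign each record the highest sensitivity tier whose detector fires. The tier names and regex detectors below are illustrative assumptions, not a production scheme; real classification would also weigh data ownership and context.

```python
import re

# Hypothetical sensitivity tiers with pattern-based detectors.
# Tiers are checked from most to least sensitive.
DETECTORS = {
    "RESTRICTED": [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")],      # SSN-style IDs
    "CONFIDENTIAL": [re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")],  # email addresses
}

def classify(text: str) -> str:
    """Return the highest sensitivity tier whose detector matches."""
    for tier in ("RESTRICTED", "CONFIDENTIAL"):
        if any(p.search(text) for p in DETECTORS[tier]):
            return tier
    return "PUBLIC"

print(classify("Contact: alice@example.com"))  # CONFIDENTIAL
print(classify("ID 123-45-6789 on file"))      # RESTRICTED
print(classify("Quarterly roadmap overview"))  # PUBLIC
```

A tagging pass like this, run over each store before an AI integration is approved, gives the access decisions that follow something concrete to key on.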
Identity governance comes next. AI systems need access controls and audit trails just as human users do. Least-privilege access, which grants any user or system only the rights it strictly needs, must apply to AI systems as rigorously as it applies to employees.
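The least-privilege principle above can be sketched as a deny-by-default identity for an AI agent, where every access attempt is checked against explicit grants and logged. The names here (`ServiceIdentity`, the `action:resource` grant format) are invented for illustration; a real deployment would use the organisation's IAM platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ServiceIdentity:
    """Hypothetical identity for an AI agent: explicit grants, full audit trail."""
    name: str
    grants: set = field(default_factory=set)   # e.g. "read:tickets"
    audit: list = field(default_factory=list)  # (request, timestamp, allowed)

    def request(self, action: str, resource: str) -> bool:
        allowed = f"{action}:{resource}" in self.grants  # deny by default
        self.audit.append(
            (f"{action}:{resource}",
             datetime.now(timezone.utc).isoformat(), allowed))
        return allowed

agent = ServiceIdentity("support-bot", grants={"read:tickets"})
print(agent.request("read", "tickets"))  # True: explicitly granted
print(agent.request("read", "payroll"))  # False: never granted, and logged
```

The point of the sketch is the shape, not the mechanism: the agent holds no standing broad access, and the audit list answers "what did this system touch, and when" after the fact.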
Encryption must be consistent across all environments where AI operates, including cloud and SaaS platforms. Encryption should be treated as a baseline, not an optional layer.
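One practical way to enforce that baseline is a recurring audit over the data-store inventory that flags anything unencrypted or lacking a managed key. The inventory fields below (`encrypted_at_rest`, `kms_key`) are assumed names for illustration; the real source would be cloud provider APIs or a CMDB.

```python
# Hypothetical inventory: one dict per data store.
stores = [
    {"name": "crm-db",     "encrypted_at_rest": True,  "kms_key": "key-1"},
    {"name": "ml-bucket",  "encrypted_at_rest": False, "kms_key": None},
    {"name": "hr-backups", "encrypted_at_rest": True,  "kms_key": None},
]

def audit(stores):
    """Return a finding for every store that falls short of the baseline."""
    findings = []
    for s in stores:
        if not s["encrypted_at_rest"]:
            findings.append(f"{s['name']}: no encryption at rest")
        elif not s["kms_key"]:
            findings.append(f"{s['name']}: encrypted but no managed key")
    return findings

for finding in audit(stores):
    print(finding)
```

Run before any AI system is connected to a store, a check like this turns "encryption as a baseline" from a policy statement into a gate.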
Organisations that get this right will find that strong governance accelerates AI adoption by building internal confidence to move quickly. Those that skip it will eventually face an incident that forces the conversation under far less favourable conditions.
Agentic AI Will Make Attacks Faster and Harder to Stop
Cyber criminals adopting AI are doing so precisely because it allows them to scale operations against targets that have not yet adjusted their defenses. Agentic AI introduces a new dimension: attackers can deploy AI agents that operate continuously, adapt based on what they encounter, and act across multiple systems simultaneously.
For larger enterprises, the response requires investment in AI-aware identity security, encryption infrastructure, and data governance. For smaller enterprises with resource constraints, the approach must focus on the highest-impact fundamentals: understanding where sensitive data lives, applying multi-factor authentication consistently, and choosing cloud providers that offer strong encryption and key management rather than requiring organisations to build that capability from scratch.
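To make the multi-factor authentication fundamental concrete, the sketch below implements the standard time-based one-time password scheme (TOTP, RFC 6238, built on HOTP from RFC 4226) using only the standard library. This shows the mechanism most authenticator apps use; in practice an organisation would rely on its identity provider rather than hand-rolling this.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter, then truncate."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key: bytes, t: float = None, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from the clock."""
    if t is None:
        t = time.time()
    return hotp(key, int(t // step))

# RFC 4226 test vectors for the shared secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the code depends only on a shared secret and the clock, a stolen password alone is no longer enough, which is exactly why consistent MFA blunts the credential-theft technique dominating attacks in the region.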
The threat is not going to wait for security programs to mature. Starting with clear data visibility and identity controls gives organisations a foundation effective against a broad range of attack types, including the AI-powered ones becoming increasingly common across the region.
For management teams looking to strengthen their understanding of these risks, AI learning resources for cybersecurity professionals provide practical frameworks for governance and threat detection in AI environments.