AI risks and cyber fraud are outpacing ransomware on the C-suite agenda
Ransomware isn't gone, but it's losing headline status in boardrooms. The latest World Economic Forum Global Cybersecurity Outlook shows executives are more worried about AI-driven risks, fraud, phishing, supply chain disruption, and software vulnerability exploitation.
Across 804 leaders surveyed in 92 countries, 87% said AI risks increased over the past year. Fraud and phishing rose for 77% of respondents, supply chain disruption for 66%, and vulnerability exploitation for 58%. Ransomware still matters, but only 54% saw it rising, with 39% neutral and 7% seeing it decline.
What's driving the shift
WEF leadership points to a simple pattern: AI progress, geopolitical friction, and complex supply networks are accelerating cyber risk faster than traditional defenses can adapt. Speed and scale favor attackers, and quiet failures (like data leakage) can be just as damaging as loud extortion events.
The AI risk picture, by the numbers
Board-level concerns around AI are maturing. The top fear has moved from "what attackers can do with AI" to "what our data might expose." In this year's results, 34% put data exposure first (up from 22%), while concern over adversarial capabilities dropped to 29% (from 47%). The concerns leaders cited span:
- Data exposure and leakage
- Adversarial capabilities (offensive use of AI)
- Technical security of AI systems
- Governance complexity and oversight
- Legal risks: IP, compliance, and liability
- Software supply chain and code development concerns
The takeaway: attention is shifting from splashy offensive tricks to quieter, enterprise-wide weaknesses in governance, data control, and engineering hygiene.
Fraud climbs; ransomware stays dangerous
Fraud is the daily tax on digital business: 77% saw increases in cyber-enabled fraud and phishing, and 72% said they or someone in their network was hit, mainly by phishing, payment fraud, and identity theft. Ransomware remains a top operational risk for CISOs, even if CEOs weigh broader impacts across fraud, brand, and revenue.
What to do next: practical moves by role
Executives and boards
- Treat AI as an enterprise risk, not just an IT issue. Assign a single executive owner for AI risk with clear metrics tied to loss scenarios (data leakage, fraud losses, downtime).
- Fund resilience: backup integrity audits, incident response rehearsals, and faster recovery targets that include AI-enabled threats.
- Set policy for data exposure: define what data can train models, who approves use, and how you'll detect and respond to leaks.
CISOs and security leaders
- Stand up an AI security program: inventory models and data flows, threat model misuse/abuse, run red-team exercises, and enforce guardrails for data handling.
- Raise the fraud bar: phishing-resistant MFA, strict payment verification, least-privilege access, and continuous identity monitoring.
- Tighten third-party risk: SBOMs, dependency checks, access reviews, and contractual security requirements for vendors that touch your data or models (a minimal SBOM check is sketched after this list).
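For the SBOM piece, even a small automated gate in CI gives the policy teeth. Below is a minimal sketch in Python, assuming a CycloneDX-style SBOM in JSON and a hypothetical approved_components.json allow-list; a real pipeline would also pull in vulnerability and license data.

```python
#!/usr/bin/env python3
"""Minimal SBOM gate for CI: flag components that are not on an approved list.

Assumes a CycloneDX-style SBOM (JSON) and a hypothetical approved_components.json
mapping package names to allowed versions. Illustrative only; a real pipeline
would also check vulnerability feeds and licenses.
"""
import json
import sys


def load_json(path):
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)


def check_sbom(sbom_path, approved_path):
    sbom = load_json(sbom_path)
    approved = load_json(approved_path)  # e.g. {"requests": ["2.32.3"], ...}
    findings = []
    for comp in sbom.get("components", []):
        name, version = comp.get("name"), comp.get("version")
        allowed = approved.get(name)
        if allowed is None:
            findings.append(f"unapproved component: {name}=={version}")
        elif version not in allowed:
            findings.append(f"version drift: {name}=={version} "
                            f"(allowed: {', '.join(allowed)})")
    return findings


if __name__ == "__main__":
    problems = check_sbom("sbom.json", "approved_components.json")
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # nonzero exit fails the CI job
```

Failing the build on unapproved or drifted components keeps vendor and dependency changes visible instead of silent.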
IT and engineering teams
- Secure SDLC for AI: dependency hygiene, secrets management, and model/package provenance checks, automated in CI/CD (a provenance check is sketched after this list).
- Protect AI endpoints: test for prompt injection and data exfiltration, isolate runtime environments, and enforce rate limits and detailed logging (a rate-limit-and-logging sketch also follows this list).
- Patch velocity and coverage: prioritize internet-facing services and high-impact vulns tied to your crown-jewel data.
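On the provenance point, one lightweight option is to hash every model and package artifact a build pulls in and compare against a manifest pinned in the repo. The sketch below assumes a hypothetical artifact_manifest.json mapping file paths to SHA-256 digests; artifact signing (for example with Sigstore) is the stronger control where you can adopt it.

```python
#!/usr/bin/env python3
"""Verify model/package artifacts against pinned SHA-256 digests before deployment.

Sketch only: artifact_manifest.json ({"path": "sha256-hex", ...}) is a
hypothetical convention for this example.
"""
import hashlib
import json
import sys


def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(manifest_path="artifact_manifest.json"):
    with open(manifest_path, encoding="utf-8") as fh:
        manifest = json.load(fh)
    mismatches = []
    for path, expected in manifest.items():
        try:
            actual = sha256_of(path)
        except FileNotFoundError:
            mismatches.append(f"missing artifact: {path}")
            continue
        if actual != expected:
            mismatches.append(f"digest mismatch for {path}")
    return mismatches


if __name__ == "__main__":
    problems = verify()
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # fail the CI stage on any mismatch
```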
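For AI endpoints, rate limiting and structured logging are the easiest guardrails to stand up first. The sketch below wraps a hypothetical call_model function with a per-client sliding-window limit and request/response logging; prompt-injection testing and output filtering would sit alongside this, not inside it.

```python
"""Runtime guardrails for an AI endpoint: per-client rate limiting plus
structured logging of every call. `call_model` is a hypothetical stand-in
for whatever inference backend you run."""
import logging
import time
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-endpoint")

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30
_recent = defaultdict(deque)  # client_id -> timestamps of recent requests


def allow(client_id: str) -> bool:
    """Sliding-window rate limit per client."""
    now = time.monotonic()
    window = _recent[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop requests that fell out of the window
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True


def call_model(prompt: str) -> str:
    # Hypothetical inference call; replace with your backend.
    return f"[model output for {len(prompt)} chars of input]"


def handle_request(client_id: str, prompt: str) -> str:
    if not allow(client_id):
        log.warning("rate limit exceeded client=%s", client_id)
        raise RuntimeError("rate limit exceeded")
    log.info("request client=%s prompt_chars=%d", client_id, len(prompt))
    response = call_model(prompt)
    log.info("response client=%s response_chars=%d", client_id, len(response))
    return response


if __name__ == "__main__":
    print(handle_request("demo-client", "Summarize our Q3 incident trends."))
```

Logging lengths rather than raw prompts gives incident responders a usable trail without turning the log itself into a data-leakage risk.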
Everyone (yes, everyone)
- Anti-phishing muscle memory: regular simulations, short refreshers, and easy ways to report suspicious messages.
- Payment and identity checks: out-of-band confirmation for wire changes, vendor bank updates, and unusual access requests.
Governance and collaboration matter
Leaders are doubling down on structured processes to manage AI risk. As Singapore's minister Josephine Teo notes in the WEF report, AI can help defenders detect and respond, but it can just as easily increase exposure through misuse or malfunction. The answer isn't fear; it's practical governance, shared standards, and cross-border cooperation.
Over the next 12 months, prioritize data controls, fraud prevention, supply chain assurance, and the security of AI systems. Ransomware will keep swinging, but the quiet losses from AI misuse and fraud will cut deeper if left unchecked.
Read the WEF Global Cybersecurity Outlook