AI Security Threats Are Increasing While Governance Falls Behind
AI adoption continues to surge across industries, with 78% of organizations implementing AI by early 2025, up from 55% in mid-2023. Spending on AI technologies is also set to rise, with 92% of companies planning to increase their budgets over the next three years. Some organizations are dedicating nearly 20% of their tech budgets to AI this year.
Security teams have benefited from AI's ability to detect threats faster, automate repetitive tasks, and reduce false alerts. However, the lack of proper governance creates risks. Only 22% of companies have clear AI policies and employee training in place, leaving many vulnerable to misuse and security gaps.
Corporate Budgets Reflect Growing AI Security Concerns
Recent reports reveal that business leaders are prioritizing AI security but not always aligning budgets accordingly. For example, 67% plan to allocate funds for AI-specific cyber and data protections, and 52% emphasize risk and compliance efforts. Yet only 10% rank AI security as their top security expense, a mismatch between concern and investment.
Key risks identified include rapid changes in AI ecosystems, data integrity issues, and trust challenges. Privacy worries have spiked from 43% to 69% in just two quarters, signaling heightened sensitivity around AI data handling.
New Malware Targets AI Security Tools
Security researchers have uncovered the first known malware sample that attempts to evade AI-powered defenses through prompt injection. The prototype, named "Skynet," tries to trick AI-based scanners into clearing malicious code by embedding natural-language commands instructing the tool to declare "NO MALWARE DETECTED."
While advanced AI models still detect this threat, its emergence signals attackers are adapting to bypass AI defenses. This underscores the need for layered security measures instead of relying solely on AI detection.
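A minimal sketch of that layered idea, assuming a hypothetical `llm_classify` callable standing in for whatever model-based detector is in use: the sample's text is screened for injection-style strings before the model ever sees it, and a hit escalates the file rather than trusting the model's answer. The pattern list is illustrative, not a production ruleset.

```python
import re

# Phrases typical of prompt-injection attempts against LLM-based scanners.
# Illustrative only; a production deployment would rely on maintained
# detection rules rather than this short hand-written list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"no malware detected",
    r"you are now acting as",
    r"disregard .{0,40}(rules|instructions)",
]

def find_injection_strings(sample_text: str) -> list[str]:
    """Return the injection-style patterns matched in a sample's strings."""
    return [
        pattern
        for pattern in INJECTION_PATTERNS
        if re.search(pattern, sample_text, re.IGNORECASE)
    ]

def scan_with_llm(sample_text: str, llm_classify) -> str:
    """Wrap a model-based detector so embedded text cannot clear a file.

    `llm_classify` is a stand-in for whatever model call is in use; the
    sample should be passed to it as quoted data, never as instructions.
    """
    if find_injection_strings(sample_text):
        # Injection phrases inside a binary's strings are themselves a
        # strong signal: escalate rather than trust the model's verdict.
        return "SUSPICIOUS: embedded prompt-injection strings"
    return llm_classify(sample_text)
```

The key design choice is that the injection check runs outside the model: even if the embedded commands succeed in swaying the classifier, the wrapper has already flagged the sample.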
Managing Nonhuman Identities Is a Growing Challenge
Nonhuman identities (NHIs) such as service accounts, API keys, and AI agents now outnumber human users 50 to 1 in many organizations. Nearly 40% of these identities lack clear ownership, creating potential security blind spots.
AI agents complicate identity management further by acting autonomously on behalf of users. While most companies feel confident defending against attacks targeting human identities, fewer than 60% feel equipped to handle threats involving NHIs.
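As a rough illustration of what closing that gap can look like, the sketch below flags nonhuman identities that have no named owner or have gone unused for a long stretch. The `Identity` records and the 90-day threshold are hypothetical; a real inventory would be pulled from identity providers, cloud IAM, and secrets-manager exports.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Identity:
    name: str
    kind: str                # "human", "service-account", "api-key", "ai-agent"
    owner: Optional[str]     # accountable team or person, if any
    days_since_last_use: int

# Hypothetical inventory; in practice this comes from identity-provider,
# cloud IAM, and secrets-manager exports.
inventory = [
    Identity("ci-deploy-bot", "service-account", "platform-team", 1),
    Identity("legacy-etl-key", "api-key", None, 210),
    Identity("support-agent-llm", "ai-agent", None, 3),
]

def audit_nhis(identities: list[Identity]) -> list[Identity]:
    """Flag nonhuman identities with no named owner or long inactivity."""
    return [
        i for i in identities
        if i.kind != "human" and (i.owner is None or i.days_since_last_use > 90)
    ]

for ident in audit_nhis(inventory):
    print(f"REVIEW: {ident.name} ({ident.kind}), owner={ident.owner}")
```

Even a simple report like this turns the "40% lack clear ownership" statistic into a concrete review queue.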
AI-Generated Misinformation Intensifies in Global Conflicts
Conflicts involving Israel, Iran, and the U.S. have seen a surge in AI-generated misinformation. Fake images and videos falsely depicting military events have circulated widely on social media. Experts warn that detecting such deepfakes is becoming harder as AI tools improve, increasing the risk of misinformation impacting public perception and decision-making.
Steps to Manage AI Security Risks
- Develop clear AI security policies: Define acceptable use and security requirements for AI deployments.
- Train employees: Ensure teams understand how to use AI safely and recognize potential threats.
- Adopt layered defenses: Combine AI detection with traditional security controls to reduce risk (see the sketch after this list).
- Manage nonhuman identities: Assign ownership and monitor AI agents and service accounts rigorously.
- Stay vigilant against misinformation: Monitor social channels and verify sources to mitigate AI-driven fake content.
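To make the layered-defenses point concrete, here is a minimal sketch in which a traditional hash lookup runs alongside a model-based verdict and the most severe signal wins, so a prompt-injected "clean" answer cannot override a hash match. The `ai_classify` callable and the threat-feed entry are placeholders, not a real detector or feed.

```python
import hashlib

# Placeholder threat-feed entry; a real deployment would load known-bad
# hashes from an actual intelligence feed.
KNOWN_BAD_SHA256 = {"deadbeef" * 8}

def layered_verdict(sample: bytes, ai_classify) -> str:
    """Combine a traditional hash lookup with a model-based verdict.

    `ai_classify` is a stand-in for an ML/LLM detector returning
    "malicious" or "clean". The most severe signal wins, so a
    prompt-injected "clean" answer cannot override a hash match.
    """
    digest = hashlib.sha256(sample).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        return "BLOCK: matched known-bad hash"
    if ai_classify(sample) == "malicious":
        return "BLOCK: AI detector flagged sample"
    return "ALLOW: no layer raised an alarm"
```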
For managers looking to strengthen their AI security posture, learning practical strategies is essential. Explore courses on AI security and governance to build skills that align with emerging threats and compliance needs.