Security Leaders: The New Roles in AI Leadership
AI is changing how organizations operate, and security leaders must evolve. They need to become cheerleaders for AI, risk experts, data stewards, teachers, and researchers to guide their companies toward safer and smarter AI adoption.
When ChatGPT launched, many security leaders saw AI as just another disruptive technology, like the earlier arrival of iPods or SaaS apps in the workplace. They believed 80% of AI security needs were already covered by existing cybersecurity foundations: asset inventory, data security, identity governance, and vulnerability management.
By 2025, this view had proved partially correct. Core security practices remain critical, but AI brings unique challenges. The attack surface expands to include third-party partners and deep software supply chains, creating blind spots. Because AI often arrives through open-source components and APIs, it can slip into the organization unnoticed as hidden or "shadow" AI. And AI innovation moves so quickly that security teams struggle to keep pace.
Beyond the technical challenges, many AI projects simply fail. Research shows 42% of businesses abandoned most of their AI initiatives in 2025, up from 17% in 2024, and nearly half halted AI proofs of concept before they reached production. Common causes include high costs, poor data quality, weak governance, talent shortages, and scaling difficulties.
Five Priorities for Security Leaders in AI
1. Start with Strong Governance
Governance isn’t just about technology or security. It begins with aligning business and tech teams on AI’s role in achieving organizational goals. Security leaders should collaborate with CIOs to educate business units—legal, finance, and others—and establish an AI framework supporting both business needs and technical capabilities.
This framework should cover the full AI lifecycle: from conception to deployment. Include ethical guidelines, acceptable use policies, transparency, compliance with regulations, and clear success metrics. Reviewing existing models like the NIST AI Risk Management Framework, ISO/IEC 42001:2023, UNESCO AI ethics recommendations, and RockCyber’s RISE and CARE frameworks can provide valuable guidance. Organizations may need to customize a “best of” approach that fits their environment.
2. Maintain a Continuous View of AI Risks
Start with an AI asset inventory, software bills of materials, vulnerability management, and an AI risk register. Go beyond the basics by understanding AI-specific threats such as model poisoning, data inference attacks, and prompt injection. Security teams and threat analysts must stay current on emerging AI attack methods; resources like MITRE ATLAS help track them.
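To make the risk register concrete, the sketch below models a single entry in Python. The schema, field names, and likelihood-times-impact scoring are illustrative assumptions, not a standard; adapt them to whatever risk methodology your organization already uses.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical schema for one AI risk register entry. The threat values
# mirror the AI-specific risks named above: model poisoning, data
# inference attacks, and prompt injection.
@dataclass
class AIRiskEntry:
    asset: str                   # model, dataset, or AI-enabled service
    owner: str                   # accountable business or technical owner
    threat: str                  # e.g. "prompt injection", "model poisoning"
    likelihood: int              # 1 (rare) to 5 (almost certain)
    impact: int                  # 1 (negligible) to 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring; swap in your own formula.
        return self.likelihood * self.impact

# Example: tracking prompt injection risk on a customer-facing chatbot.
entry = AIRiskEntry(
    asset="support-chatbot",
    owner="platform engineering",
    threat="prompt injection via user-supplied text",
    likelihood=4,
    impact=3,
    mitigations=["input filtering", "output moderation",
                 "least-privilege tool access"],
)
print(f"{entry.asset}: risk score {entry.risk_score}")
```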
Since AI often involves third parties, audits of their data and security controls are essential, and supply chain security must be monitored just as closely. Staying aware of AI regulations is equally critical. The EU AI Act is comprehensive, focusing on safety, transparency, fairness, and environmental concerns, while laws like the Colorado Artificial Intelligence Act are still evolving. Expect more state, federal, and industry regulations in this space.
3. Expand the Definition of Data Integrity
Data integrity has traditionally meant preventing unauthorized changes and ensuring consistency. For AI, it also means ensuring the accuracy and fairness of the AI models themselves. Consider examples where biased training data caused problems: Amazon’s recruiting tool favored male candidates due to male-dominated training data, and a UK passport photo app discriminated against darker skin tones because of skewed data.
Security leaders must include AI model veracity in their governance responsibilities to avoid such pitfalls.
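One way to make model fairness measurable is the "four-fifths rule" from US employment guidance: the selection rate for any group should be at least 80% of the most-favored group's rate. The sketch below computes that ratio from model decisions; the data and the flagging threshold are illustrative only.

```python
from collections import Counter

def disparate_impact(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group, divided by the highest group's rate.

    outcomes: (group, was_selected) pairs taken from model decisions.
    Ratios below 0.8 (the four-fifths rule) warrant investigation.
    """
    totals: Counter = Counter()
    selected: Counter = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {group: selected[group] / totals[group] for group in totals}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Illustrative data: a screening model's accept/reject decisions by group.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)
print(disparate_impact(decisions))  # {'A': 1.0, 'B': 0.5} -> group B flagged
```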
4. Build AI Literacy at All Levels
Everyone in the organization will interact with AI in some form. Starting with the security team, provide training on AI fundamentals. Update secure software development lifecycles to include AI threat modeling, data handling, and API security.
Developers need education on AI-specific best practices, including resources like the OWASP Top 10 for Large Language Models, Google’s Secure AI Framework (SAIF), and Cloud Security Alliance guidance. End users require training on acceptable use, data privacy, misinformation, and identifying deepfakes. Consider human risk management solutions to customize training by role and risk level.
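To ground that developer training, two risks from the OWASP Top 10 for Large Language Models, prompt injection and insecure output handling, reduce to a pair of habits: screen untrusted input, and treat model output as data rather than code. A minimal sketch follows; the call_llm function is a placeholder for whatever model API is actually in use, and the denylist pattern is an illustrative example, not a complete control.

```python
import html
import re

# Weak denylist signal for obvious injection phrasing; one layer of a
# defense-in-depth approach, never a guarantee on its own.
INJECTION_PATTERNS = re.compile(
    r"ignore (all )?previous instructions|reveal the system prompt",
    re.IGNORECASE,
)

def call_llm(prompt: str) -> str:
    """Placeholder for the real model API used in your stack."""
    raise NotImplementedError

def answer_user(question: str) -> str:
    # Prompt injection: screen user input before it reaches the model,
    # and keep the model's privileges as narrow as possible.
    if INJECTION_PATTERNS.search(question):
        return "Your question could not be processed."

    reply = call_llm(f"Answer the customer's question:\n{question}")

    # Insecure output handling: treat the reply as untrusted data.
    # Escape it before it reaches a browser, shell, or SQL statement.
    return html.escape(reply)
```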
5. Stay Cautiously Optimistic About AI Security Technology
Current AI security tools act more like driver-assist features than fully autonomous systems. Security teams should identify repetitive tasks where AI could help, such as alert triage, threat hunting, risk scoring, and report generation, and explore emerging solutions in those areas.
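As an illustration of the driver-assist framing, a triage pipeline might let a model suggest a priority while an analyst keeps the final say. The scoring function below is a stand-in for whatever model or vendor service is under evaluation; the heuristic, field names, and confidence threshold are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str

def model_priority(alert: Alert) -> tuple[str, float]:
    """Stand-in for an AI scoring call: returns (priority, confidence)."""
    # A real deployment would call a model here; this trivial keyword
    # heuristic just makes the sketch runnable.
    urgent = any(word in alert.description.lower()
                 for word in ("ransomware", "exfiltration"))
    return ("high", 0.9) if urgent else ("low", 0.6)

def triage(alerts: list[Alert], confidence_floor: float = 0.8) -> None:
    for alert in alerts:
        priority, confidence = model_priority(alert)
        if confidence < confidence_floor:
            # Driver assist, not autopilot: low-confidence calls go to a human.
            priority = "analyst-review"
        print(f"[{priority}] {alert.source}: {alert.description}")

triage([
    Alert("EDR", "possible ransomware staging on host-42"),
    Alert("IDS", "port scan from a known scanner range"),
])
```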
Regularly meet with security tech vendors to discuss specific needs and how AI might optimize their products. Many AI features are still in early stages and can be costly to develop and maintain. Expect some startups to fail and others to be acquired. Approach new AI security products with careful evaluation.
Looking Ahead
Currently, about 70% of security leaders report to CIOs. This is likely to change as AI becomes more central. Expect more CISOs to report directly to CEOs, especially those who lead AI governance across business and technology. Taking charge of AI strategy now could position security leaders for future advancement.