CISOs Face High-Stakes Balancing Act Managing AI Risks and Driving Innovation

CISOs balance managing AI risks with driving innovation by integrating AI governance into existing frameworks. Collaboration and clear policies help secure AI use without stifling progress.

Published on: Jul 18, 2025

CISOs Face the Challenge of Managing AI Risks While Driving Innovation

As AI technologies become integral to business operations, Chief Information Security Officers (CISOs) find themselves balancing two critical demands: managing the risks AI introduces and supporting organizational innovation. The growing use of generative and agentic AI requires a fresh look at governance, risk, and compliance (GRC) programs to ensure regulatory requirements are continuously met.

AI doesn’t fit neatly into predefined categories. Jamie Norton, CISO at the Australian Securities and Investments Commission, highlights that AI is a disruptive force that can’t be easily boxed in. Enterprises must consider how AI broadens and reshapes their risk surface. For example, Check Point’s 2025 AI security report found that 1.25% of prompts sent to generative AI services from enterprise devices pose a high risk of sensitive data leakage.

CISOs must keep up with the pace of innovation while putting guardrails around AI deployments to prevent risks like shadow AI—unapproved AI use within organizations. The goal is to allow innovation without exposing the business to undue risk.

The Role of GRC Frameworks in Managing AI Risks

Governance, risk, and compliance frameworks originated as tools to handle uncertainty, enforce integrity, and ensure compliance. Over time, they’ve evolved beyond checklists to encompass broader risk management strategies. Cybersecurity is now a core element of enterprise risk, and CISOs have played a vital role in aligning security with regulatory demands.

With AI’s rise, integrating AI-specific risks into existing GRC frameworks is essential. However, adoption is still uneven—only 24% of organizations have fully enforced enterprise AI GRC policies, according to the 2025 Lenovo CIO playbook. Meanwhile, AI governance is a top priority across industries.

CISOs must act swiftly to strengthen AI risk management. They face dual pressure: accelerating AI adoption to boost productivity while safeguarding governance, risk, and compliance obligations. Rich Marcus, CISO at AuditBoard, summarizes this tension: organizations want AI-driven gains but cannot afford moves that might jeopardize the business.

Promoting a Collaborative Approach to AI Risk

To manage AI risks effectively, CISOs should foster collaboration across departments. Marcus advises against working in isolation; instead, building trust and shared responsibility is key. This collaborative mindset encourages transparency about AI use and supports risk-aware adoption.

Visibility into AI deployment is critical. Norton stresses the need for security processes that track current AI use and emerging requests. AI manifests across countless products and platforms, yet governance forums often overlook many of these embedded forms of AI.

Practical steps include categorizing AI tools, assessing their risks, and weighing those risks against innovation benefits. Tactical efforts—like secure-by-design practices, change management, shadow AI discovery, and risk-based AI inventories—help manage smaller AI tools. Major AI systems, such as Microsoft Copilot and ChatGPT, require strategic oversight through dedicated AI governance forums.
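As a rough illustration of what a risk-based AI inventory might look like, the Python sketch below tags each AI tool with its data exposure and routes it to either tactical review or a governance forum. The tool names, thresholds, and fields are assumptions for the example, not a prescribed scheme.

```python
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    TACTICAL = "tactical review"        # secure-by-design checks, change management
    STRATEGIC = "AI governance forum"   # dedicated forum for major platforms

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    handles_sensitive_data: bool
    user_count: int

def route_oversight(tool: AIToolRecord) -> Oversight:
    """Route broadly deployed or data-sensitive tools to strategic governance."""
    if tool.handles_sensitive_data or tool.user_count > 500:
        return Oversight.STRATEGIC
    return Oversight.TACTICAL

# Hypothetical inventory entries, for illustration only.
inventory = [
    AIToolRecord("Microsoft Copilot", "Microsoft", True, 4200),
    AIToolRecord("Meeting summarizer plug-in", "ExampleVendor", False, 60),
]

for tool in inventory:
    print(f"{tool.name}: {route_oversight(tool).value}")
```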

The objective is to focus resources on high-impact risks without creating burdensome procedures. Lightweight processes that evaluate AI risk enable organizations to move quickly while maintaining control.

Ultimately, security leaders must embed a security lens within AI governance as part of the broader GRC framework. Security teams should provide risk visibility, enabling senior executives to make informed decisions rather than simply issuing yes-or-no approvals.

Integrating AI Risk Controls into Existing Frameworks

AI risks span data safety, tool misuse, privacy, shadow AI, bias, ethical issues, hallucinations, legal concerns, and model governance. Dan Karpati, VP of AI Technologies at Check Point, recommends treating AI risks as a distinct category within the organization’s risk portfolio by integrating them across four GRC pillars:

  • Enterprise Risk Management: Define AI risk appetite and establish an AI governance committee.
  • Model Risk Management: Monitor model drift and bias, and conduct adversarial testing.
  • Operational Risk Management: Prepare contingency plans for AI failures and train for human oversight.
  • IT Risk Management: Conduct audits, compliance checks, and align AI governance with business goals.
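
One way to make this pillar mapping concrete is a simple lookup from AI risk areas to the pillar that owns them and its control activities. The sketch below is illustrative only; the risk names and pillar keys are assumptions, not a standard taxonomy.

```python
# Illustrative mapping of AI risks to the four GRC pillars described above.
GRC_PILLARS = {
    "enterprise_risk": ["AI risk appetite definition", "AI governance committee charter"],
    "model_risk": ["model drift monitoring", "bias evaluation", "adversarial testing"],
    "operational_risk": ["AI failure contingency plans", "human-oversight training"],
    "it_risk": ["AI audits", "compliance checks", "business alignment reviews"],
}

AI_RISK_TO_PILLAR = {
    "hallucinations": "model_risk",
    "shadow AI": "it_risk",
    "data leakage": "operational_risk",
    "bias and ethics": "model_risk",
    "legal exposure": "enterprise_risk",
}

def controls_for(risk: str) -> list[str]:
    """Return the control activities of the pillar that owns a given AI risk."""
    pillar = AI_RISK_TO_PILLAR.get(risk, "enterprise_risk")
    return GRC_PILLARS[pillar]

print(controls_for("hallucinations"))
```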

CISOs can leverage frameworks like the NIST AI Risk Management Framework, COSO, and COBIT to apply core principles—governance, control, and risk alignment—to AI’s unique traits such as probabilistic outputs and rapid evolution. The emerging ISO/IEC 42001 standard offers structured guidance for AI oversight and assurance.

By adapting these frameworks, organizations can align AI risk appetite with overall risk tolerance and embed governance across business units. Mapping AI risks to tangible business impacts—financial loss, reputational damage, operational disruption, and legal penalties—helps CISOs assess and communicate risk effectively. Tools like FAIR (Factor Analysis of Information Risk) can quantify risk exposure in monetary terms.
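As a simplified FAIR-style illustration (not the full FAIR methodology), annualized loss exposure can be approximated as loss event frequency multiplied by loss magnitude. The figures below are invented for the example.

```python
# Simplified FAIR-style estimate: annualized loss exposure (ALE) =
# loss event frequency (events/year) x loss magnitude ($/event).
# All numbers are hypothetical, for illustration only.

def annualized_loss_exposure(events_per_year: float, loss_per_event: float) -> float:
    return events_per_year * loss_per_event

# Example scenario: prompt-driven data leakage via a generative AI service.
leakage_ale = annualized_loss_exposure(events_per_year=2.0, loss_per_event=150_000)
print(f"Estimated annualized loss exposure: ${leakage_ale:,.0f}")
```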

Monitoring regulatory developments is also crucial. CISOs should track draft regulations and prepare for compliance ahead of ratification. Peer networks and GRC platform alerts can support staying current on emerging threats, risks, and controls.

Establishing Clear Governance Policies for AI

Beyond risk definition and compliance, CISOs must create governance policies that set clear expectations for AI use. Marcus suggests implementing a stoplight system—red, yellow, green—to classify AI tools:

  • Green: Approved for use after review.
  • Yellow: Require further assessment and defined use cases.
  • Red: Prohibited due to insufficient protections.

This approach guides employees, provides safe spaces for exploration, and enables security teams to develop detection and enforcement strategies. It also supports collaboration between innovation and security functions.
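A minimal sketch of how such a stoplight register might be encoded follows; the tool names and classifications are assumptions for illustration, not any organization's actual policy.

```python
# Hypothetical stoplight register for AI tools.
STOPLIGHT = {
    "ChatGPT Enterprise": "green",                    # approved after review
    "Unvetted browser plug-in": "yellow",             # needs assessment and a defined use case
    "Consumer chatbot (no data controls)": "red",     # prohibited
}

def check_tool(name: str) -> str:
    """Return the guidance an employee would see for a given tool."""
    status = STOPLIGHT.get(name, "yellow")  # unknown tools default to further review
    actions = {
        "green": "approved for use",
        "yellow": "submit a use case for security assessment",
        "red": "blocked: insufficient data protections",
    }
    return f"{name}: {actions[status]}"

print(check_tool("ChatGPT Enterprise"))
print(check_tool("New AI notetaker"))
```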

At AuditBoard, standards for AI tool selection focus on protecting proprietary data and retaining ownership of inputs and outputs. Defining guiding principles upfront and educating teams creates a culture of self-enforcement, so risky tools are filtered before reaching security teams.

Marcus’ team uses “model cards” — concise documents detailing AI system architecture, data flows, intended use, and training data — to assess privacy, security, and regulatory compliance. This process identifies risks, informs stakeholders, and shifts conversations from individual cases to strategic AI risk management.
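A rough illustration of the fields such a model card might capture appears below; the field names and example values are assumptions based on the description above, not AuditBoard's actual template.

```python
from dataclasses import dataclass, field

# Illustrative model card structure; fields are assumptions, not a published template.
@dataclass
class ModelCard:
    system_name: str
    architecture: str            # e.g. hosted LLM, fine-tuned classifier
    data_flows: list[str]        # where inputs and outputs travel
    intended_use: str
    training_data_summary: str
    reviewed_for: list[str] = field(
        default_factory=lambda: ["privacy", "security", "regulatory compliance"]
    )

card = ModelCard(
    system_name="Contract summarization assistant",
    architecture="Vendor-hosted large language model",
    data_flows=["user prompt -> vendor API", "summary -> document repository"],
    intended_use="Summarize internal contracts for the legal team",
    training_data_summary="Vendor foundation model; no customer data used for training",
)
print(card.system_name, "-", card.intended_use)
```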

Balancing Innovation and Security in AI Adoption

With AI tools accessible to all, security teams must focus on underlying risks beyond surface-level interfaces. Applying strategic risk analysis, leveraging management frameworks, monitoring compliance, and developing governance policies enable CISOs to guide organizations safely through AI integration.

The goal is not to hinder innovation but to establish guardrails that prevent data leaks and uncontrolled risk exposure. This balance ensures that AI becomes a productive asset rather than a liability.

For executives interested in enhancing their AI knowledge and governance capabilities, Complete AI Training offers a range of courses tailored to business and security leaders.

