ISO/IEC 42001:2023 Certification Sets New Standard for Trustworthy AI at Darktrace

Darktrace earned ISO/IEC 42001:2023 certification for responsible AI management, ensuring ethical AI development and strict governance. This sets a global benchmark for trustworthy AI in cybersecurity.

Categorized in: AI News, Management
Published on: Aug 13, 2025

ISO/IEC 42001:2023: A Milestone in AI Standards at Darktrace

Darktrace has achieved ISO/IEC 42001:2023 certification, becoming one of the first cybersecurity companies to earn this recognition for responsible AI management. The certification is rapidly emerging as the global benchmark that distinguishes companies genuinely innovating with AI from those merely using AI as a marketing term. For customers, it offers assurance that AI systems are developed responsibly, governed rigorously, and supported by expert teams focused on security and meaningful innovation.

Darktrace Announces ISO/IEC 42001 Certification

This certification represents a significant step for Darktrace as it continues to enhance its AI governance and compliance frameworks, expand its R&D capabilities, and commit to responsible AI development. It builds on existing certifications such as:

  • ISO/IEC 27001:2022 – Information Security Management System
  • ISO/IEC 27018:2019 – Protection of Personally Identifiable Information in Public Cloud Environments
  • Cyber Essentials – A UK Government-backed cybersecurity certification

Understanding ISO/IEC 42001:2023

Published by ISO and IEC in December 2023, ISO/IEC 42001:2023 provides a framework for organizations to demonstrate the responsible development and use of AI. It requires establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It is the first international management system standard for AI, guiding organizations through AI risk management and addressing challenges such as transparency, accuracy, and misuse while balancing innovation with governance.

Certification demonstrates that an organization has effective processes in place to meet applicable regulatory and legislative requirements and to manage AI-related risks and opportunities.

The ISO/IEC 42001:2023 Certification Process

Darktrace worked with BSI over 11 months to develop and implement a comprehensive AI management system that builds on its existing frameworks. The process included a thorough audit covering AI design, usage, competencies, resources, and HR processes. Darktrace’s unique Self-Learning AI approach, which integrates multiple AI techniques for cybersecurity tasks, fell within the broad scope of this certification.

The certification also includes adherence to all Annex A controls specified in the standard, ensuring a comprehensive governance approach.

Benefits of an AI Management System

With AI advancing quickly, organizations face challenges related to data privacy, security, and bias. An AI Management System helps organizations establish governance aligned with best practices and regulatory standards. It balances innovation with risk management, enabling organizations to maximize AI benefits while maintaining control and accountability.

Key Components of ISO/IEC 42001

The standard emphasizes responsible AI development and use by requiring organizations to:

  • Establish and implement an AI Management System
  • Commit to responsible AI development with measurable objectives
  • Manage, monitor, and adapt to AI risks effectively
  • Commit to continuous improvement of the AI Management System

Its structure is similar to that of other ISO management system standards, such as ISO/IEC 27001:2022. Detailed information on the standard’s structure is provided in the Annex A section below.

What ISO/IEC 42001 Means for Darktrace’s Customers

Darktrace’s certification signals a strong commitment to secure, trustworthy AI in cybersecurity. Customers and partners can trust that Darktrace develops AI ethically, securely, and in compliance with regulations. The certification ensures:

  • Trustworthy AI: AI development is responsible, transparent, and ethical.
  • Innovation with integrity: Cutting-edge AI innovation balanced with governance and trust.
  • Regulatory readiness: Proactive adaptation to emerging compliance and regulatory demands.

This certification is more than a milestone; it affirms Darktrace’s leadership in responsible AI and ongoing innovation.

Why ISO/IEC 42001 Matters for Every AI Vendor

In a market where “AI” can mean different things, ISO/IEC 42001 certification is a clear indicator of authenticity. Certified vendors have demonstrated through independent audits that their AI development, deployment, and management meet rigorous standards.

For customers, this means:

  • Real AI: Skilled teams developing AI systems that meet measurable standards.
  • Data protection: Strong governance over data use, bias, transparency, and risk.
  • Continuous innovation: Vendors committed to advancing AI responsibly.

If a vendor lacks this certification, it’s important to question their AI governance and risk management practices.

Annex A: Structure of ISO/IEC 42001

The standard requires adherence to seven main areas for certification:

  • Context of the organization: Understanding internal and external factors affecting AI management.
  • Leadership: Senior management commitment to AI governance.
  • Planning: Processes to identify risks and opportunities related to AI.
  • Support: Provision of resources, competencies, and communication for AI management.
  • Operation: Processes supporting AI system development and use aligned with policies.
  • Performance evaluation: Regular monitoring and evaluation of the AI Management System.
  • Improvement: Commitment to continuous enhancement of AI management.

Four annexes provide guidance on implementing controls and managing AI risks. Darktrace has fully adopted these controls, focusing heavily on the standard’s Annex A, which covers:

  • Enforcement of AI policies
  • Clear roles and responsibilities
  • Processes for escalating and handling AI concerns
  • Resource availability for AI systems
  • Impact assessments of AI systems
  • End-to-end AI lifecycle management
  • Data treatment and transparency
  • Use case definitions and third-party impact consideration

Responsible AI in Cybersecurity: Darktrace’s Five Guiding Principles

Darktrace has outlined five principles for building secure, trustworthy, and responsible AI in cybersecurity. For organizations interested in structured AI governance and ethical development, these principles provide a valuable framework.

To explore further learning on AI and responsible frameworks, consider visiting Complete AI Training for courses and certifications tailored for management and cybersecurity professionals.