OECD's New Responsible AI Due Diligence Guidance: What Multinationals Need to Know

The OECD's new AI due diligence guidance gives a six-step playbook for anyone building, running, or using AI. It helps you cut risk, reduce harm, and align with existing rules.


OECD's Responsible AI Guidance: A practical playbook for IT and development teams

Responsible AI isn't a legal footnote anymore. It's an engineering constraint, a product requirement, and a business risk. The OECD's new Due Diligence Guidance for Responsible AI gives you a clear framework to build, deploy, and use AI without creating avoidable harm or needless liability.

If you touch AI systems at any point, from data pipelines to model ops to production integrations, this applies to you.

What the OECD released (and why it matters)

The OECD created a due diligence guide to help enterprises apply the OECD AI Principles and the Guidelines for Multinational Enterprises to real AI work. It connects human rights, safety, transparency and accountability to day-to-day engineering and product decisions.

The guidance is meant to work with existing frameworks and regulations and to support current and emerging due diligence laws. It's relevant across sectors, not just "AI companies."

Key ideas in plain terms

  • AI system: A machine-based system that infers how to generate outputs (predictions, content, recommendations or decisions) from inputs to influence digital or physical environments. Levels of autonomy and adaptiveness vary.
  • AI lifecycle: Plan/design → data collection/processing → model build/adaptation → test/evaluate/validate → deploy → operate/monitor → retire. Iterative, not strictly linear.
  • Who is this for: Any enterprise that supplies inputs to AI, builds/deploys AI, or uses AI in operations, products or services.

The three groups (find yourself here)

Group 1: Upstream inputs

You provide data, code, models, research, governance processes, training, funding, infrastructure or hardware support. You enable others to build AI. Note: the hardware raw-materials supply chain is covered in separate OECD guidance.

Group 2: Builders and operators

You design, develop, deploy and run AI systems. That includes model training, fine-tuning, evaluation, red teaming, deployment and on-call operations.

Group 3: Downstream users

You integrate AI into products, services, or internal workflows, whether or not you're an "AI company." You should evaluate AI systems you use as part of your broader due diligence across operations and business relationships, prioritizing the most significant risks.

The six-step due diligence playbook (what to actually do)

Step 1: Embed responsible business conduct (RBC) in policies and systems

  • Publish clear policies aligned with the OECD AI Principles and MNE Guidelines. Cover how you'll run due diligence across your operations and business relationships.
  • Wire this into governance: executive oversight, cross-functional committees, risk-acceptance thresholds, approval gates (sketched after this list) and incident response.
  • Push expectations through contracts, SOWs, vendor onboarding and customer disclosures.
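
To make the governance bullet concrete, here is a minimal sketch of risk-acceptance thresholds expressed as checkable approval gates. The stages, roles, and numeric thresholds are illustrative assumptions, not values the OECD prescribes.

```python
# A minimal sketch of machine-checkable approval gates for Step 1.
# All names, roles, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovalGate:
    stage: str            # lifecycle stage the gate protects
    max_risk_score: int   # risk scores above this require escalation
    approver_role: str    # who can sign off

# Hypothetical gates tied to lifecycle stages named in the guidance.
GATES = [
    ApprovalGate(stage="design",      max_risk_score=15, approver_role="product-lead"),
    ApprovalGate(stage="pre-deploy",  max_risk_score=10, approver_role="ai-risk-committee"),
    ApprovalGate(stage="post-deploy", max_risk_score=5,  approver_role="executive-sponsor"),
]

def requires_escalation(stage: str, risk_score: int) -> bool:
    """Return True when a change at `stage` exceeds the accepted risk threshold."""
    gate = next(g for g in GATES if g.stage == stage)
    return risk_score > gate.max_risk_score

if __name__ == "__main__":
    print(requires_escalation("pre-deploy", 12))  # True: must go to the risk committee
```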

Step 2: Identify and assess actual and potential adverse impacts

  • Scope where AI risks can show up: data sourcing, labeling, model training, evaluation gaps, fine-tuning, deployment context, end-user behavior and repurposing.
  • Run deeper assessments on the highest-risk areas. Look at data risks (rights, privacy, representativeness), model risks (bias, security, reliability), human-AI interaction risks (misuse, overreliance, deceptive UX) and dual-use concerns.
  • Decide your involvement: Did you cause it, contribute to it or are you directly linked through a partner, supplier or customer?
  • Prioritize by severity and likelihood. Address the most serious issues first, then work down the list (a prioritization sketch follows).
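
A minimal sketch of that prioritization, assuming a simple severity-times-likelihood score; the scale and the example register entries are illustrative assumptions.

```python
# A minimal sketch of Step 2 prioritization: rank risks by severity x likelihood.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: int      # 1 (minor) .. 5 (severe, hard to remediate)
    likelihood: int    # 1 (rare) .. 5 (near certain)
    involvement: str   # "caused", "contributed", or "directly_linked"

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

register = [
    Risk("Training data lacks usage rights", severity=4, likelihood=3, involvement="contributed"),
    Risk("Biased outcomes in hiring recommendations", severity=5, likelihood=2, involvement="caused"),
    Risk("Vendor model repurposed for surveillance", severity=5, likelihood=1, involvement="directly_linked"),
]

# Address the most serious issues first, as the guidance recommends.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.involvement:<15} {risk.name}")
```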

Step 3: Cease, prevent and mitigate

  • If you're causing or contributing to harm, stop the activity and prevent recurrence. Plan mitigations for foreseeable risks.
  • For harms directly linked through partners or customers, choose a response: continue while mitigating, suspend while mitigating or disengage if mitigation fails or the impact is too severe. Engage stakeholders and consider downstream effects when disengaging.
  • Practical moves: improve dataset documentation and rights checks, tighten evals, add usage controls and rate limits (sketched after this list), and implement model and data lineage, security hardening, abuse monitoring and safety filters.
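
As one concrete mitigation from the list above, here is a minimal sliding-window rate limiter. The window, limit, and in-process storage are illustrative assumptions; a production system would persist counters in shared storage and feed rejections into abuse monitoring.

```python
# A minimal sketch of a Step 3 mitigation: per-user rate limiting on an AI endpoint.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative window
MAX_REQUESTS = 20     # illustrative per-user limit

_requests: dict[str, deque] = defaultdict(deque)

def allow_request(user_id: str) -> bool:
    """Sliding-window limiter: reject bursts that suggest abuse or scraping."""
    now = time.monotonic()
    window = _requests[user_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()          # drop timestamps outside the window
    if len(window) >= MAX_REQUESTS:
        return False              # in practice, log and surface to abuse monitoring
    window.append(now)
    return True
```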

Step 4: Track implementation and results

  • Define KPIs: incident counts and severity, model performance drift, bias metrics, false positive/negative trends, red-team findings closed, time-to-mitigate (sketched after this list), supplier conformance.
  • Operationalize: dashboards, audit trails, decision logs, risk registers and periodic reviews tied to release cycles.
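
As a sketch of one such KPI, here is a minimal time-to-mitigate calculation over a hypothetical incident log. The field names and sample records are assumptions.

```python
# A minimal sketch of a Step 4 KPI: median time-to-mitigate from an incident log.
from datetime import datetime
from statistics import median

incidents = [
    {"opened": "2026-01-05T09:00", "mitigated": "2026-01-07T17:30", "severity": "high"},
    {"opened": "2026-01-12T14:00", "mitigated": "2026-01-13T10:00", "severity": "medium"},
    {"opened": "2026-02-02T08:15", "mitigated": "2026-02-02T20:45", "severity": "high"},
]

def hours_to_mitigate(incident: dict) -> float:
    opened = datetime.fromisoformat(incident["opened"])
    mitigated = datetime.fromisoformat(incident["mitigated"])
    return (mitigated - opened).total_seconds() / 3600

print(f"median time-to-mitigate: {median(map(hours_to_mitigate, incidents)):.1f} h")
```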

Step 5: Communicate actions

  • Share what you're doing and why: policies, high-level risk assessments, mitigations and outcomes. Match the format to the audience: engineering blogs, model cards, transparency notes (one is sketched after this list), trust center pages, customer briefings.
  • Be specific enough to build trust without disclosing sensitive detail that increases risk.
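
A minimal sketch of a transparency note rendered from structured fields; the schema is an assumption loosely modeled on common model-card practice, not an OECD template.

```python
# A minimal sketch of a Step 5 transparency note built from structured fields.
MODEL_CARD = {
    "system": "support-ticket-triage",            # hypothetical system name
    "intended_use": "Route customer tickets to the right queue.",
    "out_of_scope": "Automated account closures or refunds.",
    "known_limitations": "Lower accuracy on non-English tickets.",
    "oversight": "Human review required before any account action.",
}

def render_note(card: dict) -> str:
    lines = [f"Transparency note: {card['system']}", "-" * 40]
    for f in ("intended_use", "out_of_scope", "known_limitations", "oversight"):
        lines.append(f"{f.replace('_', ' ').title()}: {card[f]}")
    return "\n".join(lines)

print(render_note(MODEL_CARD))
```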

Step 6: Provide or cooperate in remediation

  • If you caused or contributed to harm, support remediation proportional to the impact and aim to restore affected parties where possible.
  • Build channels for reports and grievances. Make sure issues can escalate and get resolved (an intake sketch follows this list).
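
A minimal sketch of a grievance record with a simple escalation rule; the 72-hour SLA and the status values are illustrative assumptions.

```python
# A minimal sketch of a Step 6 grievance intake record with an escalation rule.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

ESCALATION_SLA = timedelta(hours=72)  # hypothetical: unresolved reports escalate after 72h

@dataclass
class Grievance:
    reporter: str
    description: str
    received: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"  # open -> investigating -> resolved

    def needs_escalation(self) -> bool:
        age = datetime.now(timezone.utc) - self.received
        return self.status != "resolved" and age > ESCALATION_SLA

g = Grievance(reporter="user@example.com", description="Model output exposed personal data")
print(g.needs_escalation())  # False right after intake; True once the SLA lapses unresolved
```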

How this works with other frameworks you may use

The OECD approach aligns with many AI risk management and governance frameworks. You don't have to start from scratch; map your existing controls and fill gaps. Whether you use internal policies, security standards or AI-specific frameworks, the six steps give you a unifying structure.
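One lightweight way to do that mapping is a gap table keyed by the six steps. In this sketch, the step names come from the guidance but the control IDs are illustrative assumptions.

```python
# A minimal sketch of mapping existing controls onto the six OECD steps to find gaps.
CONTROL_MAP = {
    "1. Embed RBC in policies and systems": ["POL-01 AI policy", "GOV-03 approval gates"],
    "2. Identify and assess impacts": ["RISK-02 risk register"],
    "3. Cease, prevent and mitigate": ["SEC-07 abuse monitoring"],
    "4. Track implementation and results": [],   # empty list = a gap to fill
    "5. Communicate actions": ["COM-01 trust page"],
    "6. Provide or cooperate in remediation": [],
}

for step, controls in CONTROL_MAP.items():
    status = ", ".join(controls) if controls else "GAP: no control mapped"
    print(f"{step}: {status}")
```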

High-risk signals to watch

  • Safety-critical or finance/health contexts; systems affecting rights, access or employment.
  • Generative models with dual-use potential; capabilities that can be repurposed for harm.
  • Opaque models with limited explainability where users may over-trust outputs.
  • Weak data provenance, unclear licensing or privacy exposure.
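
The bullets above can become a quick, repeatable screening pass. This sketch is illustrative: the signal keys and the example use case are assumptions, and tripping any signal should route the system into a deeper Step 2 assessment.

```python
# A minimal sketch that screens a use case against the high-risk signals above.
SIGNALS = {
    "safety_critical": "Safety-critical, finance, or health context",
    "affects_rights": "Affects rights, access, or employment",
    "dual_use": "Generative or repurposable capabilities",
    "opaque": "Limited explainability with likely over-trust",
    "weak_provenance": "Weak data provenance, licensing, or privacy posture",
}

def screen(use_case: dict) -> list[str]:
    """Return the human-readable signals a use case trips."""
    return [desc for key, desc in SIGNALS.items() if use_case.get(key)]

hiring_screen = {"affects_rights": True, "opaque": True}  # hypothetical use case
print(screen(hiring_screen))  # two signals -> deeper Step 2 assessment
```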

What this looks like for different teams

For upstream providers (Group 1)

  • Data: provenance, licensing, consent, PII handling, dataset sheets (sketched after this list) and versioning.
  • Models/code: model cards, training data summaries where permissible, eval suites, safety tests and secure model delivery.
  • Governance: terms that restrict harmful use, monitoring for misuse, partner due diligence.
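
As a sketch of the data bullet, here is a dataset sheet expressed as structured, validatable metadata. The field names are assumptions loosely inspired by datasheets-for-datasets practice, not an OECD-mandated schema.

```python
# A minimal sketch of a Group 1 dataset sheet as structured metadata.
DATASET_SHEET = {
    "name": "support-tickets-2025",        # hypothetical dataset
    "version": "1.3.0",
    "provenance": "Exported from internal CRM, 2025-01 to 2025-12",
    "license": "internal-use-only",
    "consent_basis": "Customer ToS, section 7",   # hypothetical reference
    "pii_handling": "Emails and phone numbers redacted before release",
}

def validate_sheet(sheet: dict) -> list[str]:
    """Flag missing fields so incomplete datasets cannot ship downstream."""
    required = ("name", "version", "provenance", "license", "consent_basis", "pii_handling")
    return [f for f in required if not sheet.get(f)]

print(validate_sheet(DATASET_SHEET))  # [] when the sheet is complete
```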

For builders/operators (Group 2)

  • Dev process: risk checkpoints at design, pre-deploy and post-deploy; red teaming and adversarial testing.
  • Security: supply chain integrity, model hardening, prompt injection defenses, data exfiltration controls.
  • Reliability: domain-specific evals, bias testing, monitoring for drift (sketched after this list) and rollback plans.
  • UX: clear disclosures, human oversight where needed, safe defaults and rate limits.
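
As a sketch of the reliability bullet, here is a minimal drift check against a deployment-time baseline. The metric and thresholds are assumptions; real monitoring would also test input distributions statistically rather than rely on accuracy alone.

```python
# A minimal sketch of Group 2 drift monitoring: compare live accuracy to a baseline.
BASELINE_ACCURACY = 0.91   # hypothetical figure measured at deployment
DRIFT_TOLERANCE = 0.05     # trigger a rollback review beyond this drop

def check_drift(live_accuracy: float) -> str:
    drop = BASELINE_ACCURACY - live_accuracy
    if drop > DRIFT_TOLERANCE:
        return "alert: drift exceeds tolerance, invoke rollback plan"
    return "ok"

print(check_drift(0.84))  # alert: a 7-point drop breaches the tolerance
```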

For downstream users (Group 3)

  • Procurement: ask for model cards, eval results, data rights assertions, security posture and incident history.
  • Integration: context-specific guardrails, access controls, logging and misuse detection.
  • Operations: user training, human-in-the-loop where outcomes matter (a routing sketch follows this list), continuous review of real-world performance and harms.
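
As a sketch of human-in-the-loop routing, the rule below sends high-stakes or low-confidence outputs to a reviewer. The stakes list and confidence floor are assumptions.

```python
# A minimal sketch of Group 3 human-in-the-loop gating: low-confidence or
# high-stakes outputs go to a reviewer before reaching the user.
HIGH_STAKES = {"credit_decision", "medical_advice", "account_closure"}
CONFIDENCE_FLOOR = 0.85

def route_output(task: str, confidence: float) -> str:
    if task in HIGH_STAKES or confidence < CONFIDENCE_FLOOR:
        return "human_review"   # a person signs off before release
    return "auto_release"

print(route_output("credit_decision", 0.97))  # human_review: stakes, not confidence
```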

Getting started: a simple rollout plan

  • 30 days: Write or refresh your AI policy. Stand up an AI risk register. Identify top 3 high-risk use cases and owners.
  • 60 days: Add lifecycle risk gates to your SDLC. Ship or adopt an eval suite. Set up basic incident intake and transparency notes.
  • 90 days: Run a supplier/user due diligence pass. Close the highest-severity findings. Publish a concise trust page and escalation process.
