From Pilots to Production: Scaling AI with a Digital Thread in New Product Development

AI in product development works when it's scaled via a Digital Thread, not stuck in point tools. The paper shows how to connect data, tools, and guardrails for faster releases.

Categorized in: AI News, Product Development
Published on: Nov 07, 2025

AI in New Product Development: From Pilots to Enterprise-Scale Impact

Accenture, the German Research Center for Artificial Intelligence (DFKI), and Fraunhofer ISST have released a joint white paper on how AI changes the way products are conceived, engineered, and launched. The study moves past isolated experiments and lays out a practical path to scale AI across product development.

If your team is still applying AI to single tasks, like requirements tagging or anomaly detection, you're leaving value on the table. The bigger win comes from connecting data, tools, and domains through a reliable Digital Thread that carries product knowledge from concept to release.

Why single-point AI isn't enough

Point solutions create local efficiency. But product decisions happen across requirements, architecture, simulation, testing, and release, often with conflicting data and disconnected tools. A Digital Thread pulls these pieces together so knowledge flows across disciplines and decisions improve at the system level.

With shared context, engineers can reuse learnings, trace impacts, and speed up change without rework. That's where AI moves from "nice demo" to measurable cycle-time and quality gains.

Five dimensions for scaling AI across engineering

  • Data Quality: Clean, complete, and consistent product data with clear lineage and versioning. No signal, no AI.
  • Interoperability: Open formats, shared identifiers, and connectors that let tools "talk" across the lifecycle (a minimal linking sketch follows this list).
  • AI Platforms: A managed stack to develop, deploy, monitor, and secure models and agents at scale.
  • Context Management: Ways to inject requirements, geometry, simulation results, test evidence, and constraints into AI workflows so outputs are grounded.
  • Federated Governance: Guardrails for data access, compliance, safety, and risk, owned across business, IT, and engineering.

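To make the interoperability and context-management points concrete, here is a minimal sketch of a Digital Thread as a typed link graph keyed by shared identifiers. The artifact IDs, types, and link names are illustrative assumptions; a real implementation would sit on PLM/ALM APIs and a persistent graph store rather than an in-memory dictionary.

```python
# Minimal sketch: a Digital Thread as a typed link graph over shared IDs.
# All IDs, artifact types, and link names below are hypothetical examples.
from collections import defaultdict

# Each artifact is registered once under a stable, tool-neutral identifier.
artifacts = {
    "REQ-001": {"type": "requirement", "title": "Housing max temperature 85 C", "version": "v3"},
    "PART-7":  {"type": "part",        "title": "Motor housing",                "version": "vB"},
    "SIM-12":  {"type": "simulation",  "title": "Thermal analysis run",         "version": "v1"},
    "TEST-42": {"type": "test",        "title": "Thermal endurance test",       "version": "v2"},
}

# Directed, typed links carry the cross-domain relationships.
links = [
    ("REQ-001", "allocated_to", "PART-7"),
    ("REQ-001", "verified_by",  "TEST-42"),
    ("PART-7",  "analyzed_by",  "SIM-12"),
]

def trace(artifact_id: str) -> dict:
    """Collect everything directly linked to one artifact, grouped by link type."""
    neighbours = defaultdict(list)
    for src, relation, dst in links:
        if src == artifact_id:
            neighbours[relation].append({"id": dst, **artifacts[dst]})
        elif dst == artifact_id:
            neighbours["inverse:" + relation].append({"id": src, **artifacts[src]})
    return dict(neighbours)

# Example: a quick impact view when REQ-001 changes.
print(trace("REQ-001"))
```

The storage is beside the point; what matters is that every tool references the same identifiers and typed links, so engineers and AI agents alike can follow the thread from a requirement to its parts, simulations, and tests.
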
Where AI moves the needle across the lifecycle

  • Requirements management: Auto-classify, de-duplicate, and trace requirements to design, code, tests, and known issues (see the de-duplication sketch after this list).
  • Product architecture: Suggest component variants, interface contracts, and change impacts based on past designs.
  • Simulation: Generate scenarios, tune parameters, and flag model gaps using historical performance and field data.
  • System testing: Propose test cases, map coverage to risks, and analyze failures with cross-domain context.
  • Release and change/configuration management: Orchestrate approvals, verify dependencies, and maintain audit trails automatically.
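
As a toy illustration of the requirements item above, the sketch below flags near-duplicate requirement texts with a simple lexical similarity score. The texts and the 0.7 threshold are made up; production pipelines would typically use semantic embeddings plus reviewer confirmation.

```python
# Toy sketch: flag near-duplicate requirements with a lexical similarity score.
# Example texts and the 0.7 threshold are illustrative assumptions.
from difflib import SequenceMatcher
from itertools import combinations

requirements = {
    "REQ-101": "The housing shall withstand temperatures up to 85 C.",
    "REQ-205": "Housing must withstand a temperature of up to 85 C.",
    "REQ-310": "The pump shall deliver at least 3 liters per minute.",
}

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Compare every pair and surface likely duplicates for human review.
for (id_a, text_a), (id_b, text_b) in combinations(requirements.items(), 2):
    score = similarity(text_a, text_b)
    if score > 0.7:
        print(f"possible duplicate: {id_a} <-> {id_b} (score {score:.2f})")
```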

Two integration patterns matter. Vertically integrated use cases optimize within a domain (e.g., model calibration). Horizontally integrated use cases connect domains so AI can reason across systems and transfer knowledge where it matters most.

What's next: agentic AI for cross-domain automation

The study points to agentic AI: systems that reason, plan, and orchestrate workflows across engineering tools. Think multi-step change processes that span PLM, ALM, simulation, and testing, executed end to end with policy checks and traceability.

This isn't just a model. It's AI plus data plus orchestration, running within clear governance so outcomes are safe, compliant, and explainable.

What product development leaders should do now

  • Connect your data: Establish shared IDs across parts, requirements, tests, and configurations. Start the Digital Thread where you have the strongest business case.
  • Fix quality and semantics: Define reference data models, versioning rules, and minimum data standards for engineering artifacts.
  • Pick an AI platform strategy: Decide what you build, buy, and integrate. Standardize on monitoring, security, and model lifecycle.
  • Add context to AI: Wire requirements, constraints, and verification evidence into prompts, graphs, or RAG so outputs reflect reality (see the sketch after this list).
  • Govern with the business: Set federated policies for data access, safety, and compliance, owned jointly by engineering, IT, and data.
  • Pilot across boundaries: Choose use cases that cross domains (e.g., requirements-to-test traceability) to prove system-level value.
  • Build skills and playbooks: Train engineers on AI-assisted workflows and define SOPs for reviews, exceptions, and sign-offs.
  • Measure outcomes: Track cycle time, quality, rework, and risk, then scale what works.
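
For the "add context to AI" step, here is a minimal sketch of assembling traceable engineering context into a grounded prompt. The record fields, IDs, and prompt layout are assumptions for illustration; in practice the artifacts would be retrieved from PLM/ALM systems or a knowledge graph (e.g. via RAG over the Digital Thread) and the resulting string passed to whichever model platform you standardize on.

```python
# Minimal sketch: assemble traceable engineering artifacts into a grounded prompt.
# The record fields, IDs, and prompt layout are illustrative assumptions.

def build_context(change_request: str, requirements: list, constraints: list, evidence: list) -> str:
    """Concatenate artifacts into a context block that cites their IDs."""
    lines = ["Engineering context:"]
    lines += [f"- Requirement {r['id']}: {r['text']}" for r in requirements]
    lines += [f"- Constraint {c['id']}: {c['text']}" for c in constraints]
    lines += [f"- Evidence {e['id']}: {e['text']}" for e in evidence]
    lines += ["", "Task:", change_request,
              "Answer only from the context above and cite artifact IDs."]
    return "\n".join(lines)

prompt = build_context(
    change_request="Assess the impact of raising the housing temperature limit to 95 C.",
    requirements=[{"id": "REQ-001", "text": "Housing shall withstand up to 85 C."}],
    constraints=[{"id": "CON-09", "text": "Selected material is rated to 90 C."}],
    evidence=[{"id": "TEST-42", "text": "Endurance test passed at 85 C (report v2)."}],
)
print(prompt)  # this string would be sent to the model, with its answer traced back to the cited IDs
```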

The message is blunt: teams that connect engineering data and scale AI now will move faster and spend less to reach market. Teams that wait will wrestle with fragmentation and stalled innovation.

Learn more

Read the announcement and details from DFKI: Accenture, DFKI, and Fraunhofer ISST publish joint white paper.

If you're planning capability building for engineering teams around AI workflow automation, explore curated learning by job role: Complete AI Training - Courses by job.

Contacts

Contact:
Dr.-Ing. Dirk Alexander Molitor
Engineering and AI Consultant, Accenture
dirk.molitor@accenture.com

Scientific contact:
Dr.-Ing. Daniel Porta
Group Lead Research Department Cognitive Assistants, DFKI
Daniel.Porta@dfki.de
+49 681 85775 5272

