Nissan extends AI partnership with Monolith to speed physical vehicle testing
December 1, 2025
Nissan and Monolith have extended their strategic partnership for three more years. The goal: reduce physical testing and compress development time across Nissan's European programs while keeping quality and performance intact.
The move supports Nissan's broader effort to bring products to customers faster by simplifying processes and working with partners who can deliver measurable efficiency.
From prototypes to predictions
After proving value on the new Sunderland-built Nissan LEAF, Monolith's AI will now support a wider set of tests across future European models. Engineers at Nissan Technical Centre Europe are using decades of test data, more than 90 years' worth, to train models that accurately predict physical test outcomes.
Fewer prototype loops, fewer test iterations. More time for engineers to make decisions and fix the things that actually move the needle.
What changed in practice
In a recent application, Nissan used AI to assess bolt joint performance in the chassis. The model recommended an optimal torque range and flagged which additional tests would produce the most useful insight.
Result: a 17% reduction in physical testing versus the previous process. Nissan estimates that scaling this approach across European programs could cut testing time by up to half.
What leaders said
"By integrating Monolith's advanced AI-driven engineering software and decades of testing data, we're able to simulate and validate vehicle performance with remarkable precision. Their machine learning models, trained on a combination of historical test data and digital simulations, allow us to reduce reliance on physical prototypes - cutting development time and resource use significantly. This approach accelerates our time to market and supports our commitment to innovation and sustainability. As we look to the future, AI will play an increasingly central role in how we design, test, and deliver the next generation of vehicles to our customers sooner," said Emma Deutsch, Director of Customer Orientated Engineering and Test Operations, Nissan Technical Centre Europe.
Monolith's platform turns historical tests and simulation outputs into predictive models and smart test plans. Tools like the Next Test Recommender and Anomaly Detector help engineers choose the next best experiment and spot suspect results before they snowball into delays.
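To make the "next best experiment" idea concrete, here is a minimal sketch, not Monolith's actual algorithm: a small bootstrap ensemble is fit on past test results, and the untested condition where the ensemble disagrees most is recommended next. All function names, data, and numbers are invented for illustration.

```python
import random
import statistics

def fit_linear(points):
    """Least-squares slope and intercept for (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def recommend_next_test(history, candidates, n_models=20, seed=0):
    """Pick the candidate condition with the widest ensemble spread."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap resample of past tests, one model per resample
        sample = [rng.choice(history) for _ in history]
        models.append(fit_linear(sample))
    def spread(x):
        preds = [a * x + b for a, b in models]
        return statistics.pstdev(preds)
    return max(candidates, key=spread)

# Hypothetical past torque tests: (clamp load, torque retention)
history = [(10, 0.91), (12, 0.89), (14, 0.86), (16, 0.84)]
candidates = [11, 15, 25]  # 25 lies well outside the tested range
print(recommend_next_test(history, candidates))  # prints 25
```

The far-out-of-range candidate wins because extrapolation amplifies ensemble disagreement; a production test planner would also weigh cost, safety, and rig availability.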
"Our mission is to empower engineers with AI tools that unlock smarter, faster product development. The results of our work with Nissan demonstrate how machine learning can drive efficiency and innovation in automotive engineering. We're thrilled to continue this journey together," added Dr. Richard Ahlfeld, CEO and Founder of Monolith.
Why this matters to product development teams
- Shift from "test everything" to "test what matters." Use models to prioritize the next best experiment and focus on high-value conditions.
- Turn legacy data into an asset. Clean, label, and centralize historical test results and simulations so they're usable for training and validation.
- Blend simulation with measurements. Train on both to close gaps and reduce surprises when you hit the lab or track.
- Start with a narrow, high-frequency domain. Fasteners, NVH sweeps, durability rigs: areas with repeatable setups and rich data pay off quickly.
- Instrument for anomaly detection. Catch setup errors, rig drift, and sensor faults early to protect data quality and cycle time.
- Bake AI into the workflow. Make "next test" recommendations visible where engineers plan work, not in a separate tool nobody opens.
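The anomaly-detection point above can be sketched with a simple robust screen: a median/MAD z-score flags readings that deviate from the run's baseline before they contaminate a test campaign. The threshold, channel name, and values are assumptions, not from the article.

```python
import statistics

def robust_flags(readings, threshold=3.5):
    """Flag readings whose robust z-score (median/MAD) exceeds threshold."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        # No spread in the data, nothing to flag
        return [False] * len(readings)
    # 0.6745 scales MAD to approximate a standard deviation for normal data
    return [abs(0.6745 * (r - med) / mad) > threshold for r in readings]

torque = [101.2, 100.8, 101.5, 100.9, 87.3, 101.1]  # one suspect drop
flags = robust_flags(torque)
print([i for i, f in enumerate(flags) if f])  # prints [4]
```

Median and MAD are preferred over mean and standard deviation here because a single rig fault would otherwise inflate the very baseline used to detect it.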
Metrics to track
- Physical tests per program: use the 17% reduction as a baseline; target bigger gains in repeatable domains.
- Prediction error vs. physical test for key KPIs (e.g., torque retention, NVH, durability).
- Test-to-insight lead time and engineer hours shifted from execution to decision-making.
- Prototype count and cost per prototype avoided.
- Quality escapes and retest rates; anomaly flags caught pre-test.
- Resource and environmental impact (energy per test, material scrap) where tracked.
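The "prediction error vs. physical test" metric can be tracked with something as simple as mean absolute percentage error (MAPE) per KPI; one possible sketch, with invented numbers, is:

```python
def mape(predicted, measured):
    """Mean absolute percentage error between model and physical results."""
    pairs = list(zip(predicted, measured))
    return 100 * sum(abs(p - m) / abs(m) for p, m in pairs) / len(pairs)

pred = [52.0, 48.5, 61.0]  # model-predicted torque retention (%)
meas = [50.0, 50.0, 60.0]  # physical test results
print(round(mape(pred, meas), 2))  # prints 2.89
```

Tracking this per KPI over time shows whether model accuracy is holding as designs evolve, which is the signal that justifies retiring further physical tests.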
Risk checkpoints
- Model drift: revalidate after major design or process changes and monitor performance over time.
- Data bias: ensure coverage across environments, suppliers, and edge cases.
- Safety-critical domains: keep essential physical tests; use AI to target the rest.
- Governance: keep traceability from recommendation to decision to result; retain SME review in the loop.
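The model-drift checkpoint above can be automated in its simplest form: compare the model's recent absolute errors against a validation-time baseline and trigger revalidation when they grow beyond a chosen multiple. The factor of 1.5 and the error values are illustrative assumptions.

```python
import statistics

def needs_revalidation(baseline_errors, recent_errors, factor=1.5):
    """True when recent mean error exceeds the baseline by `factor`."""
    return statistics.mean(recent_errors) > factor * statistics.mean(baseline_errors)

baseline = [0.8, 1.1, 0.9, 1.0]  # errors recorded during validation
recent = [1.9, 2.2, 1.8]         # errors after a design change
print(needs_revalidation(baseline, recent))  # prints True
```

A real deployment would run this per KPI and per program, and pair the automated flag with the SME review noted under governance.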
Learn more
Explore Monolith's platform and case studies: monolithai.com.