Fujitsu and Nvidia team up to build AI infrastructure for agents in robotics and beyond
Fujitsu and Nvidia have agreed to co-develop AI infrastructure that supports software agents in robotics and other sectors. The plan centers on combining Fujitsu's CPUs with Nvidia's GPUs to deliver large-scale training and real-time inference, with initial infrastructure build-out targeted by 2030.
At a press event in Tokyo, Fujitsu's leadership emphasized that more compute will push AI forward, and that a full-stack approach can be adapted to healthcare, manufacturing, and customer service. Nvidia's leadership echoed the need to build the AI backbone in Japan and globally. Fujitsu also announced a partnership with Yaskawa Electric to create smart robots using Yaskawa's AI robotics technology.
Why this matters for Operations
- Capacity planning: Expect demand for GPU-accelerated workloads (training and edge inference). Start modeling power, cooling, and floor space for mixed CPU/GPU clusters (see the sizing sketch after this list).
- Hybrid compute strategy: On-prem plus cloud will be standard to manage cost, latency, and data sovereignty. Define what runs where by workload and risk profile.
- Data readiness: Agentic systems depend on clean, permissioned, event-rich data. Prioritize data pipelines, governance, and observability before scaling pilots.
- Vendor management: Integration across CPUs, GPUs, networking, and MLOps stacks will require clear SLAs, upgrade paths, and exit options to reduce lock-in.
- Risk and compliance: Map how autonomous actions are approved, logged, and audited. Build human-in-the-loop controls for safety-critical tasks (a minimal approval-gate sketch follows below).
- Supply chain realism: GPU lead times and costs will fluctuate. Secure allocations early and model cost scenarios for 12-36 months (a simple scenario model follows below).
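To make the capacity-planning bullet concrete, here is a minimal sizing sketch in Python. Every constant (per-server draw, PUE, rack budget, floor space) is an illustrative assumption, not a vendor figure; swap in measured numbers from your facilities team and hardware quotes.

```python
import math

# All constants below are illustrative assumptions; replace them with
# measured figures from your vendors and facilities team.
GPU_SERVER_KW = 10.2   # assumed draw per 8-GPU server at load (kW)
CPU_SERVER_KW = 0.8    # assumed draw per CPU server at load (kW)
PUE = 1.4              # assumed facility power usage effectiveness
RACK_BUDGET_KW = 17.0  # assumed usable power per rack (kW)
RACK_FLOOR_M2 = 2.5    # assumed floor space per rack incl. clearance (m^2)

def size_cluster(gpu_servers: int, cpu_servers: int) -> dict:
    """Estimate IT load, total facility power, rack count, and floor space."""
    it_load_kw = gpu_servers * GPU_SERVER_KW + cpu_servers * CPU_SERVER_KW
    facility_kw = it_load_kw * PUE  # IT load plus cooling/distribution overhead
    racks = math.ceil(it_load_kw / RACK_BUDGET_KW)  # power-bound rack count
    return {
        "it_load_kw": round(it_load_kw, 1),
        "facility_kw": round(facility_kw, 1),
        "racks": racks,
        "floor_m2": round(racks * RACK_FLOOR_M2, 1),
    }

if __name__ == "__main__":
    # Example: 16 GPU servers plus 40 CPU servers for pre/post-processing.
    print(size_cluster(gpu_servers=16, cpu_servers=40))
```

Even at this fidelity, the model surfaces the key planning fact: cooling and power, not floor space, are usually the binding constraint for GPU racks.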
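For the risk-and-compliance bullet, a human-in-the-loop control can be as simple as a risk-thresholded gate where every decision is written to an audit log. The risk scoring, threshold, and approver interface below are illustrative assumptions, not a standard.

```python
import json
import time

AUTO_APPROVE_BELOW = 0.3  # assumed risk threshold for autonomous execution

def audit(event: dict) -> None:
    """Append-only audit record; stdout stands in for a real log sink."""
    print(json.dumps({"ts": time.time(), **event}))

def execute_action(action: str, risk: float, approver=None) -> bool:
    """Run low-risk actions autonomously; route the rest to a human gate."""
    if risk < AUTO_APPROVE_BELOW:
        audit({"action": action, "risk": risk, "mode": "auto", "ok": True})
        return True
    approved = bool(approver and approver(action, risk))
    audit({"action": action, "risk": risk, "mode": "human", "ok": approved})
    return approved

if __name__ == "__main__":
    # A stand-in approver; in practice this is a review queue or ticket.
    deny_all = lambda action, risk: False
    execute_action("reorder spare part", risk=0.1, approver=deny_all)
    execute_action("halt production line", risk=0.9, approver=deny_all)
```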
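And for the supply-chain bullet, here is a simple 12-36 month cost scenario model. The per-GPU price, growth rates, and drift rates are placeholder assumptions; plug in actual quotes and your own demand forecast.

```python
# Placeholder assumption: blended cost per GPU per month (USD), covering
# hardware amortization or cloud rental plus power and support.
MONTHLY_GPU_COST = 2_500.0

def scenario_cost(gpus_now: int, monthly_growth: float,
                  price_drift: float, months: int) -> float:
    """Cumulative spend with compounding fleet growth and price drift."""
    total, fleet, price = 0.0, float(gpus_now), MONTHLY_GPU_COST
    for _ in range(months):
        total += fleet * price
        fleet *= 1 + monthly_growth   # demand compounds monthly
        price *= 1 + price_drift      # price inflation or decline
    return total

if __name__ == "__main__":
    for label, growth, drift in [("low", 0.01, -0.005),
                                 ("base", 0.03, 0.0),
                                 ("high", 0.06, 0.01)]:
        for horizon in (12, 24, 36):
            cost = scenario_cost(gpus_now=64, monthly_growth=growth,
                                 price_drift=drift, months=horizon)
            print(f"{label:>4} scenario, {horizon} mo: ${cost:,.0f}")
```

Running the three scenarios side by side gives finance a defensible range rather than a single point estimate, which is what allocation negotiations tend to require.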
What to prepare in the next 12-24 months
- Run 2-3 high-impact pilots per site: one in robotics/automation (e.g., pick-and-place, visual inspection), one in customer operations (agent assist), and one in predictive maintenance.
- Stand up an AI operations playbook: incident response for models/agents, drift monitoring, rollback procedures, and model/version registries (see the drift-check sketch after this list).
- Standardize the stack: containerized inference, GPU scheduling, and storage tiers for hot/cold data. Align on observability (metrics, traces, logs) for model and system health (a pod-spec sketch follows below).
- Train frontline teams: upskill supervisors, technicians, and planners on AI-assisted workflows and exception handling.
- Budget by outcome: treat GPU/AI spend as a portfolio tied to cycle time, throughput, quality, and service levels; retire pilots that don't return value (see the portfolio sketch below).
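For the playbook bullet, here is a minimal drift-check sketch using the population stability index (PSI), one common input-drift metric. The bin count, the 0.10/0.25 thresholds (widely cited rules of thumb), and the rollback hook are illustrative assumptions, not a standard.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

def drift_action(score: float) -> str:
    """Map a PSI score to a playbook action (thresholds are rules of thumb)."""
    if score < 0.10:
        return "ok"
    if score < 0.25:
        return "alert: investigate and widen monitoring"
    return "rollback: pin previous model version from the registry"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
    live = rng.normal(0.4, 1.2, 10_000)       # shifted production traffic
    score = psi(reference, live)
    print(f"PSI={score:.3f} -> {drift_action(score)}")
```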
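For the stack-standardization bullet, a common pattern is to template GPU inference workloads as Kubernetes pods, with GPUs requested through the NVIDIA device plugin's nvidia.com/gpu resource. This sketch uses the official kubernetes Python client; the image name, resource sizes, and GPU count are placeholder assumptions.

```python
import json

from kubernetes import client

def inference_pod(name: str, image: str, gpus: int = 1) -> client.V1Pod:
    """Build a standardized GPU inference pod spec (not applied anywhere)."""
    container = client.V1Container(
        name=name,
        image=image,
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "16Gi"},  # placeholder sizes
            # The NVIDIA device plugin exposes GPUs as nvidia.com/gpu.
            limits={"nvidia.com/gpu": str(gpus)},
        ),
    )
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name=name, labels={"tier": "inference"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    pod = inference_pod("vision-inference", "example.com/vision-model:0.1")
    # Render the manifest instead of applying it to a live cluster.
    print(json.dumps(client.ApiClient().sanitize_for_serialization(pod),
                     indent=2))
```

Keeping the template in code rather than hand-edited YAML makes it easier to enforce one scheduling and labeling convention across sites.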
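Finally, for the budget-by-outcome bullet, outcome-based budgeting can be run as a simple portfolio review: score each pilot by value returned per dollar of GPU/AI spend and flag candidates for retirement. The pilots, dollar figures, and cutoff below are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Pilot:
    name: str
    monthly_spend: float  # GPU/AI run cost (USD)
    monthly_value: float  # measured savings or revenue impact (USD)

RETIRE_BELOW = 1.0  # assumed cutoff: value returned per dollar spent

def review(portfolio: list[Pilot]) -> None:
    """Rank pilots by ROI and flag those below the retirement cutoff."""
    for p in sorted(portfolio, key=lambda q: q.monthly_value / q.monthly_spend):
        roi = p.monthly_value / p.monthly_spend
        verdict = "retire" if roi < RETIRE_BELOW else "keep"
        print(f"{p.name:<22} ROI {roi:4.2f}x -> {verdict}")

if __name__ == "__main__":
    review([
        Pilot("visual-inspection", 18_000, 41_000),
        Pilot("agent-assist", 12_000, 9_500),
        Pilot("predictive-maint", 22_000, 30_000),
    ])
```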
Where agents will likely show up first
- Manufacturing: vision-guided robotics, adaptive quality checks, line balancing, and autonomous material handling.
- Healthcare operations: scheduling, triage support, document processing, and prior-authorization workflows.
- Customer service: multimodal agent assist, self-serve issue resolution, and knowledge retrieval with approval gates.
- Field service: route planning, part forecasting, and procedure guidance with on-device inference.
For Operations leaders, the signal is clear: agent-capable infrastructure is moving from concept to build phase. Align facilities, data, and teams now so you can plug into GPU-backed stacks as they become available.
Learn more about the platforms involved:
Nvidia AI platform
Yaskawa Electric (robotics)
If you're planning workforce upskilling for AI-enabled operations, explore role-based options here:
Complete AI Training - Courses by Job