Hyundai Motor Group's on-device AI chips for robots are ready - here's what it means for builders
LAS VEGAS - Hyundai Motor Group has finished development of its on-device AI chips for robots and says they're ready for mass production. The chips were developed with DEEPX over three years and presented at CES Foundry 2026, a new program focused on AI, quantum, and other advanced tech.
The move is part of Hyundai's bigger push into "physical AI" - the tight integration of hardware and intelligence at the edge. In plain terms: fewer cloud dependencies, quicker decisions, and more reliable robots that can keep working where connectivity gets spotty.
Key takeaways for IT and dev teams
- On-device inference means low latency and consistent behavior, even in basements, underground parking, and dense logistics hubs.
- Ultra-low energy use extends runtime and opens the door to smaller, more mobile form factors.
- Fewer cloud roundtrips reduce exposure to external attacks and help with data locality.
- Mass-production readiness suggests Hyundai is building a steadier supply chain for robots across factories, logistics, and service deployments.
Why on-device AI matters on the floor
Robots can't stall every time Wi-Fi hiccups. Hyundai's chip is built for real-time perception and decision-making without leaning on the cloud.
That's key in warehouses, hospitals, and airports where dead zones are common and uptime is non-negotiable. Less network dependence also simplifies privacy workflows for sensitive environments.
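Neither Hyundai nor DEEPX has published a developer runtime yet, so treat the snippet below as a generic sketch of the offline-first pattern rather than their API: the control loop calls a local model on every cycle and never blocks on the network. All names and timing numbers are invented for illustration.

```python
import time

CYCLE_BUDGET_S = 0.05  # hypothetical 50 ms perception/decision budget per control cycle

def local_inference(frame):
    """Placeholder for the on-chip model call; runs entirely on the robot."""
    # In a real deployment this would invoke the vendor runtime (NPU/accelerator SDK).
    return {"obstacle": False, "heading": 0.0}

def control_cycle(frame):
    start = time.monotonic()
    decision = local_inference(frame)  # no network round-trip on the critical path
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_BUDGET_S:
        # Overrun: degrade gracefully (e.g., stop or slow down) instead of stalling.
        decision = {"obstacle": True, "heading": 0.0}
    return decision

# Cloud calls, if any, stay off the critical path: queue telemetry and non-urgent
# model updates, then sync opportunistically when connectivity returns.
```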
Physical AI as a product strategy
Hyundai framed physical AI as a core revenue driver at CES 2026, alongside robots like Atlas and the self-driving Mobile Eccentric Droid. Think smarter, safer worksites that can actually scale.
"To realize the physical AI, Hyundai Motor Group is developing AI solutions and relevant software under the vision of robotization of space," said Hyun Dong-jin, vice president and head of the group's Robotics LAB. "Our ultimate goal is to build a sustainable robot ecosystem, rather than the robot development, in itself."
Where this lands first
Hospitals, airports, and logistics centers are the obvious early wins. Tasks that depend on stable, fast perception - indoor navigation, load handling, patient transport, asset delivery - all benefit from local inference and lower latency.
Hyundai and Kia also expect gains across their own value chain, from auto production to movement of parts and finished vehicles.
What to watch next (developer lens)
- Tooling and SDKs: model conversion, quantization paths, and supported runtimes (a generic conversion sketch follows this list).
- Model formats: ONNX and other common graph formats would smooth adoption - watch for specifics.
- Scheduling and safety: real-time guarantees, fallbacks, and sandboxing for on-robot apps.
- Fleet ops: secure OTA, version pinning, remote telemetry, and rollback strategy.
- Data loops: on-device logging with privacy controls, plus pipelines for improving models without sending raw data off the robot.
- Compliance: hospital and airport standards often require clear audit trails and strict access controls.
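Until the actual toolchain ships, the closest stand-in is the generic edge path many runtimes already accept: export to ONNX, then quantize. The sketch below assumes PyTorch plus ONNX Runtime's dynamic quantization; the model, shapes, and file names are placeholders, and a dedicated NPU compiler would likely replace or extend the final step.

```python
import torch
import torchvision.models as models
from onnxruntime.quantization import quantize_dynamic, QuantType

# Export a small vision backbone to ONNX (stand-in for a perception model).
model = models.mobilenet_v3_small(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "perception.onnx",
    input_names=["image"], output_names=["logits"],
    opset_version=17,
)

# Dynamic INT8 quantization of the weights to shrink the model for edge deployment.
# A vendor NPU toolchain would typically take over from here.
quantize_dynamic("perception.onnx", "perception.int8.onnx", weight_type=QuantType.QInt8)
```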
If you're planning pilots
- Map network dead zones and set an offline-first policy for critical tasks.
- Define an edge inference budget (latency, memory, energy) per task - see the budget sketch after this list.
- Separate safety-critical controls from higher-level autonomy stacks.
- Lock down update channels with signed binaries and staged rollouts (signature-check sketch below).
- Set up observability early: local logs with periodic sync, synthetic tests, and battery health metrics.
- Run tabletop failure drills: sensor dropouts, GNSS denial, and blocked paths.
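To make the inference-budget bullet concrete, here's a minimal sketch of per-task budgets you can check both in CI and on the robot - every number and task name below is made up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceBudget:
    """Per-task ceiling for on-device inference; values here are placeholders."""
    max_latency_ms: float
    max_memory_mb: float
    max_energy_mj: float  # energy per inference, in millijoules

BUDGETS = {
    "obstacle_detection": InferenceBudget(30.0, 256.0, 50.0),
    "asset_delivery_nav": InferenceBudget(100.0, 512.0, 120.0),
}

def within_budget(task: str, latency_ms: float, memory_mb: float, energy_mj: float) -> bool:
    b = BUDGETS[task]
    return (latency_ms <= b.max_latency_ms
            and memory_mb <= b.max_memory_mb
            and energy_mj <= b.max_energy_mj)

# Example: flag a regression before rollout if a new model blows the budget.
assert within_budget("obstacle_detection", latency_ms=22.5, memory_mb=200.0, energy_mj=41.0)
```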
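And for the signed-binaries bullet, a sketch of verifying a detached Ed25519 signature before installing an OTA package, using the cryptography package; the key handling, file names, and rollout flow around it are illustrative.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_update(package_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Reject any OTA package whose detached signature doesn't verify."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)  # 32-byte raw key
    with open(package_path, "rb") as f:
        payload = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Install only if verify_update(...) returns True; pair this with staged
# rollouts and a tested rollback path.
```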
Hyundai's direction fits a broader shift: more intelligence at the edge, less dependence on the cloud, and a cleaner path to scale in places where connectivity can't be trusted. Expect interest from operators who need predictable latency and stronger data control.