Honda and Mythic Team Up on 100x-Efficient Analog AI Chip Aiming for Zero Crashes

Honda and Mythic are building an analog AI SoC for safer, smarter cars, with deployment targeted for the late 2020s to early 2030s. The projected energy gains and six-figure TOPS budgets are meant to push ADAS and in-car assistants fully on-device.

Categorized in: AI News, IT and Development
Published on: Feb 10, 2026

Honda and Mythic Team Up on Energy-Efficient Analog AI SoC for Safer, Smarter Vehicles

Honda R&D and Mythic have signed a joint development deal to co-build an automotive-grade AI system-on-chip that brings analog compute-in-memory to Honda's next wave of software-defined vehicles. Honda will license Mythic's Analog Processing Unit (APU) and target deployment in the late 2020s to early 2030s. The aim is straightforward: push far more on-board AI while pulling far less power.

This aligns with Honda's long-term goal of zero traffic collision fatalities involving its motorcycles and automobiles by 2050. Lower power budgets mean more AI headroom within thermal and cost constraints, which is key for scaling advanced driver-assist and autonomous features across entire lineups, not just flagship trims.

What's new: 100x efficiency and six-figure TOPS on the edge

Mythic says its analog compute-in-memory approach delivers ~100x better energy efficiency than conventional digital AI chips by bringing memory and compute into the same layer. Less data movement and fewer trips off-chip are where the savings come from. The roadmap envisions future vehicles topping 100,000+ TOPS of AI compute within the same power envelope.
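To see why data movement matters so much, here is a minimal back-of-envelope sketch in Python. The per-operation energy constants, the 5 GMAC model size, and the DRAM access counts are illustrative assumptions for a digital baseline, not Mythic or Honda figures; the point is only that keeping weights resident removes the largest energy term.

```python
# Back-of-envelope energy model for one inference pass. The per-operation energy
# values and access counts are illustrative assumptions (rough digital-logic
# ballpark figures), not vendor measurements.
PJ_PER_MAC = 4.0          # assumed energy of one digital multiply-accumulate, in pJ
PJ_PER_DRAM_READ = 640.0  # assumed energy of one 32-bit off-chip DRAM read, in pJ

def inference_energy_mj(macs, dram_reads, pj_mac=PJ_PER_MAC, pj_dram=PJ_PER_DRAM_READ):
    """Total energy in millijoules for a single forward pass."""
    return (macs * pj_mac + dram_reads * pj_dram) * 1e-9

macs = 5e9  # hypothetical 5 GMAC perception model, per frame

# Digital accelerator that streams weights from DRAM every frame:
streaming = inference_energy_mj(macs, dram_reads=50e6)

# Weights kept resident in the compute array, so off-chip traffic is mostly I/O:
resident = inference_energy_mj(macs, dram_reads=1e6)

print(f"weights streamed: {streaming:.1f} mJ/frame")
print(f"weights resident: {resident:.1f} mJ/frame")
```

Analog execution would also shrink the per-MAC term itself; this sketch only captures the data-movement side of the argument.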

"Cars are quickly becoming petascale supercomputers on wheels," said Mythic CTO Dave Fick. "Vehicles will soon require computing performance on par with data centers, but with far tighter energy budgets." That's the gap analog inference is aiming to close.

Workloads Honda is targeting

  • Vision transformers for perception and scene understanding
  • Physics-informed neural networks for vehicle dynamics and control
  • Cloud-free large language models for in-car assistants

The common thread: high-throughput, low-latency inference without depending on a persistent data connection. That's better for privacy, resilience, and cost.

How analog compute-in-memory helps

By performing multiply-accumulate operations where the weights live, analog APU architectures reduce memory bandwidth pressure, often the biggest energy drain in AI inference. The practical upside is more model capacity and higher FPS at the same thermal design, or the same performance in a smaller, cheaper power envelope.
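A rough mental model is a resistive crossbar: weights sit in the array as conductances, inputs are driven in as voltages, and each output column's accumulated current is the dot product, digitized by an ADC. The NumPy sketch below is a simplified numerical model under those assumptions; the function name, bit-widths, and quantization scheme are ours for illustration, not Mythic's actual APU design.

```python
import numpy as np

def analog_crossbar_matvec(weights, x, dac_bits=8, adc_bits=8):
    """Simplified numerical model of an analog compute-in-memory matrix-vector multiply.

    weights: (out_features, in_features) array, held "in place" as cell conductances.
    x:       (in_features,) activations, driven onto the array as voltages.
    The multiply-accumulate happens where the weights live; only low-precision DAC
    inputs and ADC outputs cross the analog/digital boundary.
    """
    # Drive inputs through a DAC: symmetric, per-tensor quantization (assumed scheme).
    dac_levels = 2 ** (dac_bits - 1) - 1
    x_scale = max(np.abs(x).max() / dac_levels, 1e-12)
    x_analog = np.round(x / x_scale).clip(-dac_levels - 1, dac_levels) * x_scale

    # Analog accumulation: each output column sums current contributions I = G * V.
    currents = weights @ x_analog

    # Read out through an ADC: quantize the accumulated result back to digital.
    adc_levels = 2 ** (adc_bits - 1) - 1
    y_scale = max(np.abs(currents).max() / adc_levels, 1e-12)
    return np.round(currents / y_scale) * y_scale

# Example: a 256 -> 64 layer evaluated "inside" the weight array, compared with
# an exact digital matmul.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256)) * 0.05
x = rng.standard_normal(256)
print(np.allclose(analog_crossbar_matvec(W, x), W @ x, atol=0.1))  # True, within DAC/ADC error
```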

Expect a toolchain that leans on low-bit quantization and careful calibration. For developers, the preparation looks similar to squeezing every drop from digital NPUs, just with tighter attention to quantization-aware training and numeric stability.
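As a hedged sketch of what that preparation looks like, the snippet below applies the symmetric per-tensor fake quantization that quantization-aware training inserts, so accuracy at the target bit-width can be checked before any hardware is involved. The bit-widths, naming, and toy layer are assumptions for illustration, not a published Mythic toolchain.

```python
import numpy as np

def fake_quantize(t, bits=8):
    """Simulate low-bit execution: round to the integer grid the target hardware
    would use, then dequantize so downstream layers see the quantization error."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(t).max() / qmax
    if scale == 0:
        return t
    return np.clip(np.round(t / scale), -qmax - 1, qmax) * scale

# Quick accuracy-drop check for one toy layer at INT8 vs INT4 (illustrative values).
rng = np.random.default_rng(1)
W, x = rng.standard_normal((64, 128)) * 0.1, rng.standard_normal(128)
ref = W @ x
for bits in (8, 4):
    err = np.abs(fake_quantize(W, bits) @ fake_quantize(x, bits) - ref).mean()
    print(f"INT{bits}: mean absolute error {err:.4f} (ref magnitude ~{np.abs(ref).mean():.2f})")
```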

Why this matters for IT and development teams

  • Architect for on-device first: assume intermittent or no backhaul. Optimize models and UX for local operation.
  • Prioritize INT8/INT4 pipelines: bake quantization into training, validate accuracy on target bit-widths, and plan for per-layer calibration.
  • Design for thermal ceilings: sustained performance beats peak benchmarks in vehicles. Latency consistency matters for safety functions.
  • Right-size LLMs: use distilled, domain-tuned models with streaming attention and efficient KV cache strategies to fit memory budgets (see the sizing sketch after this list).
  • Model governance: bring ISO 26262/AUTOSAR-aligned testing, traceability, and reproducibility into your MLOps. Determinism and fault handling are not optional.
  • Sensor fusion stack: keep perception and planning modular. Expect higher-rate, multi-camera + radar + lidar pipelines as compute headroom grows.
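
On the KV-cache point, a quick footprint estimate makes the memory-budget trade-off concrete. The model dimensions below are hypothetical, chosen only to show the arithmetic; plug in your own architecture and context length.

```python
def kv_cache_bytes(layers, heads, head_dim, context_len, dtype_bytes=2, batch=1):
    """Rough KV-cache footprint: one K and one V tensor per layer, per token."""
    return 2 * layers * heads * head_dim * context_len * dtype_bytes * batch

# Hypothetical distilled in-car assistant (illustrative dimensions, not a real model).
model = dict(layers=24, heads=16, head_dim=64, context_len=4096)

for dtype_bytes, label in ((2, "fp16"), (1, "int8")):
    mib = kv_cache_bytes(**model, dtype_bytes=dtype_bytes) / 2**20
    print(f"{label} KV cache at 4k context: {mib:.0f} MiB")
```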

Timeline and what to expect next

Prototype chips are expected to be tested in vehicles in the next three to five years, with production following successful trials in the late 2020s to early 2030s. That puts real deployment within typical OEM program cycles for next-gen SDVs.

For teams building ADAS, autonomy, and in-car assistants, now is the time to validate low-bit models, refine edge-first inference paths, and plan for analog-friendly optimization. The winners will have portable, quantization-ready models and deterministic pipelines that drop cleanly onto new silicon.

Level up your team

If you're formalizing an edge-AI upskilling path for developers and MLEs, see our curated tracks by role and focus area: AI courses by job.

