Inside GM's tech shake-up: unified AI and software, new CPO, L2+/L3 roadmap
General Motors is bringing AI, software, and global product under one roof, led by a single chief product officer. For product teams, the signal is clear: fewer handoffs, tighter platform focus, and faster delivery across hardware and software.
The company cited speed and integration as the core reasons. A centralized computing platform and a target for L2+ and L3 driver assistance by 2028 set the execution bar and the timeline.
The org pattern: why it matters for product leaders
Unifying AI, software, and product around one CPO is a move from project silos to a platform business. It aligns roadmaps, budgets, and decisions around a single product system instead of competing priorities.
- One backlog, one platform: Fewer dependencies, clearer sequencing, and measurable cycle-time improvements.
- End-to-end ownership: From vehicle compute to cloud services to OTA, the same leadership stack sets standards and ships.
- Safety and compliance baked in: Product, engineering, and AI share the same acceptance criteria for feature gates and releases.
Leadership moves and what they signal
Sterling Anderson steps in as CPO with a wide remit: vehicle development, manufacturing, battery programs, and all software. This consolidates decision rights where product trade-offs actually happen.
Exits include Baris Chetinok (software and services product), Dave Richardson (engineering), and Barak Turovsky (AI). New arrivals include Christian Mori leading robotics, plus autonomy talent from Apple and Cruise, pointing to deeper robotics integration and applied AI in the stack.
Platform and autonomy: translating the 2028 target
GM's plan centers on a centralized compute platform and L2+/L3 capabilities by 2028. For product teams, this dictates architecture, sequencing, and risk posture for the next 12-24 months.
- Operational design domain (ODD): Define and lock target scenarios; tie scope to compute, sensor, and map budgets.
- Driver monitoring and HMI: L2+/L3 handoff quality is a product problem; treat it like a critical user journey with safety gates.
- Redundancy and fail-operational paths: Clear requirements for brakes, steering, power, and data persistence.
- Data engine: Telemetry, labeling, evaluation pipelines, and OTA updates as first-class platform services.
- Regulatory evidence: Build the safety case alongside the feature set, not after it.
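One lightweight way to make these gates concrete is a release checklist evaluated in CI, so a feature cannot ship ahead of its evidence. A minimal sketch in Python; the criteria, field names, and thresholds below are hypothetical illustrations, not GM's actual process:

```python
from dataclasses import dataclass

@dataclass
class GateEvidence:
    """Evidence collected for one ODD-scoped feature release (hypothetical fields)."""
    odd_scenarios_validated: float    # fraction of locked ODD scenarios passing validation
    dms_alert_precision: float        # driver-monitoring alert precision on the eval set
    safety_case_sections_done: int    # completed sections of the safety case
    safety_case_sections_total: int

def release_gate(e: GateEvidence) -> bool:
    """Feature ships only when every gate passes; thresholds are illustrative."""
    return (
        e.odd_scenarios_validated >= 0.99
        and e.dms_alert_precision >= 0.95
        and e.safety_case_sections_done == e.safety_case_sections_total
    )

# An incomplete safety case blocks the release regardless of other metrics.
print(release_gate(GateEvidence(0.995, 0.97, 12, 12)))  # True
print(release_gate(GateEvidence(0.995, 0.97, 11, 12)))  # False
```

The point of the sketch is the shape: every gate is a measurable condition tied to evidence, and "done" means all of them hold at once.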
If you need a crisp reference for automation levels, see SAE's J3016 taxonomy. For context on GM's current driver-assistance capabilities, review the Super Cruise documentation.
Execution playbook you can borrow
- Unify the roadmap: Merge AI, software, hardware, and manufacturing milestones into one plan with a single definition of done.
- Platform first: Carve out a central compute and data platform team. Productize internal APIs and SLOs. Publish a deprecation schedule.
- Gate by safety and readiness: Establish feature gates tied to safety cases, ODD validation, and driver monitoring quality.
- Phase releases: ODD-limited launches, then expand regions and conditions with evidence. Treat each expansion as a product release.
- Supplier strategy: Reduce unique parts and firmware variants. Demand API-level contracts and telemetry access.
- Org design: Keep a lean platform core and federate feature teams. Avoid rebuilding the platform in every program.
- Tooling: Invest in simulation, data labeling ops, CI/CD for embedded, and OTA observability early.
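The "phase releases" step above can be expressed as data plus a promotion rule: each ODD expansion is a candidate release that promotes only when field evidence from the current phase clears a bar. A hedged sketch; the phases, mileage minimums, and disengagement thresholds are invented for illustration:

```python
# Each phase widens the ODD only after the prior phase's field evidence clears the bar.
PHASES = [
    {"name": "divided highway, daytime, dry", "min_miles": 1_000_000, "max_disengage_per_100k_mi": 2.0},
    {"name": "add night driving",             "min_miles": 2_000_000, "max_disengage_per_100k_mi": 1.0},
    {"name": "add light rain",                "min_miles": 5_000_000, "max_disengage_per_100k_mi": 0.5},
]

def can_promote(phase_idx: int, miles: float, disengagements: int) -> bool:
    """Promote to the next ODD phase only with enough exposure and a low-enough disengagement rate."""
    bar = PHASES[phase_idx]
    if miles < bar["min_miles"]:
        return False  # not enough field exposure yet
    rate_per_100k = disengagements / miles * 100_000
    return rate_per_100k <= bar["max_disengage_per_100k_mi"]

print(can_promote(0, 1_200_000, 20))  # 1.67 disengagements per 100k mi -> True
```

Note that later phases demand both more miles and a lower disengagement rate: the evidence bar rises as conditions get harder, which is what "expand with evidence" means in practice.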
Risks to watch, and how to hedge
- Talent churn: Pair leadership transitions with clear charters and 90-day operating models. Keep velocity metrics public.
- Over-centralization: Platform teams own standards; feature teams own outcomes. Audit for shadow platforms quarterly.
- Timeline slip (2028): Lock the ODD and MVP stack now. Timebox experiments. Ruthlessly descope non-critical work.
- Supplier lock-in: Dual-source critical components. Keep portability in the interface, not the implementation.
- Brand and safety: Use transparent release notes, driver education, and conservative default behaviors.
Metrics that matter
- Lead time: idea-to-prod and change-to-prod for both cloud and embedded.
- Platform SLOs: compute uptime, OTA success rate, and rollback MTTR.
- ADAS quality: miles between disengagements and false positives/negatives by scenario.
- Certification throughput: time to evidence for each ODD expansion.
- Org health: hiring velocity, internal NPS, and dependency wait time.
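Several of these metrics roll up directly from raw event logs. A minimal sketch of two of them, OTA success rate and rollback MTTR, computed from hypothetical event records (the data shapes here are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical OTA campaign events: (vehicle_id, outcome)
ota_events = [("v1", "ok"), ("v2", "ok"), ("v3", "rollback"), ("v4", "ok")]
ota_success_rate = sum(1 for _, outcome in ota_events if outcome == "ok") / len(ota_events)

# Hypothetical rollback incidents: (detected_at, recovered_at)
rollbacks = [
    (datetime(2028, 1, 5, 9, 0), datetime(2028, 1, 5, 9, 40)),
    (datetime(2028, 2, 1, 14, 0), datetime(2028, 2, 1, 14, 20)),
]
# Mean time to recovery: average of (recovered - detected) across incidents.
mttr = sum((end - start for start, end in rollbacks), timedelta()) / len(rollbacks)

print(f"OTA success rate: {ota_success_rate:.0%}")  # 75%
print(f"Rollback MTTR: {mttr}")                     # 0:30:00
```

The design choice worth copying is that every metric is derived from timestamped events the platform already emits, so the dashboard is a query, not a separate reporting process.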
Bottom line: GM is centralizing decision rights, platformizing the stack, and setting a public autonomy target. If you run product in a complex org, the model is clear: own the platform, narrow the ODD, ship in phases, and make safety criteria part of the product, not paperwork.