Waymo is testing Gemini as an in-car ride assistant
Waymo appears to be experimenting with integrating Google's Gemini into its robotaxis as a rider-facing assistant. Code discovered in Waymo's mobile app points to a detailed "Ride Assistant" system prompt that defines behavior, tone, and guardrails inside the vehicle. The feature hasn't shipped publicly.
Waymo confirmed it's exploring features like this. "While we have no details to share today, our team is always tinkering with features to make riding with Waymo delightful, seamless, and useful," said Julia Ilina, a Waymo spokesperson. "Some of these may or may not come to our rider experience."
What the assistant is built to do
- Answer rider questions in clear, simple language (1-3 sentences).
- Use pre-approved, personalized greetings and basic rider context (e.g., trip history count).
- Control select in-cabin features: temperature, lighting, and music.
- Offer reassurance if a rider is anxious or confused.
- Handle general knowledge queries: weather, store hours, basic facts.
The assistant's identity is kept separate from the autonomous driving stack. It refers to the vehicle's driving system as "the Waymo Driver," not "I."
What it won't do
- No control over volume, route changes, seat adjustment, or windows.
- No real-world actions like ordering food, making reservations, or handling emergencies.
- No commentary on real-time driving behavior, incidents, or system performance.
- No speculation about competitors or sensitive topics; responses deflect without a defensive tone.
If a rider requests an unapproved action, the assistant replies with an "aspirational" response such as, "It's not something I can do yet."
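The scope pattern described above, a fixed allowlist of cabin controls plus a polite deflection for everything else, can be sketched in a few lines. This is an illustrative sketch only; the action names and dispatch shape are assumptions, not Waymo's actual implementation.

```python
# Hypothetical sketch: allowlisted in-cabin actions with an
# "aspirational" fallback for out-of-scope requests.

ALLOWED_ACTIONS = {"set_temperature", "set_lighting", "play_music"}

ASPIRATIONAL_REPLY = "It's not something I can do yet."

def handle_request(action: str, params: dict) -> str:
    """Dispatch a rider request, refusing anything off the allowlist."""
    if action not in ALLOWED_ACTIONS:
        # Out-of-scope asks (volume, route changes, windows, real-world
        # tasks) get a calm deflection instead of an error message.
        return ASPIRATIONAL_REPLY
    # In a real system this would call the vehicle's cabin-control API.
    return f"Okay, handling {action} with {params}."
```

The key design choice is that refusal is the default path: anything not explicitly approved falls through to the same non-defensive reply.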
Why this matters for product, IT, and development teams
- Scope discipline: The prompt shows tight function boundaries, making safety reviews and QA far easier.
- Identity separation: Clear lines between the AI assistant and the driving system reduce confusion and legal risk.
- Context control: Limited personalization (like trip count) without personal assistant overreach is a practical privacy stance.
- Interaction design: Short, calm replies and pre-approved greetings create predictable UX under stress.
- Safety by design: No real-time driving commentary and strict no-action rules for emergencies prevent misinterpretation.
What this signals
Gemini has already been used inside Waymo's stack for model training on rare or high-stakes situations, according to the company. Bringing Gemini into the cabin shifts AI from "how the car drives" to "how the ride feels." It's pragmatic: focused on comfort, clarity, and small conveniences rather than entertainment or long conversations.
Tesla is developing a similar concept with Grok, but the positioning differs. Waymo's assistant reads as ride-focused and boundaries-first; Grok is pitched more like a chatty in-car companion.
Practical takeaways for teams building in-car assistants
- Start with narrow control surfaces (climate, lighting, media) and expand only with clear safety cases.
- Codify tone, length, and "do-not-answer" topics in system prompts and logs for audits.
- Use "aspirational" responses for out-of-scope asks to set expectations without frustration.
- Keep the assistant's identity separate from any safety-critical system to avoid liability and UX confusion.
- Pre-approve greetings and scripts for high-stress moments to reduce cognitive load.
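One way to apply the second takeaway is to treat tone, length limits, and blocked topics as versioned data rather than free-form prompt text, so the policy can be diffed, logged, and audited. A minimal sketch, assuming hypothetical field names and values:

```python
# Sketch of codifying assistant policy as structured, versioned data.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class AssistantPolicy:
    version: str
    tone: str = "calm, simple, reassuring"
    max_sentences: int = 3
    blocked_topics: tuple = ("driving incidents", "competitors", "system performance")

    def to_system_prompt(self) -> str:
        """Render the policy into a system prompt, tagged for audit logs."""
        blocked = "; ".join(self.blocked_topics)
        return (
            f"[policy {self.version}] Respond in a {self.tone} tone, "
            f"at most {self.max_sentences} sentences. "
            f"Deflect questions about: {blocked}."
        )

policy = AssistantPolicy(version="rider-v1")
prompt = policy.to_system_prompt()
```

Because the policy is frozen and carries a version string, every logged transcript can record exactly which guardrail set produced a given reply.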
Where this could go next
- Deeper personalization (within privacy limits) such as preferred cabin presets.
- Richer city context: events, drop-off tips, and ETA-friendly guidance, without editing the route.
- Optional voice input with clear wake words and strict interruption rules.
If you're exploring similar assistants, Waymo's public materials on its Driver and safety approach are useful context. See the technology overview on Waymo's site.