Google's AI Execution Shift: What Product Leaders Can Learn from Josh Woodward
After ChatGPT launched in November 2022, Google looked slow on products despite leading in research. This year flipped the script. After the image model "Nano Banana" took off in August, Gemini 3 arrived with benchmarks and demos reported to beat top-tier models; some coverage even compared it favorably to "GPT-5."
The catalyst behind the momentum: Josh Woodward, 42, a vice president at Google. He took over Gemini in April, and monthly users of the Gemini apps climbed from 350 million to 650 million in the months that followed.
The Operator Behind the Push: Josh Woodward
Woodward previously ran Google Labs, the experimental arm. In 2022 he greenlit NotebookLM, an AI tool that summarizes and analyzes user-uploaded documents and links, and recruited author Steven Johnson from outside the company to co-develop the product. Instead of the usual internal dogfooding, the team ran feedback loops on Discord and shipped fast. NotebookLM launched in July 2023 and found traction.
CNBC noted that Woodward runs two divisions while tackling a dual mandate: limit AI's harms and still outrun OpenAI and Anthropic. That's a tough balance. It requires clear product judgment and a bias for shipping.
Woodward's Operating System for a Big Org
- Kill friction fast: He set up "Block," a system where anyone can post obstacles. Leaders clear them visibly so work keeps moving.
- Secure resources early: For NotebookLM, he aggressively locked down more TPUs to avoid compute bottlenecks.
- Feedback in public: He and the team answered users directly on X and Reddit, turning support into product discovery.
- Build with outsiders: Bringing in a non-Googler co-creator shaped a clearer product narrative and faster iteration.
What Product Leaders Can Steal
- Shorten the loop: Use open communities (Discord, Reddit) for weekly feedback. Internal-only testing hides problems.
- Make blockers public: Stand up a company-wide "Block" doc. Every blocker needs an owner and a deadline. No silent stalls. (A minimal tracking sketch follows this list.)
- Pre-commit capacity: Treat compute like a product dependency. Budget it upfront so teams don't idle mid-sprint.
- Executive time on users: Leaders answer 10 user questions per week. You'll spot patterns early and set the tone.
- Cross-functional pods: Model, safety, design, and PM ship as one unit. Weekly demos. No handoffs.
- Ship smaller, more often: Measurable progress beats big-bang reveals.
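To make the "Block" doc concrete, here is a minimal Python sketch of a shared blocker log where every item carries an owner and a deadline. The schema, field names, and `overdue` helper are illustrative assumptions, not Woodward's or Google's actual system:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical schema for a shared "Block" log; field names are illustrative,
# not Google's actual system.
@dataclass
class Blocker:
    title: str
    owner: str       # every blocker needs a named owner
    deadline: date   # and a date by which it should be cleared
    resolved: bool = False

def overdue(blockers: list[Blocker], today: Optional[date] = None) -> list[Blocker]:
    """Return unresolved blockers past their deadline, for the weekly review."""
    today = today or date.today()
    return [b for b in blockers if not b.resolved and b.deadline < today]

# Example: surface silent stalls in a leadership review.
log = [
    Blocker("TPU quota approval", owner="infra-lead", deadline=date(2025, 1, 10)),
    Blocker("Legal review of beta terms", owner="counsel", deadline=date(2025, 2, 1), resolved=True),
]
for b in overdue(log, today=date(2025, 1, 20)):
    print(f"OVERDUE: {b.title} (owner: {b.owner}, due {b.deadline})")
```

A spreadsheet with the same three columns works just as well; the point is that nothing stalls without a name and a date attached.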
Speed vs. Safety
Moving fast with AI comes with risk. Woodward's challenge mirrors your own: reduce harm while pushing past competitors. Align launches with clear safety checks and public commitments, like Google's AI Principles, so your org can move quickly without flying blind.
Benchmarks, Hype, and What to Actually Track
Gemini 3 drew attention for strong benchmarks and claims of beating advanced models. That matters, but product decisions should ride on practical metrics: capability, cost, and control. (A minimal measurement sketch follows the checklist below.)
- Latency: p95 under load, not just p50 in demos.
- Cost per request: Real unit economics by feature, not blended averages.
- Reliability: Tool-use success rate, function-call accuracy, and failure modes.
- Context + modality: Max context, multimodal round-trip time, and grounding quality.
- Safety frictions: False positives/negatives and escalation paths.
- Integration friction: SDK stability, rate limits, and monitoring hooks.
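As a starting point for the latency and cost items above, here is a minimal sketch that computes p95 latency and per-feature unit cost from your own request logs. The log format, feature names, and cost figures are assumptions for illustration, not any vendor's API:

```python
import statistics
from collections import defaultdict

# Hypothetical request log: (feature, latency_ms, cost_usd). Replace with your
# real tracing or billing export.
calls = [
    ("summarize", 820, 0.004),
    ("summarize", 2950, 0.006),   # slow outlier a p50-only view would downplay
    ("extract",   410, 0.001),
    ("extract",   530, 0.001),
]

by_feature = defaultdict(lambda: {"latencies": [], "cost": 0.0, "n": 0})
for feature, latency_ms, cost in calls:
    bucket = by_feature[feature]
    bucket["latencies"].append(latency_ms)
    bucket["cost"] += cost
    bucket["n"] += 1

for feature, bucket in by_feature.items():
    # quantiles(n=20)[18] approximates the 95th percentile of observed latencies.
    p95 = max(bucket["latencies"]) if bucket["n"] < 2 else statistics.quantiles(bucket["latencies"], n=20)[18]
    print(f"{feature}: p95={p95:.0f}ms, cost/request=${bucket['cost'] / bucket['n']:.4f}")
```

Swap the hard-coded list for your real logs and report the numbers per feature, not as a blended average.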
30-Day Plan to Apply This Inside Your Team
- Week 1: Pick one workflow with high support volume or revenue impact. Define a single success metric. Stand up a "Block" doc with owners.
- Week 2: Launch a closed beta community on Discord. Ship the smallest useful version. Start a weekly public changelog.
- Week 3: Pre-allocate compute and budget for the next 60 days. Add evals that mirror your top 20 user tasks (see the eval sketch after this list).
- Week 4: Execs answer user questions on X/Reddit. Review blocker close rates, not just sprint points. Decide go/no-go on a wider rollout.
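For the Week 3 evals, a lightweight harness is enough to start: pair each top user task with a cheap automatic check and track the pass rate weekly. A minimal sketch, assuming a placeholder `call_model` function you would wire to your actual client; the tasks and checks are illustrative:

```python
from typing import Callable

def call_model(prompt: str) -> str:
    # Placeholder: wire this to your actual model client or SDK.
    return "Refunds are processed within 5-7 business days."

# Each eval pairs a real user task with a cheap, automatic pass/fail check.
EVALS: list[tuple[str, Callable[[str], bool]]] = [
    ("How long do refunds take?", lambda out: "business days" in out.lower()),
    ("Summarize our returns policy in one sentence.", lambda out: 0 < len(out.split(".")) <= 2),
]

def run_evals() -> float:
    """Run every eval once and return the pass rate."""
    passed = 0
    for prompt, check in EVALS:
        output = call_model(prompt)
        ok = check(output)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {prompt}")
    return passed / len(EVALS)

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")
```

Twenty such checks run in minutes, which is fast enough to gate the Week 4 go/no-go decision on real numbers.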
Why This Works
It turns a giant org into a builder's shop. Transparent blockers, public feedback, and early resource bets create momentum you can feel. As Claire Babar put it, Woodward saw the product potential of large language models early and understood where they were heading-and how they'd be used.
If your team needs structured upskilling to execute on this, explore focused tracks by role at Complete AI Training.