AI Strategy: Why Nvidia's CEO Wants "Every Task" Automated
Nvidia's CEO Jensen Huang is clear: automate every task that can be automated with AI. He even called managers who tell teams to use less AI "insane."
This wasn't theory. Huang said it the day after Nvidia posted record results, reinforcing a simple point to his own organization: AI is a work engine, not a side project. "I want every task that is possible to be automated with AI to be automated with AI."
And he's not worried about work disappearing. His stance: "I promise you, you will have work to do."
Automation-first, hiring more
Nvidia's workforce grew from 29,600 at the end of fiscal 2024 to 36,000 at the end of fiscal 2025. The company still sees itself as roughly 10,000 people short, even as it pushes automation across functions. Offices are expanding in Taipei and Shanghai, with two more sites under construction in the US.
Internally, engineers use Cursor, an AI coding assistant adopted widely across tech. The directive from Huang is blunt: "If AI does not work for a specific task, use it until it does." That means push the tools, improve the workflows, and close the gap through use.
Nvidia also reported US$57.01bn in revenue in the last quarter, up 62% year over year, with a market cap above US$4 trillion. For context on earnings, see Nvidia's investor relations page: investor.nvidia.com. For Cursor, see: cursor.sh.
What this signals to executive teams
- Set an automation-first policy: if a task is repeatable and digital, assume AI handles it.
- Make persistence a norm: don't pause when tools fail; iterate until they work.
- Scale people and AI together: headcount growth plus automation increases throughput, not just efficiency.
- Standardize the stack: pick core AI tools for code, content, analytics, and workflow, then enforce usage.
- Measure adoption, not presentations: usage, speed, quality, and cost per unit of work become primary KPIs.
Implementation playbook (next 90 days)
- Days 0-30: Inventory tasks across engineering, operations, finance, support, GTM. Prioritize by volume and time spent (a scoring sketch follows this list). Select 5-10 high-frequency workflows for pilots (code reviews, reporting, ticket triage, QA checks, weekly summaries).
- Days 30-60: Standardize tools (e.g., coding assistants, meeting summarizers, retrieval-augmented generation (RAG) for knowledge search). Create process templates and prompts. Add basic guardrails: data classification, access control, and review steps.
- Days 60-90: Roll out across teams. Tie AI usage to performance reviews. Publish weekly leaderboards and baseline metrics (cycle time, error rates, unit cost). Start a small "AI reliability" squad to fix broken automations within 48 hours.
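As a minimal sketch of the Days 0-30 prioritization step, assuming a simple volume-times-time score and an illustrative task inventory (the field names, weights, and numbers below are hypothetical, not a prescribed method):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    runs_per_week: int      # how often the task occurs
    minutes_per_run: float  # average hands-on time per run
    digital: bool           # only repeatable, digital tasks are automation candidates

def automation_score(task: Task) -> float:
    """Rough priority: weekly hours of repeatable, digital work."""
    if not task.digital:
        return 0.0
    return task.runs_per_week * task.minutes_per_run / 60.0

# Illustrative inventory; replace with your own task audit.
inventory = [
    Task("code reviews", runs_per_week=120, minutes_per_run=20, digital=True),
    Task("ticket triage", runs_per_week=300, minutes_per_run=5, digital=True),
    Task("weekly summaries", runs_per_week=15, minutes_per_run=45, digital=True),
    Task("vendor negotiations", runs_per_week=4, minutes_per_run=90, digital=False),
]

pilots = sorted(inventory, key=automation_score, reverse=True)[:10]
for t in pilots:
    print(f"{t.name}: ~{automation_score(t):.0f} hours/week of automatable work")
```

The score only ranks candidates by hours recovered; any comparable weighting works as long as it is applied consistently across functions.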
Metrics that matter
- % of priority workflows with AI in the loop
- Cycle time reduction per workflow (hours to minutes)
- Quality outcomes (bug rates, rework, CSAT)
- AI usage per employee (daily/weekly active, tasks run)
- Cost per task vs. baseline (compute + tool spend)
- Time-to-production for new automations
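A minimal sketch of how a few of these metrics could be computed against a pre-AI baseline; the function names and example numbers are assumptions for illustration, not a standard reporting schema:

```python
def cycle_time_reduction(baseline_hours: float, current_hours: float) -> float:
    """Fractional reduction in cycle time versus the pre-AI baseline."""
    return (baseline_hours - current_hours) / baseline_hours

def cost_per_task(compute_spend: float, tool_spend: float, tasks_run: int) -> float:
    """Fully loaded AI cost per completed task (compute + tool spend)."""
    return (compute_spend + tool_spend) / tasks_run

def adoption_rate(weekly_active_users: int, headcount: int) -> float:
    """Share of employees actively using the AI stack in a given week."""
    return weekly_active_users / headcount

# Example: a reporting workflow that went from 6 hours to 45 minutes.
print(f"Cycle time reduction: {cycle_time_reduction(6.0, 0.75):.0%}")   # 88%
print(f"Cost per task: ${cost_per_task(1200, 300, 5000):.2f}")          # $0.30
print(f"Adoption: {adoption_rate(850, 1000):.0%}")                      # 85%
```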
Governance without the brakes
- Data policy: classify data; restrict sensitive data to approved tools; log prompts and outputs where needed.
- Human-in-the-loop: require review for code merges, financial outputs, legal content, and customer communications.
- Model controls: set limits on where generative outputs are used directly vs. as drafts.
- Vendor checks: basic security posture, SOC 2 where applicable, clear data retention terms.
- Audit trail: keep change history for AI-assisted actions in critical systems.
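One way these controls could be encoded; the data tiers, tool names, and log fields below are hypothetical examples of the policy above, not a reference implementation:

```python
import json
import time
from typing import Optional

# Hypothetical data classification: which tools may touch which tiers.
POLICY = {
    "public":       ["coding_assistant", "meeting_summarizer", "rag_search"],
    "internal":     ["coding_assistant", "rag_search"],
    "confidential": [],  # sensitive data stays out of generative tools entirely
}

def tool_allowed(data_tier: str, tool: str) -> bool:
    """Check a tool against the approved list for a data tier."""
    return tool in POLICY.get(data_tier, [])

def log_ai_action(actor: str, tool: str, data_tier: str,
                  reviewed_by: Optional[str] = None) -> str:
    """Append-only audit record for AI-assisted changes in critical systems."""
    record = {
        "ts": time.time(),
        "actor": actor,
        "tool": tool,
        "data_tier": data_tier,
        "human_review": reviewed_by,  # required for merges, finance, legal, customer comms
        "allowed": tool_allowed(data_tier, tool),
    }
    return json.dumps(record)

print(log_ai_action("alice", "coding_assistant", "internal", reviewed_by="bob"))
```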
Talent strategy: automate work, hire for throughput
Nvidia's approach shows a pattern: expand headcount and let AI remove friction. The result is more output per person and faster cycle times. Hiring pace should match integration capacity, not just budget.
- Define role skills with explicit AI usage expectations (coding assistants, analytics co-pilots, documentation tools).
- Add AI proficiency to performance criteria. Reward measurable adoption and shipped outcomes.
- Centralize enablement: 1-2 day onboarding to the tool stack, approved prompts, and playbooks per function.
Sector movement
Microsoft and Meta plan to evaluate employees on AI usage. Google has told engineers to use AI for coding. Amazon has explored adopting Cursor after internal demand. This isn't a side trend; it's becoming policy.
What to do this week
- Publish an automation-first memo and name a single owner for adoption per function.
- Pick three workflows and ship version 1 automations in two weeks.
- Stand up a weekly AI ops review: usage, wins, failures, fixes (a tracking sketch follows this list).
- Lock a standard tool stack and access model; remove tool sprawl.
- Set targets: 30% cycle time reduction in 90 days on the first wave.
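A minimal sketch of that weekly AI ops review, checking each first-wave workflow against the 30% cycle-time target; the workflow names and figures are placeholders:

```python
# Each entry: workflow -> (baseline hours, current hours, open failures this week)
week_review = {
    "code reviews":     (4.0, 2.5, 1),
    "ticket triage":    (0.5, 0.2, 0),
    "weekly summaries": (3.0, 2.8, 2),
}

TARGET = 0.30  # 30% cycle time reduction in 90 days on the first wave

for name, (baseline, current, failures) in week_review.items():
    reduction = (baseline - current) / baseline
    status = "on track" if reduction >= TARGET else "needs fixes"
    print(f"{name}: {reduction:.0%} reduction, {failures} open failures -> {status}")
```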
If your teams need structured upskilling paths by role, see our resources here: AI courses by job.
Huang's directive is simple and useful: "If AI does not work for a specific task, use it until it does." Treat that as a systems rule, not a quote.