"Holy crap. The end of me." What Eric Schmidt's AI warning means for product development
Eric Schmidt watched an AI generate a complete program and said what many builders feel: "Holy crap. The end of me." After 55 years of programming, he saw a full loop close in front of him: start to finish, human to machine.
He added another reality check: at top labs, AI already writes 10-20% of code. And he believes artificial general intelligence could arrive in three to five years. That's not a headline. That's a product timeline.
But his bigger point wasn't about code. It was about operations. Billing, accounting, delivery, inventory, even parts of product design: these quiet, expensive functions are moving to automation fast.
Key signals product leaders should track
- AI is already a material contributor to software output (10-20% at leading labs), and that curve is steepening.
- Greatest near-term value sits in automating back-office and internal ops: billing, accounting, delivery, inventory, and product workflows.
- Recursive self-improvement is emerging: systems that plan and learn with less human instruction.
- AGI in 3-5 years is Schmidt's call. Whether you agree or not, you need a plan that doesn't get blindsided if he's right.
- Oversight matters: "There's no higher duty than to preserve human agency and human freedom." Build guardrails now, not after a headline.
Your 90-day plan
- Map work: catalog recurring processes across product, engineering, support, finance, and ops. Score by volume, rules clarity, risk, and expected ROI.
- Target 3 pilots: one code-gen use case, one back-office automation (billing or accounting), and one customer-facing assist (support or onboarding).
- Stand up an evaluation harness: define success metrics (latency, accuracy, deflection rate, cost per task, error severity) and test weekly.
- Data first: connect clean, permissioned data sources. No PII or secrets in prompts without masking and policy.
- Human-in-the-loop: require review for high-risk actions (payments, PII, irreversible writes). Log every decision.
- Cost model: track tokens, inference minutes, and human review time. Compare to current fully loaded costs.
- Security and compliance: adopt a basic policy aligned to the NIST AI Risk Management Framework (NIST AI RMF).
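The evaluation harness in the plan above can start very small. A minimal sketch, assuming a hand-built gold set and per-task records (the `EvalResult` fields mirror the success metrics named above; names are illustrative, not a real framework):

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    task_id: str
    correct: bool        # matched the gold answer
    latency_ms: float
    cost_usd: float      # tokens + infra for this task

def summarize(results):
    """Aggregate a weekly eval run into the tracked metrics."""
    n = len(results)
    return {
        "accuracy": sum(r.correct for r in results) / n,
        "p50_latency_ms": sorted(r.latency_ms for r in results)[n // 2],
        "cost_per_task_usd": sum(r.cost_usd for r in results) / n,
    }

results = [
    EvalResult("t1", True, 420.0, 0.003),
    EvalResult("t2", False, 510.0, 0.004),
    EvalResult("t3", True, 390.0, 0.002),
]
print(summarize(results))
```

Run this weekly against the same gold set and chart the three numbers; regressions show up before customers see them.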
Where to automate first (Schmidt's hint: operations)
- Billing and accounting: invoice parsing, reconciliation, collections emails, variance checks.
- Delivery and inventory: demand forecasting drafts, reorder suggestions, exception summarization, supplier comms.
- Product and design: brief generation, spec drafting, UX copy variants, experiment ideas with guardrails.
- Engineering: unit test generation, code review suggestions, doc updates, migration scaffolds.
- Support: intent routing, answer drafts from your knowledge base, QA of responses, post-call summaries.
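Of the billing items above, variance checks are a good first pilot because the rule is crisp and the failure mode is a flag, not a payment. A minimal sketch, assuming a 2% tolerance policy (the threshold is an assumption to tune per contract):

```python
def variance_flagged(invoice_total, expected_total, tolerance=0.02):
    """Flag an invoice whose total deviates from the PO/contract
    amount by more than the tolerance (2% is an assumed policy)."""
    if expected_total == 0:
        return invoice_total != 0
    return abs(invoice_total - expected_total) / expected_total > tolerance

print(variance_flagged(1030.0, 1000.0))  # 3% over: flagged for review
```

Flagged invoices route to a human; clean ones reconcile automatically.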
Team and roles to stand up
- AI product owner: accountable for business outcomes and risk acceptance.
- Platform and data: builds secure connectors, feature stores, retrieval pipelines.
- Evaluation and safety: owns test sets, red-teaming, drift detection, and incident response.
- Applied engineers: integrate models, optimize prompts, ship features with clear KPIs.
Metrics that matter
- Cycle time: spec to shipped experiment, PR open-to-merge, ticket resolution.
- Quality: accuracy vs. gold sets, escaped defects, customer effort score.
- Unit economics: cost per action (tokens + infra + review) vs. baseline cost.
- Adoption: % workflows assisted, assist acceptance rate, human override rate.
Guardrails before scale
- Policy: what data models can access, where outputs can go, who approves changes.
- Review gates: required human approval on money movement, data deletion, or external messaging.
- Observability: prompt/output logging, feature flags, rollbacks, kill switch.
- IP and privacy: mask secrets, watermark sensitive outputs, vendor DPAs, regional routing.
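The review-gate and kill-switch guardrails above reduce to a small policy function that sits in front of every tool call. A minimal sketch (the action names and return values are illustrative, not a real framework):

```python
# High-risk actions that always require a named human approver (assumed policy)
HIGH_RISK = {"payment", "data_deletion", "external_message"}

def gate(action_type, kill_switch_on=False, approved_by=None):
    """Decide whether an agent action may proceed.
    The kill switch blocks everything; high-risk actions
    need an explicit human approver recorded for the log."""
    if kill_switch_on:
        return "blocked"
    if action_type in HIGH_RISK and approved_by is None:
        return "needs_approval"
    return "allowed"

print(gate("payment"))                      # routed to a human
print(gate("payment", approved_by="alice")) # proceeds, approver logged
print(gate("summary", kill_switch_on=True)) # everything halts
```

Logging the decision (and the approver) at this choke point gives you the audit trail the observability bullet calls for.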
What Schmidt's AGI timeline means for roadmaps
If AGI shows up by 2029, today's "assistive" features turn into autonomous workflows fast. He claims systems are learning to plan with less human instruction and already contribute meaningfully to code. Treat that as a design constraint: build products that supervise machines, not just use them.
That changes prioritization. Tools that compress cycle time, reduce operational drag, and increase certainty will outpace feature-heavy roadmaps with manual overhead baked in.
Practical product calls you can make this week
- Rewrite one core workflow as "AI-first": agent drafts, human verifies, system logs, metrics tracked.
- Replace 20% of status reporting with auto-generated summaries and dashboards.
- Introduce eval gates in CI: block merges if AI-generated code fails strict tests or linting.
- Spin up a monthly safety review: incidents, hallucinations, bias checks, false positives/negatives.
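The CI eval gate above is just a strict merge policy your pipeline can evaluate from its test and lint results. A minimal sketch (thresholds are assumed policy, not a real CI API):

```python
def merge_allowed(tests_passed, tests_total, lint_errors, min_pass_rate=1.0):
    """Strict gate for AI-generated code: block the merge unless
    every test passes and linting is clean (assumed policy)."""
    if tests_total == 0:
        return False  # no tests means no evidence; block
    pass_rate = tests_passed / tests_total
    return pass_rate >= min_pass_rate and lint_errors == 0

print(merge_allowed(48, 50, 0))  # 96% pass rate still fails a strict gate
```

Start strict and loosen deliberately; a gate you ratchet down is safer than one you ratchet up after an incident.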
Skill focus for your team
- System prompting and retrieval design (context windows, grounding, feedback loops).
- Evaluation set design and error taxonomies.
- Latency and cost optimization (caching, routing, model selection, batching).
- Change management: training, documentation, and clear escalation paths.
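On the cost-optimization skill above, caching is usually the first win: identical prompts should never pay for inference twice. A minimal sketch, where `model_fn` stands in for any model client (an assumption, not a specific vendor API):

```python
import hashlib

_cache = {}

def cached_call(prompt, model_fn):
    """Serve repeat prompts from a local cache instead of
    re-running inference; keyed on a hash of the prompt."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = model_fn(prompt)
    return _cache[key]

# Stand-in model that counts how often it is actually invoked
calls = []
def fake_model(p):
    calls.append(p)
    return p.upper()

print(cached_call("hello", fake_model))
print(cached_call("hello", fake_model))
print(len(calls))  # second call served from cache, model ran once
```

In production you would add a TTL and an eviction policy, and skip the cache for prompts containing user-specific or sensitive data.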
Schmidt's punchline wasn't doom. It was direction. Software and operations are getting rewritten. Product teams that pick the right workflows, measure rigorously, and keep humans in charge will ship faster and safer than those waiting for the dust to settle.
Build your bench: explore a curated set of coding copilots and automations here: AI tools for generative code. For role-based upskilling paths, see courses by job.