Do more with AI now: 5 insights for founders from Microsoft CTO Kevin Scott

Microsoft CTO Kevin Scott says it's a rare window: ship small, test fast, and pick models that fit. Do the plumbing, run experiments, and build what people keep using.

Published on: Dec 19, 2025

This is a rare window for early-stage builders. Coding agents, bigger context windows, and cheaper experimentation have changed the pace of what small teams can ship. Entire categories are open for anyone willing to test fast and listen to real users.

In a fireside chat at South Park Commons in San Francisco, Microsoft CTO Kevin Scott put it plainly: "I know building things is hard. It's a grind. Just don't lose sight of how special this moment is." His message: ignore the noise, follow the signal, and build for people.

1) Focus on what's real, and seek frequent feedback

Strong opinions won't save a weak assumption. The job is to spot the forces already in motion (technical, economic, or behavioral) and align with them. Then stress-test those assumptions early, often, and with actual usage.

  • Ship tiny, on-purpose releases and watch what users do, not what they say.
  • Track sustained engagement over vanity metrics. Retention > first-week hype.
  • Let data kill your favorite ideas quickly so you can double down on what works.

2) Use the "capability overhang" - and be ready for the grind

Today's models can do more than most products pull from them. The gap between what's possible and what's built is wide. Closing it is less about genius and more about plumbing: integrations, evals, guardrails, UX, and iteration.

  • Invest in evals and tracing so you can see where the model fails and why.
  • Do the unglamorous work: data cleanup, prompt libraries, caching, and retries.
  • Automate the boring parts so you can spend cycles on outcomes that matter.
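The "ugly plumbing" above is mostly small, reusable pieces. As a minimal sketch (the names `with_retries` and `cached_prompt` are illustrative, not any particular library's API), here is what a retry wrapper with exponential backoff plus a cached model call might look like:

```python
import functools
import time

def with_retries(fn, max_attempts=3, base_delay=0.1):
    """Retry a flaky call (e.g. a model API) with exponential backoff."""
    def wrapper(*args, **kwargs):
        for attempt in range(max_attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == max_attempts - 1:
                    raise  # out of attempts: surface the error
                time.sleep(base_delay * 2 ** attempt)
    return wrapper

@functools.lru_cache(maxsize=1024)
def cached_prompt(prompt: str) -> str:
    # Stand-in for a real model call; identical prompts hit the cache.
    return f"response to: {prompt}"
```

None of this is glamorous, but it is exactly the kind of groundwork that lets a small team iterate quickly without burning tokens or babysitting transient failures.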

As Scott said: "Some of the things that you need to do to squeeze the capability out of these systems is just ugly-looking plumbing stuff... But you're in a startup, that's kind of your life. It's more about the grind."

3) Open vs. closed isn't a battle - it's a toolbox

This isn't ideology. It's selection. Use the best tool for the job: small, local models for cost or privacy; frontier models for reasoning or quality; hybrids when you need both.

  • Define constraints (cost, latency, data sensitivity) before you pick a model.
  • Prototype with multiple models; keep a thin abstraction so you can swap later.
  • Measure total cost: inference, infra, maintenance, and developer time.
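The "thin abstraction" in the second bullet can be very thin indeed. A hedged sketch (the `Model` protocol, the stub classes, and the `route` function are hypothetical, not a real framework): define one interface, wrap each provider behind it, and route by your constraints.

```python
from dataclasses import dataclass
from typing import Protocol

class Model(Protocol):
    """Minimal interface every backend must satisfy."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class LocalModel:
    name: str = "small-local"
    def complete(self, prompt: str) -> str:
        # Stub: a real adapter would invoke an on-device model here.
        return f"[{self.name}] {prompt}"

@dataclass
class FrontierModel:
    name: str = "frontier-api"
    def complete(self, prompt: str) -> str:
        # Stub: a real adapter would call a hosted API here.
        return f"[{self.name}] {prompt}"

def route(prompt: str, sensitive: bool, local: Model, frontier: Model) -> str:
    # Route by constraint: sensitive data stays local; everything else
    # goes to the stronger frontier model.
    return (local if sensitive else frontier).complete(prompt)
```

Because every backend satisfies the same interface, swapping models later is a one-line change rather than a rewrite.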

As Scott put it: "The category error here is thinking that it's got to be either/or."

4) Exploit the unusually low cost of trying things

It has never been cheaper to run experiments. You can validate a workflow with an agent in hours, not weeks. Treat experiments as part of production, not a pre-launch ritual.

  • Set weekly experiment quotas. Small bets compound.
  • Use coding agents to brute-force variations while you judge outcomes.
  • Kill fast, keep the few that move real metrics, and iterate.

Scott's advice: "Do the experiments. Try things. I would really encourage folks to not be precious about the possibility of failure."

5) Build for people and produce real value

Technology matters when it expands what people can do. Fancy demos fade; useful tools stick. Choose problems worth solving and measure whether a person's job or life is clearly better because of what you built.

  • Start with one painful, frequent workflow and make it 10x faster or simpler.
  • Price against value delivered, not features shipped.
  • Write the "user wins" in plain language before you write the spec.

Scott's anchor question: "What am I doing with these tools that is of service and of value to my fellow human beings?" That focus cuts through distractions and creates staying power.

Action prompts to try this week

  • Run a 3-day spike: prototype one AI-assisted workflow end-to-end, measure time saved, and ship it to five users.
  • Build a model matrix: list 2-3 open models and 2-3 proprietary models, test the same task, compare cost, latency, and quality.
  • Create a lightweight eval harness for your top task and track it daily.
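A "lightweight eval harness" can start as a dozen lines. This sketch is one possible shape (the `run_evals` helper and substring check are assumptions, not a prescribed method): score any callable against a list of (input, expected-substring) cases and track the pass rate daily.

```python
def run_evals(system, cases):
    """Run (prompt, expected_substring) cases against a callable system.

    Returns (pass_rate, per_case_results)."""
    results = []
    for prompt, expected in cases:
        output = system(prompt)
        results.append((prompt, expected in output))
    passed = sum(ok for _, ok in results)
    return passed / len(results), results

def echo_system(prompt):
    # Stand-in for your real model pipeline.
    return prompt.upper()

score, detail = run_evals(echo_system, [
    ("hello world", "HELLO"),
    ("foo", "BAR"),
])
```

Log `score` once a day and you have a regression signal before you have a dashboard; richer checks (LLM-as-judge, latency budgets) can slot into the same loop later.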


Learn more about the community that hosted the conversation: South Park Commons.

Bottom line: the window is open. Be fearless, stay close to users, and ship the useful thing.

