No Product, No Pitch Deck, No Problem: Investors Chase Raw AI Talent

Money follows scarce skill: investors back talent-led AI founders without a deck when thesis is sharp and chops are real. Show proof, map compute, ship weekly, de-risk fast.

Published on: Jan 28, 2026

The Pitch With No Deck: Why Talent-First AI Bets Still Get Funded

Ben Spector walked into investor meetings with no deck, no product, and no plan to make money anytime soon. What he had: a top-tier AI research background and a willingness to chase hard problems. Investors still listened.

This isn't a fairy tale about pitching on a napkin. It's a snapshot of how capital flows when talent is scarce and timing matters. If you're a founder, investor, or builder, there are practical takeaways here.

What investors actually bought

  • Signal over slides: A credible research track record outweighs pretty decks.
  • A thesis, not a feature list: A clear view of where a breakthrough could create value.
  • Long-horizon patience: Willingness to fund learning before revenue.
  • Network velocity: Access to cofounders, early hires, and compute through strong academic or industry ties.

If you're a founder with no product (yet)

  • Write the one-sentence thesis: What problem, why now, and why you.
  • Define milestones: 30/60/90-day deliverables (prototype, eval results, first pilot), plus a 6-9 month research plan.
  • Show proof-of-work: Public repos, demos, or a short technical note. Even a simple eval beats promises.
  • Clarify your edge: Proprietary data access, unique method, or compute credits that reduce burn.
  • Be specific on risks: Technical, data, and distribution risks, and how you'll test each one.
  • Use simple terms: Many pre-seed rounds still use SAFEs. Keep it clean and milestone-driven.
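"Even a simple eval beats promises" can be taken literally: a few labeled cases and an accuracy score are enough to show investors you measure what you build. A minimal sketch, with `run_model` as a hypothetical stand-in for your actual model call:

```python
# Minimal eval harness: score a model function against a tiny labeled set.

def run_model(prompt: str) -> str:
    # Placeholder: swap in a real API call or local inference here.
    return prompt.split()[-1]

def accuracy(cases: list[tuple[str, str]]) -> float:
    """Fraction of cases where the model output matches the expected label."""
    hits = sum(1 for prompt, expected in cases if run_model(prompt) == expected)
    return hits / len(cases)

cases = [
    ("The capital of France is Paris", "Paris"),
    ("2 plus 2 equals 4", "4"),
]
print(f"accuracy: {accuracy(cases):.2f}")
```

Even a dozen cases like this, versioned in the repo and rerun on every change, is the proof-of-work a technical note can point to.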

For finance leaders underwriting early AI bets

  • Underwrite compute: Map experiments to actual GPU hours and expected timelines. Cash burn is compute burn.
  • Look for compounding advantages: Data loops, deployment channels, or integrations that get better with use.
  • Ask for decision checkpoints: What evidence unlocks the next check? Tie funding to learning, not vanity metrics.
  • Model downside: What if the core method stalls? Is there a fast pivot to a service or tooling product?
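"Cash burn is compute burn" reduces to simple arithmetic: GPUs per experiment, times hours per run, times run count, times an hourly rate. A sketch, where every rate and experiment below is illustrative, not a benchmark:

```python
# Hypothetical compute-burn model: translate a planned experiment schedule
# into GPU-hours and cash. All numbers below are made-up placeholders.

GPU_HOURLY_RATE = 2.50  # assumed USD per GPU-hour; use your actual cloud rate

experiments = [
    # (name, gpus, hours_per_run, runs)
    ("ablation sweep", 8, 12, 20),
    ("full pretrain", 64, 72, 2),
    ("eval passes", 4, 2, 50),
]

def burn(plan, rate=GPU_HOURLY_RATE):
    """Total GPU-hours and dollar cost for a list of planned experiments."""
    total_hours = sum(gpus * hours * runs for _, gpus, hours, runs in plan)
    return total_hours, total_hours * rate

hours, cost = burn(experiments)
print(f"{hours:,} GPU-hours, about ${cost:,.0f}")
```

A founder who can produce this table on request has mapped experiments to timelines; one who can't is asking you to underwrite a blank check.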

For IT and development teams

  • Prototype fast: Build thin end-to-end demos. Latency, eval, and UX lessons arrive only after something runs.
  • Measure quality: Use task-level benchmarks and human-in-the-loop reviews. Track regression like uptime.
  • Pick infra that won't box you in: Modularize model interfaces so you can swap providers or fine-tunes without rewrites.
  • Ship weekly: Tight feedback loops beat grand designs. Small wins stack.
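The "modularize model interfaces" point is an architecture choice: app code should depend on a thin interface, not a vendor SDK. One way to sketch it in Python, with both provider classes below as hypothetical stand-ins:

```python
# Provider-agnostic model interface: swapping providers or fine-tunes
# becomes a one-line change at the call site, with no rewrites.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in local model; replace with a wrapper around a real SDK."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

class UppercaseProvider:
    """A second stand-in, to show that the swap costs one line."""
    def generate(self, prompt: str) -> str:
        return prompt.upper()

def answer(model: TextModel, question: str) -> str:
    # Application code depends only on the TextModel interface.
    return model.generate(question)

print(answer(EchoProvider(), "hello"))
# Swap providers without touching answer():
print(answer(UppercaseProvider(), "hello"))
```

The design choice: structural typing (`Protocol`) keeps provider wrappers decoupled, so a new model endpoint means one new class, not a refactor.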

Risks to keep front and center

  • Technical: The method may not beat baselines outside the lab.
  • Product-market fit: Research-grade demos can stall without a clear user and distribution path.
  • Team: One star researcher is great. Two or three complementary builders are better.

A simple operating rhythm

  • Thesis → Experiments → Evidence → Capital → Iteration
  • Make the thesis explicit. Design the smallest experiments that could disprove it. Share evidence early. Raise against learning, not buzz.

Why this matters: money follows scarce skill, especially in AI. Some founders will get funded pre-product because their signal is undeniable. Most still need proof customers care. Know which bucket you're in before you pitch.
