The Pitch With No Deck: Why Talent-First AI Bets Still Get Funded
Ben Spector walked into investor meetings with no deck, no product, and no plan to make money anytime soon. What he had: a top-tier AI research background and a willingness to chase hard problems. Investors still listened.
This isn't a fairy tale about pitching on a napkin. It's a snapshot of how capital flows when talent is scarce and timing matters. If you're a founder, investor, or builder, there are practical takeaways here.
What investors actually bought
- Signal over slides: A credible research track record outweighs pretty decks.
- A thesis, not a feature list: A clear view of where a breakthrough could create value.
- Long-horizon patience: Willingness to fund learning before revenue.
- Network velocity: Access to cofounders, early hires, and compute through strong academic or industry ties.
If you're a founder with no product (yet)
- Write the one-sentence thesis: What problem, why now, and why you.
- Define milestones: 30/60/90-day deliverables (prototype, eval results, first pilot), plus a 6-9 month research plan.
- Show proof-of-work: Public repos, demos, or a short technical note. Even a simple eval beats promises (a minimal sketch follows this list).
- Clarify your edge: Proprietary data access, unique method, or compute credits that reduce burn.
- Be specific on risks: Technical, data, and distribution risks, and how you'll test them.
- Use simple terms: Many pre-seed rounds still use SAFEs. Keep it clean and milestone-driven.
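What does a "simple eval" look like in practice? Here's a minimal sketch in Python: score a handful of labeled examples against whatever your method produces and report accuracy. The `run_model` stub and the toy examples are placeholders, not a prescription for your task.

```python
# Minimal proof-of-work eval: score a model on a small labeled set, report accuracy.
# run_model is a placeholder; swap in your actual model, pipeline, or API call.
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    expected: str

def run_model(prompt: str) -> str:
    # Placeholder stub so the script runs end to end; replace with a real call.
    return ""

def accuracy(examples: list[Example]) -> float:
    hits = sum(
        1 for ex in examples
        if run_model(ex.prompt).strip().lower() == ex.expected.strip().lower()
    )
    return hits / len(examples)

if __name__ == "__main__":
    eval_set = [
        Example("2 + 2 =", "4"),
        Example("Capital of France?", "Paris"),
    ]
    print(f"Accuracy on {len(eval_set)} examples: {accuracy(eval_set):.0%}")
```

Even a toy harness like this forces you to define the task, the metric, and a baseline, which is most of the value when you're pitching pre-product.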
For finance leaders underwriting early AI bets
- Underwrite compute: Map experiments to actual GPU hours and expected timelines. Cash burn is compute burn (a back-of-envelope sketch follows this list).
- Look for compounding advantages: Data loops, deployment channels, or integrations that get better with use.
- Ask for decision checkpoints: What evidence unlocks the next check? Tie funding to learning, not vanity metrics.
- Model downside: What if the core method stalls? Is there a fast pivot to a service or tooling product?
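To make "cash burn is compute burn" concrete, here's a back-of-envelope sketch: list planned experiments, multiply runs by GPUs by hours, and price the total. Every figure below (run counts, GPU counts, the $2.50-per-GPU-hour rate) is an illustrative assumption, not a quoted price or benchmark.

```python
# Back-of-envelope compute budget: experiments -> GPU hours -> dollars.
# All figures are illustrative assumptions, not real benchmarks or prices.
experiments = [
    # (name, number of runs, GPUs per run, hours per run)
    ("ablation sweep", 20, 1, 6),
    ("scaling run", 3, 8, 48),
    ("final training", 1, 16, 120),
]
gpu_hourly_rate = 2.50  # assumed cloud price per GPU hour, in USD

total_gpu_hours = sum(runs * gpus * hours for _, runs, gpus, hours in experiments)
total_cost = total_gpu_hours * gpu_hourly_rate

for name, runs, gpus, hours in experiments:
    gpu_hours = runs * gpus * hours
    print(f"{name:>15}: {gpu_hours:6.0f} GPU-hours  ~${gpu_hours * gpu_hourly_rate:,.0f}")
print(f"{'total':>15}: {total_gpu_hours:6.0f} GPU-hours  ~${total_cost:,.0f}")
```

A table like this also gives you natural decision checkpoints: each experiment line is a tranche of spend that should produce a specific piece of evidence.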
For IT and development teams
- Prototype fast: Build thin end-to-end demos. Latency, eval, and UX lessons arrive only after something runs.
- Measure quality: Use task-level benchmarks and human-in-the-loop reviews. Track regressions the way you track uptime.
- Pick infra that won't box you in: Modularize model interfaces so you can swap providers or fine-tunes without rewrites (a sketch follows this list).
- Ship weekly: Tight feedback loops beat grand designs. Small wins stack.
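One way to keep model infrastructure from boxing you in: have application code target a single narrow interface and hide each provider or fine-tune behind its own adapter. The class names and stubbed responses below are illustrative; the real SDK or inference calls go inside the adapters.

```python
# Thin model-interface layer: swap providers or fine-tunes behind one signature.
# Class names and wiring are illustrative; plug real SDK calls into the adapters.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class HostedAPIModel:
    """Adapter for a hosted provider; wrap the vendor SDK call inside generate()."""
    def __init__(self, model_name: str):
        self.model_name = model_name

    def generate(self, prompt: str) -> str:
        # Replace with the real API call for your provider.
        return f"[{self.model_name}] response to: {prompt}"

class LocalFineTune:
    """Adapter for a locally served fine-tune; same interface, different backend."""
    def __init__(self, checkpoint_path: str):
        self.checkpoint_path = checkpoint_path

    def generate(self, prompt: str) -> str:
        # Replace with local inference against the loaded checkpoint.
        return f"[local:{self.checkpoint_path}] response to: {prompt}"

def answer_question(model: TextModel, question: str) -> str:
    # Application code depends only on the TextModel interface, not a vendor SDK.
    return model.generate(question)

if __name__ == "__main__":
    for model in (HostedAPIModel("provider-x"), LocalFineTune("ckpt/v1")):
        print(answer_question(model, "Summarize our eval results."))
```

Swapping providers then means writing one new adapter, not rewriting every call site, which keeps weekly shipping realistic as the model landscape shifts.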
Risks to keep front and center
- Technical: The method may not beat baselines outside the lab.
- Product-market fit: Research-grade demos can stall without a clear user and distribution path.
- Team: One star researcher is great. Two or three complementary builders are better.
A simple operating rhythm
- Thesis → Experiments → Evidence → Capital → Iteration
- Make the thesis explicit. Design the smallest experiments that could disprove it. Share evidence early. Raise against learning, not buzz.
Why this matters: money follows scarce skill, especially in AI. Some founders will get funded pre-product because their signal is undeniable. Most still need proof customers care. Know which bucket you're in before you pitch.
Helpful resources
- Primer on pre-seed and seed dynamics: Pre-Seed Funding (Investopedia)
Keep building your edge
- Curated learning by role: AI courses by job
- Practical toolsets for finance and engineering: AI tools for finance and AI tools for generative code