Simile Raises $100M as Brands Turn to AI Twins for Market Research

Simile's agentic twins let marketers ask endless questions and get instant, modeled answers, then validate them with real panels. Faster tests, sharper segments, lower cost, with up to 95% accuracy.

Categorized in: AI News, Marketing
Published on: Mar 07, 2026

Digital twins for marketers: faster research, deeper insight, lower cost

AI agents can now mirror real people well enough to predict and simulate how they think and act. Simile, a Stanford spinout, is using this to let teams query "agentic twins" instead of waiting weeks for panels.

The pitch is simple: ask unlimited questions, get instant answers, then back it up with human validation. For marketing, that means faster message testing, clearer segmentation, and more confident decisions.

What exactly is an agentic twin?

Simile builds AI agents that behave like specific people or cohorts. They interview real participants, capture preferences and personality, then blend that with behavioral and purchase data.

The result: a digital stand-in you can question like a respondent, without fatigue or scheduling. It's not magic. It's statistics and modeling trained on actual human input, then checked against ground truth.

Why marketers should care

  • Speed: Go from idea to insight in hours, not months.
  • Depth: Ask follow-ups endlessly. No drop-off. No incentives.
  • Coverage: Simulate hard-to-reach groups (providers, chronic patients, niche buyers) before you spend on recruitment.
  • Cost control: Annual access ranges from ~$150,000 to several million. Pricey, but often far less than constant panels and ad hoc studies at scale.

How Simile works (and what's new)

Simile trains its own behavior model and pairs it with open-source systems. Agents are shaped from interview data and calibrated with first-party signals to improve generalization.

CVS built twins from 2.9 million responses across 400,000 real people (with consent) and aligned those with historical surveys and support interactions. In testing, the twins matched known findings with up to 95% accuracy. Gallup is partnering with Simile to offer 1,000+ twins for policy, trends, and corporate research.

What this changes in your workflow

  • Message testing: Rapidly compare headlines, CTAs, or value props by segment before live spend.
  • Segmentation: Stress-test personas against real behaviors and stated preferences.
  • Journey design: Probe barriers (e.g., refill friction, support access) with layered follow-ups you could never afford with humans.
  • Concept and pricing: Pressure-test bundles, features, and willingness to pay, then confirm with a smaller human sample.

What the early adopters found

CVS calls the twins "always on." They used them to explore adherence behaviors and everyday concerns like getting a pharmacist on the line or managing refills. No fatigue meant they could keep digging until the insight was clear.

They also simulated providers, chronic condition segments, and pet-medicine buyers, finding that convenience and vet coordination mattered more than whether the purchase felt like a chore. Gallup expects demand in policy, trend analysis, and workplace topics like well-being and job satisfaction.

Limits, risks, and guardrails

  • It's still early: As one Gartner analyst put it, don't replace your full process. Keep collecting real data.
  • Validation loop: Backtest twins against known outcomes and recent survey waves. Flag drift quickly.
  • Bias and safety: Use role-based access, monitor sensitive content, and document consent sources and usage rights.
  • Human in the loop: Great questions still matter. Skilled researchers get better answers-from humans and twins.
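The validation-loop guardrail above can be sketched as a simple drift check: compare a twin's predicted answer distribution for a survey question against the latest human wave and flag when they diverge. The distance metric and threshold here are illustrative assumptions, not anything Simile publishes.

```python
def total_variation(p, q):
    """Total variation distance between two answer distributions,
    each a dict mapping answer option -> probability."""
    options = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in options)

def flag_drift(twin_dist, human_dist, threshold=0.15):
    """Return (distance, drifted) for one survey question."""
    d = total_variation(twin_dist, human_dist)
    return d, d > threshold

# Hypothetical answer shares for one question, twin vs. latest human wave
twin = {"agree": 0.55, "neutral": 0.25, "disagree": 0.20}
human = {"agree": 0.35, "neutral": 0.30, "disagree": 0.35}

distance, drifted = flag_drift(twin, human)
# distance = 0.20, drifted = True: this question's twin needs recalibration
```

Run a check like this against each new survey wave, and route flagged questions back into the human-validation queue rather than trusting the twin's answer.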

How to run a smart pilot (30-60-90 days)

  • 30 days: Pick 2-3 high-cost research tasks (e.g., message tests, creative pretests). Define success metrics and a small human validation sample.
  • 60 days: Build 3-5 segment twins. Run A/B/C message tests, capture qualitative "why," and compare to historic panel results.
  • 90 days: Roll into a weekly "always-on" insights loop. Use twins to screen ideas, then validate winners with a lean human panel.

Metrics that matter

  • Speed-to-insight: Hours per test vs. historical baseline.
  • Cost per validated insight: All-in cost divided by insights confirmed by human data.
  • Concordance: % agreement between twins and human panels on directionality and effect size.
  • Coverage: New segments simulated vs. previously unreachable segments.
  • Business lift: Improved CTR, conversion, or CPA from twin-informed messaging.
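Two of the metrics above reduce to simple arithmetic you can run in a notebook. This is a minimal sketch with hypothetical numbers; the field names and data are illustrative, not a vendor API.

```python
def concordance(pairs):
    """Share of tests where twin and human panel agree on directionality.
    pairs: list of (twin_lift, human_lift) tuples, e.g. headline A vs. B lift."""
    agree = sum(1 for twin, human in pairs if (twin > 0) == (human > 0))
    return agree / len(pairs)

def cost_per_validated_insight(total_cost, validated_insights):
    """All-in program cost divided by insights confirmed by human data."""
    return total_cost / validated_insights

# Hypothetical lifts from four message tests (twin-predicted, human-observed)
results = [(0.12, 0.09), (-0.03, -0.05), (0.07, -0.01), (0.20, 0.15)]

print(concordance(results))                      # 0.75: 3 of 4 agree on direction
print(cost_per_validated_insight(150_000, 30))   # 5000.0 per confirmed insight
```

Tracking directional concordance separately from effect-size agreement is worthwhile: a twin that reliably picks the winning message is useful for screening even if its predicted magnitudes are off.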

What to ask a vendor before you buy

  • What real data sources train and calibrate your agents? How is consent handled?
  • How do you prevent hallucinations? Show backtesting results and drift monitoring.
  • Can we import our first-party data securely with role-based controls?
  • What does accuracy mean in your reports-replication of findings, directional match, or effect size?
  • How fast can we update twins after new campaigns or surveys?
  • What's the pricing model for seat licenses, data volume, and API access?

Where this is heading

Next up: multi-agent simulations where segments interact with each other, products, or environments (think store layouts or service flows). That lets you test social effects, not just single-person decisions.

For now, marketing is a low-risk, high-learn field to apply twins-message testing, creative screening, and concept checks. Then confirm with a smaller, higher-quality human study. Fast loop, strong signal.

Want to level up your team's skills?

If you're building an always-on insights engine, start here: AI for Marketing.


Bottom line: Use agentic twins to cut time and cost, explore more options, and make cleaner calls. Keep humans in the loop to verify, adjust, and stay honest.

