Rapidata raises $8.5M to eliminate the feedback bottleneck in AI development
Financing - 20.02.2026
Model improvements live and die by feedback loops. Compute keeps getting faster. Models keep getting bigger. But getting high-quality human judgment into the loop is still slow and expensive. Rapidata says it has an answer, and it now has $8.5 million in fresh seed funding to scale it.
The Zurich-based company built a human feedback platform that plugs into AI development workflows and returns targeted judgments, preferences, and validations at speed. Instead of waiting weeks for a labeling vendor to spin up, teams can request feedback on demand and receive large volumes of signal in days, sometimes the same day, without standing up new ops.
Why this matters to engineering teams
Most teams can ship a new model checkpoint in hours. Validating it with real human preferences still takes too long, creating a drag on iteration. This is the gap Rapidata targets: shortening the feedback cycle so teams can run more experiments, ship safer models, and measure real-world quality faster.
- Preference data for fine-tuning and reinforcement learning from human feedback.
- Evals for response quality, groundedness, and safety before release.
- Red-teaming and edge-case discovery across diverse user profiles.
- Prompt, tool, and agent behavior comparisons during A/B tests.
- Continuous post-deploy human-in-the-loop checks to catch regressions.
How Rapidata works
Rapidata taps a continuously available, global network of people through short, opt-in tasks embedded in popular apps. The platform routes tasks to relevant respondents based on trust and expertise profiles, so you get quality at scale without running your own annotation shop.
Engineering teams can trigger requests directly from training pipelines, CI for prompts and agents, or batch evaluation jobs. The promise: feedback cycles that used to take months compress to days, often a single day, so iteration keeps pace with development.
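For illustration, here is a minimal sketch of what triggering a preference task from a pipeline could look like. The endpoint, payload fields, and response shape are assumptions made for this sketch, not Rapidata's documented API.

```python
# Hypothetical sketch only: the URL, payload schema, and response shape
# below are illustrative assumptions, not Rapidata's actual API.
import requests

API_URL = "https://api.example.com/v1/feedback-tasks"  # placeholder endpoint

def request_preference_feedback(prompt: str, response_a: str,
                                response_b: str, n_judgments: int = 100) -> str:
    """Submit a pairwise preference task and return its task ID."""
    payload = {
        "type": "pairwise_preference",
        "prompt": prompt,
        "candidates": [response_a, response_b],
        "judgments_requested": n_judgments,
    }
    resp = requests.post(API_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["task_id"]
```

A call like this could sit at the end of a training run or a CI job for prompts, kicking off human review as soon as a new checkpoint lands.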
Funding and use of proceeds
Rapidata closed an $8.5 million seed round co-led by Canaan Partners and IA Ventures, with participation from Acequia Capital and BlueYard. The company plans to expand its global human data network and meet demand from AI teams that need faster, more reliable feedback to train, validate, and improve models.
"Jason Corkill is one of the greatest founders I've encountered in my career. Every serious AI deployment depends on human judgment somewhere in the lifecycle," said Jared Newman, who led the investment at Canaan Partners. "As models move from expertise-based tasks to taste-based curation, the demand for scalable human feedback will grow dramatically. Rapidata is positioned to serve a market that spans foundation models, enterprise AI, and the next generation of AI-driven products."
What to do next
- Pipe Rapidata feedback into your eval harness and treat it as a first-class metric alongside automated scores.
- Gate model and prompt releases on human pass rates that reflect real user preferences and safety thresholds; a sketch of such a gate follows this list.
- Use targeted cohorts (domain experts, locales, user segments) to stress-test failure modes before rollout.
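As a rough illustration of the release-gate idea above, the snippet below combines an automated score with a human pass rate. The class, function, and thresholds are assumptions for the sketch, not a prescribed setup.

```python
# Illustrative release gate: ship only if both the automated eval score
# and the human pass rate clear their thresholds. All names and numbers
# here are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class EvalResult:
    automated_score: float  # e.g. aggregate score from an automated eval suite
    human_pass_rate: float  # fraction of human judgments that marked "pass"

def release_gate(result: EvalResult,
                 min_automated: float = 0.85,
                 min_human_pass: float = 0.90) -> bool:
    """Return True only when both metrics clear their thresholds."""
    return (result.automated_score >= min_automated
            and result.human_pass_rate >= min_human_pass)

candidate = EvalResult(automated_score=0.91, human_pass_rate=0.88)
print("ship" if release_gate(candidate) else "hold")  # prints "hold"
```

Treating the human pass rate as a hard gate, rather than a dashboard metric, is what keeps regressions from shipping on the strength of automated scores alone.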
Leadership: Jason Corkill (CEO), Marian Kannwischer (CTO), Luca Strebel (Chief Architect), and Mads Alber (CIO).
If you're integrating human feedback into AI and DevOps workflows, see our guide: AI for IT & Development.