Government and 16 VCs to deploy over Rs 1,000 crore for deep tech and AI startups at Bengaluru Tech Summit 2025

Govt and 16 VCs will deploy over Rs 1,000 crore for deep tech and AI startups, with a push on skills and CoEs. Public teams get quicker pilots, shared risk, and clear metrics.

Published on: Nov 19, 2025

Government and 16 VCs to deploy over Rs 1,000 crore for deep tech and AI startups: What this means for public sector teams

At the 28th edition of the Bengaluru Tech Summit 2025, Karnataka IT/BT Minister Priyank Kharge announced a joint push by the government and 16 venture capital firms to deploy over Rs 1,000 crore for deep tech and AI startups. He also stressed the need for strong skills pipelines and Centers of Excellence (CoEs). Here's what that means for your department, and how to use it.

What was announced

  • Over Rs 1,000 crore in combined funding support focused on deep tech and AI ventures.
  • Emphasis on building talent pipelines and CoEs to sustain delivery, not just fund pilots.
  • Further details on structure and eligibility are expected from state authorities after the summit.

Why this matters for government teams

  • Faster pilots: Co-funded proofs of concept can shorten decision cycles for AI use cases in public services.
  • Lower risk: Shared investment with VCs reduces risk for early experiments while keeping accountability.
  • Better fit: Startups can build to your problem statements and data realities, not generic templates.
  • Capability build: CoEs and upskilling keep outcomes in-house after pilots end.

30-60 day action plan

  • Define 3-5 priority problem statements (e.g., fraud detection, grievance triage, permit turnaround, field inspection optimization).
  • List the datasets you control, their quality, access rules, and any anonymization needed (a minimal redaction sketch follows this list).
  • Nominate a single point of contact (product + data + procurement) to work with startups and VCs.
  • Draft success metrics upfront: accuracy targets, time saved per transaction, cost per case, citizen satisfaction.
  • Prepare a sandbox: secure test environment, redacted data, audit trails, and rollback plans.
  • Run an RFI to scout startups, then a short challenge with fixed timelines and evaluation rubrics.
  • Set a lightweight legal pack: standard NDA, data-sharing addendum, IP and model ownership terms, and incident reporting.
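
To make the anonymization and sandbox steps concrete, here is a minimal redaction sketch in Python. The regex patterns and the redact() helper are illustrative assumptions, not a vetted compliance tool; production redaction should use an approved PII library and legal review.

```python
import re

# Hypothetical patterns for common identifiers; a real deployment would use a
# vetted PII library and legal sign-off, not ad-hoc regexes.
PII_PATTERNS = {
    "aadhaar": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),  # 12-digit ID, optional spaces
    "phone":   re.compile(r"\b[6-9]\d{9}\b"),             # Indian mobile numbers
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

if __name__ == "__main__":
    record = "Applicant reachable at 9876543210, mail id citizen@example.com"
    print(redact(record))
    # -> "Applicant reachable at <PHONE>, mail id <EMAIL>"
```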

Building skills and CoEs

Funding helps you start. Skills keep you moving. Stand up a small CoE that pairs domain experts with data engineers and policy owners. Keep it lean and outcome-led.

  • Three tracks: product owners (use case + policy), data teams (pipelines + quality), and oversight (privacy + ethics + security).
  • Quarterly playbooks: reusable prompts, data schemas, evaluation sets, and deployment checklists (a schema sketch follows this list).
  • Partner with local institutes and accredited programs for continuous, role-based training.
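
One way to make the "evaluation sets" playbook artifact tangible is a small, versioned schema. The EvalItem fields and JSONL layout below are assumptions for illustration, not a mandated format.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical schema for one reusable evaluation item; field names are
# illustrative, chosen so reviewers can trace each case back to policy.
@dataclass
class EvalItem:
    case_id: str        # stable identifier for traceability
    input_text: str     # the query or document fed to the model
    expected: str       # the answer reviewers accepted as correct
    policy_refs: list   # circulars or rules the answer must comply with

def save_eval_set(items, path):
    """Persist the set as JSONL so each quarterly revision diffs cleanly."""
    with open(path, "w", encoding="utf-8") as f:
        for item in items:
            f.write(json.dumps(asdict(item), ensure_ascii=False) + "\n")

items = [EvalItem("GRV-001", "Water bill charged twice",
                  "Refund under rule 12(b)", ["Rule 12(b)"])]
save_eval_set(items, "grievance_eval_v1.jsonl")
```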

Guardrails and compliance

  • Data privacy: enforce Digital Personal Data Protection (DPDP) Act, 2023 requirements, including consent, purpose limitation, retention, and breach response.
  • Human oversight: keep a human in the loop for high-stakes decisions and document decision logic.
  • Bias and fairness: test models across demographic segments before scale-up (a per-segment error-rate sketch follows this list).
  • Security: isolate training data, use signed model artifacts, and require vendor SOC 2/ISO 27001 or equivalent.
  • Procurement: include evaluation datasets, red-teaming steps, and exit clauses in contracts.
  • Output validation: never accept model outputs without verification for facts, safety, and policy compliance.
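
A minimal sketch of the per-segment fairness check above, assuming a labeled pilot log; the segment names and records are made up for illustration.

```python
from collections import defaultdict

# Hypothetical pilot log: (segment, model_flagged, actually_fraud).
results = [
    ("urban", True, True), ("urban", True, False), ("urban", False, False),
    ("rural", True, False), ("rural", True, False), ("rural", False, True),
]

# Tally positives/negatives and errors per demographic segment.
counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for segment, flagged, truth in results:
    c = counts[segment]
    if truth:
        c["pos"] += 1
        if not flagged:
            c["fn"] += 1  # missed a real case
    else:
        c["neg"] += 1
        if flagged:
            c["fp"] += 1  # wrongly flagged a citizen

for segment, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    print(f"{segment}: false-positive rate {fpr:.0%}, false-negative rate {fnr:.0%}")
```

Large gaps between segments on either rate are a reason to pause scale-up, not a footnote in the pilot report.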

How to work with the 16 VCs and startups

  • Host joint demos focused on your problem statements and data constraints. Avoid generic pitch decks.
  • Co-fund pilots with clear stage gates: 6-8 weeks per phase, go/no-go on predefined metrics (a gate-check sketch follows this list).
  • Structure IP sensibly: startups keep base models; you retain rights to fine-tuned versions and prompts used on your data.
  • Set data access norms early: minimal data needed, logging, and reproducibility.
  • Publish a short outcomes report to attract better partners for the next cohort.
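
One way to encode a stage gate, assuming three predefined metrics; the targets and field names are illustrative, and the real thresholds belong in the pilot agreement's stage-gate annex, agreed before the phase starts.

```python
# Hypothetical targets; replace with the thresholds fixed in the contract.
TARGETS = {"accuracy": 0.90, "minutes_saved_per_case": 5.0, "cost_per_case_inr": 12.0}

def gate(measured: dict) -> bool:
    """Go only if every predefined metric meets its target (cost must be <=)."""
    ok = (
        measured["accuracy"] >= TARGETS["accuracy"]
        and measured["minutes_saved_per_case"] >= TARGETS["minutes_saved_per_case"]
        and measured["cost_per_case_inr"] <= TARGETS["cost_per_case_inr"]
    )
    print("GO" if ok else "NO-GO", "-", measured)
    return ok

gate({"accuracy": 0.93, "minutes_saved_per_case": 6.2, "cost_per_case_inr": 10.5})
```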

Measure what matters

  • Cycle time: average time saved per service or case (a weekly roll-up sketch follows this list).
  • Quality: accuracy, false positive/negative rates, and error severity.
  • Citizen experience: resolution rates, first-contact resolution, satisfaction scores.
  • Cost: cost per transaction and cost to serve after deployment.
  • Adoption: percentage of staff using the tool weekly and training completion rates.
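
A sketch of how a pilot log could be rolled up into these metrics each week; the record layout and figures are hypothetical.

```python
# Hypothetical week of pilot records: (staff_id, minutes_before, minutes_after, cost_inr)
cases = [
    ("officer_a", 30, 18, 11.0),
    ("officer_a", 25, 15, 9.5),
    ("officer_b", 40, 22, 12.0),
]
staff_enrolled = 5  # officers given access this week

# Cycle time: average minutes saved per case against the pre-pilot baseline.
minutes_saved = sum(before - after for _, before, after, _ in cases) / len(cases)
# Cost: average spend per transaction handled through the tool.
cost_per_case = sum(cost for *_, cost in cases) / len(cases)
# Adoption: distinct staff who used the tool this week.
weekly_users = len({staff for staff, *_ in cases})

print(f"avg minutes saved per case: {minutes_saved:.1f}")
print(f"cost per case (INR): {cost_per_case:.2f}")
print(f"weekly adoption: {weekly_users}/{staff_enrolled} staff")
```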

The bottom line

This is a window to move from pilots that fade to services that stick. Pick one high-value problem, one clean dataset, one focused pilot. Ship, measure, improve, then scale.

