Personal AI Agents and India's Next Digital Layer
At the 2026 India AI Impact Summit in New Delhi, the pitch was bold: give every citizen a personal AI agent and decentralize AI through an ecosystem of proxies. Think agents speaking to agents so people don't have to. In India, this idea sits on top of a decade of digital public infrastructure (DPI), from Aadhaar and UPI to DigiLocker and ONDC. The promise is a shift from a "factory" phase of centralized models to a "bazaar" where individuals train, deploy and delegate through their own agents.
MIT's Ramesh Raskar captured it with a simple example: a 70-year-old woman in rural Bihar planning a trip to Kumbh Mela. Her agent books travel, aligns meals with dietary needs, secures accommodation and pays - if vendors, platforms and institutions also run agents. The bazaar is less a marketplace of people and more a marketplace of proxies.
From DPI to Delegation
DPI made identity verifiable, payments interoperable, documents credentialed and marketplaces open. That shifted how citizens are recognized and how transactions get validated. A personal agent adds a new property: delegability. Now a citizen isn't just known to systems - a proxy can act, coordinate and decide on their behalf across those same rails.
This is a qualitative change. Once agents execute actions, technical errors become institutional outcomes. The stack moves from "assistive AI" to "representative AI."
When probabilistic systems act for you
Large language models generate plausible continuations, not guarantees. Hallucination isn't a bug at the margins; it's a statistical behavior that can surface under pressure, ambiguity or poor grounding. When AI is an assistant, a human checks the final action. When AI is a proxy, uncertainty flows straight into execution.
That raises hard questions. Who defines reliable knowledge? What is a faithful interpretation of a person's intent? How are errors detected, explained and contested when mediation is continuous and automated?
Democratization or distributed mediation?
Doot's strongest claim is universalism: everyone gets an agent. But equal access to a proxy isn't equal access to the rules that govern it. Agents depend on identity systems, payment rails, data standards and API frameworks that determine what can be known and done.
Training data encodes social hierarchies. Optimization targets are set by designers. Distribution can feel like participation while core authority stays put. Convenience improves; control doesn't automatically follow.
What changes on the ground
Handing negotiation and coordination to agents reduces friction. But friction is where people test claims, assert interests and build shared understanding. Replace that with optimization and you get decisions as statistical outputs, not social outcomes.
The likely result: a parallel layer of economic and civic activity conducted by algorithmic representatives. The citizen participates through delegation - fast and accessible, but more abstract.
Practical guardrails for builders, operators and policymakers
- Explicit intent and consent: Standardize intent capture, consent scopes and expiration. Log them immutably. Require re-consent for high-risk actions.
- Uncertainty by default: Agents must expose confidence, assumptions and alternatives. No silent fallbacks to guesses for critical tasks.
- Verification floors: Ground actions in authoritative data sources; require multi-source checks for eligibility and benefits decisions.
- Human-in-the-loop tiers: Define risk bands. Low-risk actions run automatically; medium-risk requires user confirmation; high-risk mandates human review.
- Traceability and audit: Keep signed, queryable trails of prompts, data accesses, model versions and decisions. Make them accessible to citizens and regulators.
- Identity and non-repudiation: Strong agent identity, key management and proof-of-action for both citizens and vendors.
- Redress and contestation: Fast channels to contest agent decisions, with escalation paths and service-level guarantees.
- Safety by design: Rate limits, spending caps, geofencing, vendor allow-lists and kill switches at the platform and agent levels.
- Bias and harm testing: Evaluate across demographics, languages and contexts. Publish impact reports and mitigation steps.
- Accessible fallbacks: Always provide human service and offline options, especially for low-connectivity and low-literacy users.
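To make the consent and risk-tier guardrails concrete, here is a minimal sketch in Python. The tier names, the `ConsentScope` shape and the returned action strings are illustrative assumptions, not a standard; a real deployment would sit on signed, immutable logs and a proper policy engine.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # automate
    MEDIUM = "medium"  # require user confirmation
    HIGH = "high"      # mandate human review

@dataclass
class ConsentScope:
    action: str
    granted_at: datetime
    ttl: timedelta  # consent expires; high-risk actions need re-consent

    def valid(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now < self.granted_at + self.ttl

def dispatch(tier, consent):
    """Route an agent action according to its risk band and consent state."""
    if not consent.valid():
        return "re-consent required"
    if tier is RiskTier.LOW:
        return "execute automatically"
    if tier is RiskTier.MEDIUM:
        return "await user confirmation"
    return "escalate to human review"
```

The point of the sketch is the ordering: consent expiry is checked before any tier logic, so a lapsed scope can never be silently executed, even for low-risk actions.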
For product and engineering teams
- Minimal Agent Spec (MAS): Intent schema, consent ledger, action catalog, tool permissions and safe defaults.
- Protocol-first design: Open standards for agent identity, messaging, negotiation and settlement. Avoid brittle point-to-point integrations.
- Tool-use over free-form chat: Constrain actions via typed tools with validation and policy checks.
- Local-first where possible: On-device or edge inference for sensitive tasks; server-side only with explicit consent and logs.
- Observability: Real-time dashboards for failure modes such as hallucination flags, vendor timeouts and over-permissioned calls.
- Multilingual UX: Native support for Indian languages and code-switching, with community-tested prompts and NLU.
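The "tool-use over free-form chat" principle can be sketched as follows, again in Python. The `Tool` structure, the type-dict schema and the spending-cap policy are hypothetical illustrations; in practice the schema would likely be JSON Schema and the policy a shared engine, but the shape is the same: an agent can only act through a typed tool, and every call passes validation and a policy check before any side effect.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Tool:
    name: str
    schema: Dict[str, type]               # param name -> expected type
    policy: Callable[[Dict[str, Any]], bool]  # e.g. spending caps, allow-lists
    run: Callable[[Dict[str, Any]], str]

def invoke(tool, args):
    """Validate, then policy-check, then execute - never free-form."""
    for key, typ in tool.schema.items():
        if key not in args or not isinstance(args[key], typ):
            raise ValueError(f"invalid argument: {key}")
    if not tool.policy(args):
        raise PermissionError(f"policy check failed for {tool.name}")
    return tool.run(args)

# Hypothetical tool with a spending cap enforced outside the model.
book_travel = Tool(
    name="book_travel",
    schema={"destination": str, "budget_inr": int},
    policy=lambda a: a["budget_inr"] <= 5000,
    run=lambda a: f"booked to {a['destination']}",
)
```

The design choice worth noting: the cap lives in the tool's policy, not in the prompt, so a hallucinated or adversarial instruction cannot talk its way past it.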
For policy and procurement
- Certification: Pre-deployment audits for safety, bias, security and logging. Periodic re-certification tied to model/tool changes.
- Liability clarity: Who pays when an agent misrepresents a user or a vendor misleads an agent? Codify shared responsibility.
- Data minimization: Contractual limits on data retention and usage across public and private actors. Default to "collect less."
- Public sandboxes: Test beds with real vendors and real constraints before national rollout. Open metrics, open results.
- Rights charter: Consent, explainability, opt-out, data portability and human recourse - stated in plain language.
- Metrics that matter: Track error costs, dispute rates, reversal time, fairness gaps and user well-being, not just conversion or speed.
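For the "metrics that matter" point, the computations are simple once decision logs exist; the hard part is mandating the logs. A toy sketch, with illustrative records rather than real data:

```python
from statistics import mean

# Illustrative agent-decision log: whether the user disputed the action,
# and how long a reversal took (hours) when one occurred.
log = [
    {"disputed": False, "reversal_hours": None},
    {"disputed": True,  "reversal_hours": 12},
    {"disputed": True,  "reversal_hours": 36},
    {"disputed": False, "reversal_hours": None},
]

dispute_rate = sum(r["disputed"] for r in log) / len(log)
mean_reversal = mean(r["reversal_hours"] for r in log if r["disputed"])
```

Nothing here is sophisticated; that is the point. If procurement contracts require these fields in every audit trail, regulators can compute dispute rates and reversal times without depending on vendor self-reporting.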
The bazaar of proxies: where this likely goes
The Kumbh Mela scenario will generalize. Vendor agents bid for your business. Repair agents fix other agents. Insurance markets price agent risk profiles. New economic layers appear around coordination, not just goods.
That's viable - and useful - if the ecosystem centers consent, verification and contestation. If not, hallucination becomes a governance condition, not a bug.
The choice in front of us
Moving from assistive AI to representative AI is not a routine upgrade. It's a political decision about how people are represented in code. A citizen made verifiable, transactable and delegable lives with different incentives and protections than one who engages directly.
Start small. Pilot with strict safety rails. Publish what breaks. Preserve human channels. If the goal is inclusion and agency, make sure the bazaar serves people first - and their proxies second.