Better call DAVID? Alberta's AI chatbot steps into personal injury work under sandbox rules
Alberta now lets an AI chatbot help with personal injury claims. The tool is called DAVID (Digital Attorney for Victim Injury Disputes) and is run by Painworth, an Alberta-based firm. It answers calls instantly, gathers intake details, suggests likely outcomes, and quotes a fee. Every file is still overseen by one of Painworth's human lawyers.
This isn't hype for hype's sake. It's a test of whether AI can streamline intake and early-stage claims work without compromising standards, privacy, or client care. And for lawyers watching the economics of personal injury work shift, it's worth paying attention.
What DAVID actually does
DAVID handles 24/7 intake in multiple languages. It opens with empathy, records the call, collects the caller's details, and asks what happened. After that, it outlines potential civil outcomes and moves the matter toward engagement.
Painworth's co-founder, Michael Zouhri, comes from data science and says DAVID signed its first client in December. He built the system after his own 2019 crash with a drunk driver left him frustrated with unreturned calls, conflicting advice, and steep contingency quotes. The goal: reduce friction at the first mile of a claim.
Regulatory posture: Innovation Sandbox, limited to Alberta
The Law Society of Alberta granted Painworth exemptions through its Innovation Sandbox. Two big shifts stand out: an ownership rule waiver allowing a non-lawyer-owned firm to operate, and an exemption to let an unlicensed AI system assist with legal claims under supervision.
Scope is tight. Services are limited to Alberta and delivered in a controlled test environment with law society oversight. That guardrail matters for anyone considering a similar model.
Not just DAVID: other AI use cases cleared
The sandbox has also approved other AI-enabled services. Philer applies AI in real estate transactions. Jointly uses AI to help develop marriage contracts. Meanwhile, law-society-led sandboxes have appeared in British Columbia, Manitoba, and Ontario, signaling broader regulatory experimentation across Canada.
Assistant, not autonomous lawyer
Experts characterize DAVID as closer to a legal assistant than a lawyer. That framing matters for expectations, accountability, and billing. If a task typically billed at several hours can be executed in minutes by AI (and validated by a lawyer), the cost curve bends fast.
Personal injury is a sensible proving ground. Many matters settle, and a well-developed body of precedent can guide valuations and recommendations. Think: standardized workflows, repeatable fact patterns, and clear documentation needs, prime territory for AI to support intake, summarization, and draft prep before human review.
Risks to watch: privacy, false confidence, synthetic empathy
There are live concerns. Privacy is front and center: intake transcripts, medical details, and contact data push firms to get data governance right. Retention, encryption, cross-border storage, and vendor risk must be nailed down.
Accuracy is another pressure point. AI can sound polished while being wrong. That's manageable if lawyers validate outputs, but dangerous if automation sneaks past review. Guardrails, audits, and clear escalation paths are non-negotiable.
Then there's the human factor. An AI can simulate empathy, but it doesn't feel it. For vulnerable clients, that gap matters. Firms should set expectations, provide human touchpoints, and route sensitive conversations to people.
What this means for legal teams
- Scope the workflow: Use AI for intake, chronology building, medical and police report summaries, demand letter drafts, and settlement-range benchmarking. Keep liability analysis and negotiation strategy with lawyers.
- Supervise, don't outsource judgment: Require human review on all legal assessments, especially causation, damages valuation, contributory negligence, and limitation issues.
- Data hygiene first: Map data flows. Lock down PHI/PII. Set retention limits. Document vendor terms, model access, and audit logs. Assign a data steward.
- Prompt libraries and templates: Standardize instructions for consistent outputs. Store firm-approved prompts and exemplars for demand letters, records requests, and adjuster communications.
- Calibrate the fee model: If AI compresses hours, rethink contingency thresholds, fixed fees for intake, or hybrid models that reflect efficiency without eroding perceived value.
- Client triage rules: Define when the bot routes to a person, e.g., catastrophic injuries, minors, suspected bad faith, languages outside supported coverage, or emotionally distressed callers.
- Disclosure and consent: Tell clients when AI is used, what's recorded, and how data is protected. Provide an easy path to a human at any time.
- Red-team your system: Test for hallucinations, bias in settlement estimates, and misclassification of injuries. Run periodic sampling and corrective training with human feedback.
- Train your people: Paralegals and intake staff should be fluent in supervising AI outputs, correcting errors, and escalating edge cases.
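The triage rules above can be expressed as a simple, auditable routing check. The sketch below is illustrative only: every field name, flag, and threshold is an assumption for demonstration, not Painworth's or any firm's actual criteria.

```python
# Minimal sketch of intake triage routing. All field names and rules
# here are illustrative assumptions, not any firm's actual criteria.

SUPPORTED_LANGUAGES = {"en", "fr", "es"}  # hypothetical coverage

def route_intake(call: dict) -> str:
    """Return 'human' if the call must go to a person, else 'assistant'."""
    if call.get("injury_severity") == "catastrophic":
        return "human"
    if call.get("caller_is_minor"):
        return "human"
    if call.get("suspected_bad_faith"):
        return "human"
    if call.get("language") not in SUPPORTED_LANGUAGES:
        return "human"
    # A distress score above a set threshold escalates to a person.
    if call.get("distress_score", 0) >= 0.8:
        return "human"
    return "assistant"
```

Keeping the rules in one declarative function makes them easy to review, log, and update as the firm's escalation policy evolves.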
Limits and next steps
Alberta's sandbox gives room to test, measure, and course-correct. The real proof will come from outcomes: client satisfaction, time-to-resolution, error rates, and complaint data. If those look good, expect broader adoption and tighter standards to follow.
For now, the safe move is pragmatic experimentation. Start with low-risk segments, measure relentlessly, and keep humans in the loop.
If you want to go deeper
- Law Society of Alberta Innovation Sandbox - framework and exemptions.
- AI for Legal - practical training and tooling for legal professionals evaluating AI in their workflows.