Alberta's AI "DAVID" enters personal injury: what legal teams need to know
Alberta has approved an AI chatbot, DAVID (Digital Attorney for Victim Injury Disputes), to assist with personal injury matters under the Law Society of Alberta's Innovation Sandbox. Operated by Painworth, DAVID answers intake calls immediately, gathers facts, estimates potential outcomes, and then requests a fee. Every matter is supervised by one of the firm's three human lawyers, and the service is restricted to Alberta.
Co-founder Michael Zouhri, who was injured by a drunk driver in 2019, built the tool after struggling to find responsive, affordable help. The pitch is simple: 24/7 intake, multilingual support, and faster, more consistent guidance on routine claims like motor vehicle accidents and slip-and-falls.
Regulatory posture: exemptions and scope
The Law Society of Alberta granted exemptions through its sandbox to allow a non-lawyer-owned entity to operate a firm and to let an unlicensed AI assistant help the public with personal injury claims. Both exemptions remain subject to Law Society oversight and guidance, and services are Alberta-only. Other sandbox approvals include AI tools such as Philer (real estate transactions) and Jointly (marriage contracts). Similar sandbox programs exist in British Columbia, Manitoba, and Ontario.
Where AI fits, and where it doesn't
- Good candidates for AI support: Client intake, preliminary fact-gathering, triage, template-driven communications, basic precedent lookups, and settlement-range benchmarking on well-trodden issues.
- Keep human-led: Legal analysis tied to novel facts, negotiations with material judgment calls, liability disputes, damages strategy, privilege-sensitive advice, and final opinions or filings.
- Supervision duty: Treat DAVID as a legal assistant. Document lawyer review at key checkpoints (intake validation, liability assessment, settlement position, final client communications).
Benefits cited, and the fine print
Proponents point to cost and time savings: tasks billed for hours can be compressed into minutes with an AI assistant. Personal injury often settles without trial and leans on established precedents, which makes it a practical testing ground.
But there are trade-offs. Professors and practitioners flag privacy risks, false confidence from polished but incorrect outputs, and the "empathy illusion": AI can sound caring without actually understanding context or risk. Human oversight in the sandbox is a guardrail, not a guarantee.
Operational checklist for Alberta PI teams
- Client intake: Provide a clear pre-advice disclaimer, consent to recording, and scope/limitation notice (Alberta-only; not legal advice until engagement is confirmed).
- Conflicts: Run conflict checks before any substantive advice or fee request. Ensure the AI flow pauses pending clearance.
- Engagement terms: Plain-language fee disclosure, contingency structure (if applicable), and who is responsible for what (AI assistant vs. supervising lawyer).
- Privacy and security: Minimize data collection, encrypt in transit/at rest, restrict access by role, and schedule deletion for non-clients. Use a data processing agreement with the vendor and document a privacy impact assessment.
- Quality control: Require lawyer sign-off for liability analysis, quantum ranges, and settlement recommendations. Keep an auditable trail of prompts, versions, and human edits.
- Jurisdictional guardrails: Geofence or detect non-Alberta matters early; hand off with a referral policy rather than offering cross-border advice.
- Escalation rules: If the matter is complex, disputed, or emotionally escalated, route to a human within minutes, not hours (see the routing sketch after this list).
- Bias and fairness: Periodically sample outcomes across demographics and injury types. Adjust prompts and data sources to avoid skewed ranges.
- Training and competence: Train lawyers and staff on supervising AI, spotting hallucinations, and correcting outputs. Refresh prompts and templates quarterly.
- Complaints and incidents: Maintain a rapid-response workflow for client complaints, misadvice, and privacy events, including notification timelines.
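Several of these gates (jurisdiction, conflicts, escalation) reduce to a deterministic routing step that runs before the model offers anything substantive. Here is a minimal sketch; the names (IntakeFacts, route_intake, the 0.7 distress threshold) are illustrative assumptions, not Painworth's actual implementation:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Route(Enum):
    CONTINUE_AI = auto()             # AI assistant may proceed with intake
    HOLD_PENDING_CONFLICTS = auto()  # no advice or fee request until cleared
    REFER_OUT = auto()               # non-Alberta matter: apply referral policy
    ESCALATE_HUMAN = auto()          # route to a supervising lawyer now

@dataclass
class IntakeFacts:
    province: str                    # self-reported jurisdiction
    conflicts_cleared: bool          # result of the conflicts check
    complexity_flags: list[str] = field(default_factory=list)  # e.g. "disputed liability"
    distress_score: float = 0.0      # 0..1 from a distress/sentiment classifier

def route_intake(facts: IntakeFacts) -> Route:
    """Apply the checklist gates in order: jurisdiction, conflicts, escalation."""
    # Jurisdictional guardrail: Alberta-only service; everything else is referred.
    if facts.province.strip().lower() not in {"ab", "alberta"}:
        return Route.REFER_OUT
    # Conflicts: pause the AI flow before any substantive advice or fee request.
    if not facts.conflicts_cleared:
        return Route.HOLD_PENDING_CONFLICTS
    # Escalation: complex, disputed, or emotionally escalated matters go to a
    # human within minutes.
    if facts.complexity_flags or facts.distress_score >= 0.7:
        return Route.ESCALATE_HUMAN
    return Route.CONTINUE_AI

# Example: an out-of-province caller is referred before any advice is given.
print(route_intake(IntakeFacts(province="SK", conflicts_cleared=False)))
# -> Route.REFER_OUT
```

The ordering is the design point: jurisdiction and conflicts are checked before the model says anything substantive, so a failed gate never produces advice, a fee request, or cross-border guidance.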
Metrics to track
- Intake-to-engagement time and drop-off rate.
- Settlement variance versus model-predicted ranges (a measurement sketch follows this list).
- Percentage of AI drafts requiring substantive correction.
- Client satisfaction and complaint rate post-resolution.
- Privacy/security incidents and near-misses.
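None of these metrics requires special tooling; they fall out of matter records if the lawyer sign-off step is logged. A minimal sketch for two of them, assuming illustrative field names (substantive_edit, pred_low, pred_high, settled_amount) rather than any vendor's schema:

```python
from statistics import mean

def draft_correction_rate(drafts: list[dict]) -> float:
    """Share of AI drafts that a supervising lawyer substantively corrected.
    Assumes each draft record carries a boolean 'substantive_edit' flag
    set during the documented sign-off step."""
    if not drafts:
        return 0.0
    return sum(d["substantive_edit"] for d in drafts) / len(drafts)

def settlement_variance(cases: list[dict]) -> float:
    """Mean relative gap between the actual settlement and the midpoint
    of the model-predicted range, across resolved matters."""
    gaps = []
    for c in cases:
        midpoint = (c["pred_low"] + c["pred_high"]) / 2
        gaps.append(abs(c["settled_amount"] - midpoint) / midpoint)
    return mean(gaps) if gaps else 0.0

# Example: one corrected draft out of two, and a settlement 10% off the midpoint.
print(draft_correction_rate([{"substantive_edit": True}, {"substantive_edit": False}]))  # 0.5
print(settlement_variance([{"pred_low": 40_000, "pred_high": 60_000, "settled_amount": 55_000}]))  # 0.1
```

Trending these numbers month over month is what turns "human oversight" from a compliance claim into evidence.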
What the stakeholders are saying
According to Painworth, DAVID secured its first client in December and supports nearly every language. Legal academics note that sandbox approvals are widening access to AI-enabled services while allowing regulators to keep them in a controlled environment.
Experts also stress caution: AI can be confidently wrong and may create a false sense of being "heard." The consensus: proceed, but keep meaningful human supervision and clear client communication at the center.
Bottom line for legal professionals
Think of DAVID as structured intake plus precedent-informed guidance with a human safety net. If you adopt similar tools, focus on supervision, privacy-by-design, and outcome monitoring, not novelty. The firms that win here will pair speed with professional judgment, and prove it with data.
Want practical guidance on integrating tools like this into your workflows? Explore AI for Legal for playbooks and training.