AI is surging at work. Governance isn't. That's your legal problem.
Canadian employees are adopting AI tools at scale. KPMG's Generative AI Adoption Index reports that more than half of workers use AI on the job, while 93 percent of enterprises say they've adopted generative AI. Yet only two percent report meaningful returns.
The signal is clear: productivity gains exist, but the absence of rules, controls, and verification is eating the ROI - and amplifying legal risk.
Two fronts to control: official tools and "shadow use"
Teresa Scassa, Canada Research Chair in Information Law and Policy (University of Ottawa), says employers must manage AI on two levels: procured systems and employee-selected tools. Contracts and configurations must align with law and business needs, and policy has to address what staff can and cannot do with external tools.
That can include outright bans on shadow use. It also means setting conditions for allowed use: what data employees may share, what outputs must be verified, and when approval is required. Without this, risk falls through the cracks.
Privacy law already applies - and AI amplifies the blast radius
AI doesn't suspend privacy law. It magnifies mistakes. Scassa points to real-world failures where AI note-taking tools captured confidential board discussions and then emailed transcripts broadly. In one Ontario case, a doctor's meeting bot kept joining patient visits via calendar invites after he had left the hospital - a direct privacy breach.
These are process failures, not edge cases. The legal exposure spans unauthorized collection, improper disclosure, retention missteps, and weak access controls. For federal private-sector privacy obligations, see PIPEDA (OPC).
Bias, monitoring, and hiring: high-stakes zones
Valerio De Stefano, Canada Research Chair in Innovation, Law and Society (Osgoode Hall Law School), warns that off-the-shelf tools are often deployed with little understanding of how they actually work. That gap is risky where systems inform hiring, performance management, or discipline.
Productivity trackers tend to enforce a narrow model of an "ideal" worker. That ignores disability, parental responsibilities, or needed accommodations - a path to human rights complaints and health and safety issues. "These technologies are never one hundred percent," De Stefano notes, and they can mislead decision-makers without robust checks.
Procurement: make the contract do real work
- Demand a data map: what is collected, from whom, where it's stored, retention periods, who accesses it, and cross-border flows.
- Secure privacy and security terms: purpose limitation, data minimization, encryption, audit rights, breach notification timelines, deletion on demand.
- Prohibit vendor training on your data unless expressly approved; require isolation of your datasets.
- Lock in accuracy, explainability (where feasible), and human-in-the-loop requirements for any decision support.
- Mandate bias testing, documentation, and remediation; require version change notices and re-testing after major updates.
- Address privilege and confidentiality: no ingestion of privileged material into external models without legal sign-off.
- Plan for exit: data return/erasure, model artifacts you're entitled to, and support for audits or litigation holds.
Policy: set boundaries employees can follow
- Scope: list approved tools; define prohibited uses; clarify what "shadow use" means and the consequences of breaches.
- Data rules: ban entry of personally identifiable information, customer data, trade secrets, or privileged matter into non-approved systems.
- Verification: require fact-checking, citations where possible, and sign-offs for high-impact outputs (client advice, HR actions, regulatory filings).
- Records: specify when AI outputs become records of the organization and how they are stored for discovery and retention.
- Transparency with workers: explain what each system does, the data it uses, and how outputs inform decisions.
- Accessibility and accommodation: ensure monitoring tools account for disabilities, caregiving, and legitimate workflow differences.
Monitoring and decision support: controls that hold up
- Human review: require a qualified decision-maker to validate AI-influenced outcomes, especially in hiring, promotion, and discipline.
- Impact/risk screening: for high-risk use (hiring, monitoring, safety), complete a privacy or algorithmic impact assessment; in some provinces, this may be legally required.
- Explain the metric: publish the benchmark logic for productivity tools; document reasonable accommodation paths.
- Appeals: create a clear route for employees to challenge AI-assisted decisions, with timelines and escalation.
- Union engagement: where applicable, consult and bargain when monitoring touches terms and conditions of employment.
Operational controls your legal team should drive
- Role-based access for AI services; disable default data sharing and external contact syncing.
- Meeting tools: default "off" for recording/transcription; require explicit consent and scoping; exclude "in camera" sessions by policy and config.
- Labeling: watermark AI-generated drafts; separate workspaces for sensitive matters; prohibit personal accounts for official AI tools.
- Incident playbook: treat misdirected transcripts or data leaks like any other breach - contain, notify, document, remediate.
- Audit logs: retain logs of prompts, outputs, approvals, and model versions used for material decisions.
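The audit-log item above lends itself to a lightweight, structured record. Below is a minimal sketch of what such a record could capture, assuming an append-only JSONL log; the field names, the example tool name, and the storage choice are illustrative assumptions, not a standard or any vendor's API, and should be adapted to your records-retention and discovery requirements.

```python
# Minimal sketch of an AI decision audit record (illustrative field names only).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    timestamp: str       # when the AI-assisted step occurred (UTC, ISO 8601)
    actor: str           # employee who ran the tool
    tool: str            # approved tool name, e.g. "vendor-x-assistant" (hypothetical)
    model_version: str   # model/version reported by the vendor at run time
    prompt: str          # prompt or input reference (redact personal data per policy)
    output_ref: str      # pointer to the stored output (document ID, not raw text)
    approver: str        # qualified human who signed off on the outcome
    decision: str        # what the output informed, e.g. "candidate shortlist"

def append_audit_record(path: str, record: AIDecisionRecord) -> None:
    """Append one record as a JSON line; retain per your records schedule."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    rec = AIDecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        actor="j.doe",
        tool="vendor-x-assistant",
        model_version="2025-01",
        prompt="Summarize interview notes for role R-102",
        output_ref="dms://hr/r-102/summary-v3",
        approver="hr.manager",
        decision="candidate shortlist",
    )
    append_audit_record("ai_audit.jsonl", rec)
```

A log in this shape supports the human-review and appeals controls above: for any material decision, you can show which model version produced what, who approved it, and when.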
Worker dialogue isn't cosmetic - it reduces risk
De Stefano recommends structured dialogue before and during deployment. Share what a system does, what data it touches, and how its outputs will be used day to day. This counters vendor hype and surfaces workflow realities your contracts and policies must address.
Conversation also builds accountability. If workers understand standards and appeal paths, you reduce friction, errors, and grievances.
Quick checklist for counsel
- Inventory: who's using AI, for what, and with which data. Include shadow tools.
- Triage: flag high-risk use cases (hiring, monitoring, client advice, health data).
- Contract: plug data, security, bias, and exit gaps with vendors.
- Policy: publish allowed uses, verification steps, and no-go zones.
- Training: teach staff what data never goes into AI, how to verify outputs, and how to report issues.
- Testing: run bias and accuracy checks; re-test on major updates.
- Govern: assign owners for risk, compliance, and incident response; review quarterly.
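The testing item in this checklist can be backed by simple, repeatable tooling. Below is a minimal sketch of one kind of periodic screen - comparing selection rates across groups for an AI-assisted step - run on each major model update. The 0.8 disparity threshold is a common screening heuristic borrowed for illustration, not a legal standard under Canadian law, and the group labels and sample data are hypothetical; any flag it raises should feed documentation and human review, not automatic conclusions.

```python
# Minimal sketch of a periodic selection-rate screen for an AI-assisted step.
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group label, selected?) pairs from one review period."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below threshold * the highest group rate."""
    if not rates:
        return []
    top = max(rates.values())
    return [g for g, r in rates.items() if top > 0 and r < threshold * top]

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(sample)
    print(rates, flag_disparity(rates))  # group_b flagged in this sample
```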
Training and enablement
Governance fails when people guess. Pair policy with practical training and sandboxes so teams learn safe, compliant workflows before production use. Curated role-based programs can shorten this curve.
If you're building a rollout plan, see these AI courses by job to support role-specific training and internal adoption.
Bottom line
Employees are using AI. The question is whether your organization sets the rules or pays the price. Start with contracts, policy, verification, and worker dialogue - then audit and adjust on a regular cadence.
And keep privacy law front and center. The tech is new; your obligations are not.