Ontario's IPC sets guardrails for AI in health care: what providers need to do now
AI is moving into clinics, wards and admin workflows. It can lighten paperwork and speed decisions, but it also opens the door to privacy, security and legal risks. Ontario's Office of the Information and Privacy Commissioner (IPC) is stepping in with guidance to keep patients safe while giving teams room to innovate.
Two resources set the tone: new Principles for the Responsible Use of AI (developed with the Ontario Human Rights Commission) and guidance on AI notetakers, or "scribes," including a checklist covering procurement, development and use. The goal is simple: earn and keep public trust by respecting privacy and human rights while improving care.
Why this matters now
A recent survey from the Canadian Medical Association and the Canadian Federation of Independent Business showed what many providers see daily: a heavy administrative load. Ninety percent of nearly 2,000 physicians surveyed reported significant paperwork, totaling about 20 million hours a year. That's time away from patients.
About half of physicians see AI as a way to reduce that burden. The other half see real privacy, security and legal risks in clinical settings. Many also want help vetting products before they touch patient data.
What the IPC released
Principles for the Responsible Use of AI. These are intended to guide organizations to build, buy and implement AI that respects privacy and human rights. Used well, principles like necessity, proportionality, accountability, explainability, and fairness keep teams aligned and risks contained.
Guidance on AI scribes. The IPC's checklist targets practical steps across the AI life cycle: procurement due diligence, privacy and security controls, ongoing monitoring, and governance. It's designed to help health custodians meet Ontario's health privacy law, reduce bias and errors, and keep trust intact.
Where AI fits in care today
AI use in health care tends to fall into four groups:
- General AI (e.g., large language models for drafting notes or quick lookups).
- General clinical AI (health system data embedded into a chatbot for staff or patient queries).
- Clinical AI tools (decision support, triage, risk stratification).
- AI embedded in devices (imaging, wearables, bedside tools).
Adoption is uneven, and benefits can be concentrated in a minority of users. That's why evidence standards matter: if only 10% of users see a large benefit, leaders need clear data to judge the cost, the trade-offs, and what to stop doing to fund the new tool.
Faster evaluation, smarter access to data
To deliver value, Ontario and Canada need a practical way to evaluate AI without waiting years. Traditional randomized trials are valuable, but long timelines can make results stale by the time they land. Shorter, well-designed studies and staged rollouts can get answers sooner while protecting patients.
There's also a call for a "playbook" that clarifies access to medical data for research and development with strong privacy controls. The aim: enable improvement while maintaining consent, transparency and patient rights.
Frameworks that protect patients and reduce regret
Data provenance and consent. Training data often comes from mixed sources with different consent paths. Some consents aren't clear or meaningful. If you can't trace where data came from and on what terms, risk rises fast.
Procurement truth vs. vendor claims. Many tools claim compliance, but Ontario's health information custodians answer to Ontario's own health privacy law, the Personal Health Information Protection Act (PHIPA). Certifications from other jurisdictions may not map cleanly. Validate, don't assume.
Clear disclosure to patients. Patients want to know if AI is in the loop. Disclose purpose, what data is shared with third parties and why, key risks like bias, and safeguards in place. A national survey found 88% of Canadians worry about their data being used to train AI, with many extremely concerned. Expect questions and be ready with real answers.
Liability and explainability. Clinicians shouldn't have to explain how a model works under the hood. They should explain material risks that affect patient care: automation bias, data leakage, or re-identification worries. Review contracts so liability isn't quietly shifted onto clinicians or the institution without proper protections.
Practical checklist: adopting AI scribes and similar tools
- Define the use case and success metrics. Target a narrow workflow (e.g., clinic note drafting). Set measurable goals (minutes saved per encounter, error rates, patient satisfaction).
- Run a privacy impact assessment early. Map data flows end-to-end. Identify legal basis, consent needs, and high-risk processing. Document mitigations.
- Minimize data. Collect the least PHI needed. Prefer on-device or edge processing when feasible. Turn off data retention by default.
- Vendor due diligence. Validate data residency, encryption at rest/in transit, access controls, audit logs, model update process, and secure development practices. Confirm the vendor won't train on your PHI unless explicitly agreed.
- Contract protections. Lock down data use, ownership and IP. Limit onward transfers. Set breach notification timelines, security standards, and audit rights. Cap liability thoughtfully; require vendor indemnities for their faults.
- Bias and clinical validation. Test on your population. Measure accuracy across subgroups. Compare against current standard of care. Keep a human in the loop.
- Operational safeguards. Set retention and deletion schedules. Control role-based access. Log prompts/outputs for quality review. Provide fallback workflows if the AI is down. (See the configuration sketch after this list.)
- Transparent patient notice. Tell patients where AI is used, why, and what happens with their data. Offer an alternative when feasible.
- Security controls. MFA, least-privilege access, endpoint protection, secure APIs, and regular penetration testing. Verify third-party sub-processors.
- Monitoring and incident response. Track quality, error rates, complaints, and drift. Set thresholds that trigger retraining or rollback (see the monitoring sketch after this list). Rehearse incident playbooks.
- Training and accountability. Train clinicians and staff on proper use and limits. Name an owner for risk, quality and compliance. Review quarterly.
- Scale by evidence. Expand only after hitting predefined safety and ROI targets. Publish internal learnings to build trust.
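To make the operational safeguards item concrete, here is a minimal sketch in Python. It is not drawn from the IPC guidance or any vendor product; the ScribePolicy object, role names and audit log are illustrative assumptions. The point is that retention is off by default, access is role-based, and every prompt/output pair leaves an auditable trail.

```python
# Minimal sketch (illustrative only): retention off by default, role-based
# access, and prompt/output logging for quality review.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ScribePolicy:
    retain_transcripts: bool = False          # retention off by default
    retention_days: int = 0                   # delete immediately unless overridden
    allowed_roles: set = field(default_factory=lambda: {"clinician", "privacy_officer"})

def can_access(role: str, policy: ScribePolicy) -> bool:
    """Role-based access: only named roles may view AI scribe output."""
    return role in policy.allowed_roles

def log_encounter(audit_log: list, user: str, prompt_id: str, output_id: str) -> None:
    """Append a timestamped audit record so prompts/outputs can be reviewed later."""
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_id": prompt_id,
        "output_id": output_id,
    })

def deletion_due(created: datetime, policy: ScribePolicy) -> bool:
    """True when a stored record has passed its retention window."""
    if not policy.retain_transcripts:
        return True
    return datetime.now(timezone.utc) - created > timedelta(days=policy.retention_days)

# Example: a clinician's use of the scribe is logged under the default policy.
policy = ScribePolicy()
audit: list = []
if can_access("clinician", policy):
    log_encounter(audit, user="dr_a", prompt_id="p-001", output_id="o-001")
```

Whatever form your actual controls take, the design choice worth copying is that the safe behaviour is the default and any exception has to be configured deliberately.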
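For the monitoring item, the sketch below shows threshold-based review, again with assumed metric names and limits that a local quality team would set. The IPC does not prescribe these numbers; the point is that "retrain or roll back" decisions are predefined rather than improvised after an incident.

```python
# Minimal sketch (illustrative only): compare periodic quality metrics against
# predefined thresholds and decide whether to continue, retrain, or roll back.

THRESHOLDS = {
    "note_error_rate": 0.05,   # max acceptable clinician-flagged error rate
    "complaint_rate": 0.02,    # max patient complaints per encounter
    "drift_score": 0.15,       # max distribution shift vs. the validation baseline
}

def review(metrics: dict) -> str:
    """Return an action based on which thresholds are breached."""
    breaches = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, 0.0) > limit]
    if not breaches:
        return "continue"
    # Safety-relevant breaches trigger rollback; others trigger a retraining review.
    if "note_error_rate" in breaches or "complaint_rate" in breaches:
        return "rollback"
    return "retrain"

# Example weekly check: quality is fine but drift exceeds its limit.
weekly = {"note_error_rate": 0.03, "complaint_rate": 0.01, "drift_score": 0.22}
print(review(weekly))  # -> "retrain"
```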
How to move forward
Start with low-risk, high-pain admin tasks. Pilot with small teams, strong safeguards, and clear exit criteria. Share results with clinical leadership and privacy teams, then adjust.
Adopt the IPC principles system-wide. Build procurement templates that reflect Ontario's health privacy law. Make disclosure and patient choice standard, not an afterthought.
Most of all, keep trust front and center. That means real transparency, careful contracts, strong security, and continuous quality checks.
Useful resources
Review the IPC's current guidance and public expectations, and align your next pilot accordingly.
The bottom line
AI can help reduce paperwork and improve the patient experience, but only if it's deployed with care. Ontario's IPC is laying out practical guardrails. Use them to make better choices now, and to keep patients' trust as you scale.