From uncertainty to trust: Investment in HAIVN to safely scale AI in Canadian healthcare
Canada is slipping behind peers in deploying safe, effective AI in healthcare. Fragmented provincial rules, minimal post-market oversight, and clinician mistrust are slowing adoption and delaying tools that could improve access and outcomes. We have a straightforward fix within reach: fund and implement the Health AI Validation Network (HAIVN), the Canadian Association of Radiologists' framework for a national oversight body focused on post-market validation.
Why HAIVN now
HAIVN is an independent, clinician-led model for monitoring and improving AI used in care. It brings vendors, providers, policymakers, and patients to the same table, with clear accountability and transparent reporting. The goal is simple: protect safety and equity while making it easier to deploy AI tools that actually work in real clinical settings.
- Clinician-led validation in real-world settings
- Continuous monitoring to detect performance drift and bias early
- National standards that reduce duplication and speed adoption
- Independent governance free from vendor influence
The four phases of HAIVN oversight
- Phase 1: Initial Post-Approval Monitoring
  - Deployment audit: confirm AI tools follow approved protocols
  - Baseline challenge: set credible performance benchmarks
  - Performance metrics: define indicators for long-term evaluation
- Phase 2: Continuous Monitoring
  - Drift testing: regular checks against challenge datasets (see the sketch after this list)
  - Data collection: ongoing analysis of performance metrics
  - User feedback: structured input from healthcare providers
  - Ethical audits: verify privacy and ethical compliance
  - Adverse event reporting: independent reporting mechanisms
- Phase 3: Performance and Safety Re-Evaluation
  - Recurrent review: annual verification of standards
  - Longitudinal assessment: work with health technology assessors for value-based evaluation
  - Re-assessment triggers: clear thresholds for immediate review
- Phase 4: Ongoing Improvement and Re-Certification
  - Technology updates: review substantial model changes
  - Re-certification: ongoing checks for safety, efficacy, and compliance
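To make the Phase 2 drift testing step concrete, here is a minimal sketch of one recurring check against a fixed challenge dataset. It assumes a baseline sensitivity recorded during Phase 1 and a tolerance agreed by the validation group; the example data, metric, and threshold values are illustrative placeholders, not part of the HAIVN framework itself.

```python
from dataclasses import dataclass

@dataclass
class DriftCheck:
    """Result of one scheduled drift test against the challenge dataset."""
    metric: str
    baseline: float   # value recorded at deployment (Phase 1 baseline challenge)
    current: float    # value measured in this monitoring cycle
    tolerance: float  # maximum allowed absolute drop before escalation

    @property
    def drifted(self) -> bool:
        return (self.baseline - self.current) > self.tolerance


def sensitivity(predictions: list[int], labels: list[int]) -> float:
    """True positive rate on the challenge dataset (1 = positive finding)."""
    true_pos = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    actual_pos = sum(labels)
    return true_pos / actual_pos if actual_pos else 0.0


# Illustrative values only: the labels come from the fixed challenge set,
# the predictions from the deployed model during this monitoring cycle.
labels      = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
predictions = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]

check = DriftCheck(
    metric="sensitivity",
    baseline=0.90,                             # from the baseline challenge
    current=sensitivity(predictions, labels),  # from this monitoring cycle
    tolerance=0.05,                            # re-assessment trigger
)

if check.drifted:
    print(f"{check.metric} drifted: {check.current:.2f} vs baseline "
          f"{check.baseline:.2f} - escalate for re-evaluation")
else:
    print(f"{check.metric} within tolerance: {check.current:.2f}")
```

The same pattern extends to specificity, calibration, or subgroup-level metrics, which is how a monitoring program can surface bias as well as overall performance drift.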
What gets certified
HAIVN's certification process raises the bar before and after deployment. It applies to diagnostic tools, predictive analytics, clinical decision support systems, and administrative AI. The intent is to make sure what reaches patients and clinicians is safe, effective, and ethically sound over time, not just at launch.
Momentum in Ottawa
Members of the CAR met with federal officials in Ottawa in October to present this and other priorities ahead of the 2026 federal budget. In one meeting, CAR President Dr. Alison Harris met with her local MP, Taleeb Noormohamed, who serves as Parliamentary Secretary to Evan Solomon, the Minister of AI and Digital Innovation, to discuss the need for government and clinician-led oversight.
"We discussed the Health AI Validation Network (HAIVN) and the importance of having a national entity that provides oversight for AI, that it has an advisory group with representation from multiple specialties, and that is adequately funded as outlined in our 2026 Pre-Budget Submission."
HAIVN has been raised consistently with the federal government over the past several years. In 2023, Dr. Jaron Chong, AI Standing Committee President, led a session on Parliament Hill on the need for national oversight. This year, the CAR formally requested $50 million over five years to establish HAIVN's administrative framework, backed by support from the Canadian Medical Association, the Canadian Association of Medical Radiation Technologists, and Sonography Canada.
What healthcare leaders can do now
Whether you run a clinic, a department, or an entire health system, you can prepare today while policy moves forward. Start with practical steps that reduce risk and build confidence.
- Map current and planned AI use across your organization, including shadow tools.
- Stand up a clinician-led validation process and align it with HAIVN's pillars.
- Define safety and performance thresholds, plus triggers for re-assessment (a minimal sketch follows this list).
- Plan for continuous monitoring: drift testing, audit trails, and adverse event reporting.
- Engage privacy, ethics, and patient partners early; publish governance policies.
- Train frontline teams on limitations, escalation paths, and documentation.
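One practical way to act on the thresholds and re-assessment triggers above is to record them as plain, versioned data alongside the audit trail, so they can be reviewed at each monitoring cycle and during re-certification. The tool name, fields, and values below are hypothetical placeholders for illustration, not a prescribed HAIVN format.

```python
import json
from datetime import date

# Hypothetical governance policy for one deployed AI tool. Thresholds and
# triggers would be set by the clinician-led validation group, not hard-coded.
policy = {
    "tool": "chest-xray-triage",  # placeholder identifier
    "performance_thresholds": {
        "sensitivity_min": 0.85,
        "false_positive_rate_max": 0.20,
    },
    "reassessment_triggers": [
        "metric below threshold on two consecutive monitoring cycles",
        "any confirmed patient-safety adverse event",
        "substantial model or data-pipeline update from the vendor",
    ],
    "review_cadence_days": 90,
}

# Minimal audit-trail entries from one monitoring cycle.
audit_log = [
    {"date": str(date(2026, 3, 1)), "event": "drift_test",
     "sensitivity": 0.88, "within_thresholds": True},
    {"date": str(date(2026, 3, 9)), "event": "adverse_event_report",
     "severity": "moderate", "forwarded_to": "independent reporting mechanism"},
]

# Persisting both as JSON keeps the policy and its evidence reviewable
# during annual re-evaluation and re-certification.
print(json.dumps({"policy": policy, "audit_log": audit_log}, indent=2))
```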
For broader regulatory context, see Health Canada's official guidance on its approach to adaptive machine learning-enabled medical devices.
The bottom line
Canada can move from uncertainty to trust with a national, clinician-led oversight network. Fund HAIVN, standardize post-market validation, and give providers a clear path to adopt safe AI that improves care. The framework exists. It's time to put it to work.