OSFI's AI Rulebook Forces Canadian Insurers to Account for Every Model by 2027

Canada's insurers face strict AI oversight: every model is a governed asset, with inventories, risk tiers, fairness tests, and independent validation. Deadline: May 1, 2027.

Published on: Dec 12, 2025

Canada's insurance sector just got a clear message: every model you use is now a governed asset. OSFI's Guideline E-23, Model Risk Management, sets a hard standard across life, P&C, and fraternal companies, with full compliance due by May 1, 2027.

The aim is simple: reduce model risk, improve accountability, and prevent hidden failures, especially from AI and machine learning systems driving pricing, underwriting, claims, and catastrophe risk.

Who's in scope, and what counts as a model

If a model influences decisions or risk, it's in scope. That includes actuarial pricing engines, underwriting algorithms, claims prediction, reserving tools, ALM, IFRS 17 components, and vendor catastrophe models.

OSFI calls out the surge in AI and self-learning systems. These require stricter controls because they can absorb bias, shift over time, and fail in ways that are hard to see early.

The core requirements (what OSFI expects to see)

  • Complete model inventory: Maintain a living catalog of every material model. Track ownership, purpose, data sources, use cases, limits, and change history.
  • Risk tiering: Rate models by financial exposure, autonomy, complexity, and failure impact. High-impact models get deeper validation and tighter monitoring.
  • Data quality and fairness: Prove data is accurate, relevant, representative, lawful, and refreshed. Detect and mitigate bias that could drive unfair outcomes.
  • Documented development: Clear standards, version control, explainability, and traceability. Technical, business, compliance, legal, and risk all have defined roles.
  • Independent validation: Reviewers cannot be the builders. Validate at initial deployment, after material changes, on performance issues, after data shifts, and on a schedule tied to risk.
  • Ongoing monitoring: Set thresholds, alerts, and action plans. Track drift, stability, and performance decay. Escalate and roll back when limits break.
  • AI-specific controls: For autonomous systems, control re-parameterization, guardrails, human-in-the-loop where needed, and robust fallback procedures.
  • Senior management accountability: Management owns the program. Assign clear roles, fund skills for novel tech, and report material model risk to the board.
  • Vendor parity: Third-party models are not exempt. Demand transparency, document assumptions, and validate as if you built them in-house.
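The first two requirements above, a living inventory and a risk tiering scheme, lend themselves to a concrete starting point. E-23 does not prescribe a file format for the inventory or a scoring formula for tiers, so everything below (field names, the 1-to-3 dimension ratings, the additive score, and the tier cutoffs) is an illustrative sketch, not an OSFI requirement:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """One entry in the model inventory; fields mirror the bullet list above."""
    model_id: str
    owner: str
    purpose: str
    data_sources: list[str]
    use_cases: list[str]
    known_limits: list[str]
    change_history: list[tuple[date, str]] = field(default_factory=list)

def risk_tier(financial_exposure: int, autonomy: int,
              complexity: int, failure_impact: int) -> str:
    """Rate each dimension 1 (low) to 3 (high); cutoffs here are illustrative."""
    score = financial_exposure + autonomy + complexity + failure_impact
    if score >= 10:
        return "Tier 1 (high): deep validation, tight monitoring"
    if score >= 7:
        return "Tier 2 (medium): standard validation cycle"
    return "Tier 3 (low): lightweight review"

# Hypothetical pricing-engine entry for demonstration
pricing_engine = ModelRecord(
    model_id="PRC-001", owner="Chief Actuary",
    purpose="Auto pricing", data_sources=["policy_db", "claims_db"],
    use_cases=["new business quotes"],
    known_limits=["not calibrated for commercial fleets"],
)
pricing_engine.change_history.append((date(2026, 1, 15), "Retrained on 2025 data"))
print(risk_tier(financial_exposure=3, autonomy=2, complexity=3, failure_impact=3))
# Tier 1, since 3 + 2 + 3 + 3 = 11 >= 10
```

The point of even a minimal scheme like this is defensibility: the same four dimensions, scored the same way, for every model in the catalog.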

What this changes for pricing, underwriting, and claims

Actuarial and underwriting teams will need tighter pipelines from data intake to deployment. Bias checks, feature controls, and explainability won't be optional add-ons; they're gates to production.

Claims analytics and fraud detection models must prove they don't unintentionally disadvantage customer segments. If your training data bakes in historical patterns of unfair treatment, expect to rework the stack.
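E-23 leaves the choice of fairness metric to the institution. One common screening check, shown here as a sketch with made-up numbers, is the disparate impact ratio: compare favourable-outcome rates across segments and flag ratios below a threshold (0.80 is a widely used rule of thumb, not an OSFI figure):

```python
def approval_rate(decisions: list[bool]) -> float:
    """Share of favourable outcomes (True) in a segment's decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower favourable-outcome rate to the higher one; 1.0 = parity."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical claims fast-track decisions: True = fast-tracked
segment_a = [True] * 80 + [False] * 20   # 80% fast-tracked
segment_b = [True] * 50 + [False] * 50   # 50% fast-tracked
ratio = disparate_impact_ratio(segment_a, segment_b)
print(f"{ratio:.2f}")  # 0.62, below a 0.80 screening threshold -> investigate
```

A failing ratio is a trigger for investigation, not an automatic verdict; the follow-up is understanding which features and training patterns drive the gap.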

Cat models will face deeper challenge on assumptions, exposure data quality, and tail behavior. Vendor black boxes won't pass without evidence and documented challenge.

A practical action plan

  • Now: Stand up model risk ownership. Define what "model" means at your firm. Build the inventory and set a simple, defensible risk rating scheme.
  • Next 3-6 months: Map data lineage for high-impact models. Establish validation standards, monitoring metrics, and breach playbooks. Identify skills gaps.
  • 6-12 months: Retrofit your top 10-20 models with full documentation, independent validation, fairness testing, and monitoring. Extend to vendor models.
  • 12-18 months: Automate reporting to management and the board. Embed model risk controls into change management and release processes.
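For the monitoring metrics the plan calls for, a standard choice for detecting input or score drift is the Population Stability Index (PSI), which compares a binned distribution at deployment against the same bins today. The bin values below are made up, and the 0.10/0.25 cutoffs are a common industry rule of thumb rather than anything E-23 specifies:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over matched histogram bins.

    Both inputs are bin proportions summing to 1; larger PSI = more drift.
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Score distribution at deployment vs. today, in five bins (illustrative)
baseline = [0.20, 0.20, 0.20, 0.20, 0.20]
current  = [0.10, 0.15, 0.20, 0.25, 0.30]
drift = psi(baseline, current)

# Common rule of thumb: < 0.10 stable, 0.10-0.25 investigate, > 0.25 act
status = "escalate" if drift > 0.25 else ("investigate" if drift > 0.10 else "stable")
print(f"PSI = {drift:.3f} -> {status}")
```

Wiring a check like this to the thresholds, alerts, and breach playbooks described above turns "ongoing monitoring" from a policy statement into a scheduled job.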

Governance and reporting that will stand up to scrutiny

Expect to show clear escalation paths, evidence of challenge, and why your thresholds make sense. Boards should see regular, plain-language reports on model inventory, issues, exceptions, and remediation progress.

For AI systems, be ready to explain how decisions are made, what constraints exist, and when a human must step in.
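One way to make the "when a human must step in" rule auditable is to encode it as an explicit routing gate in front of the model's decisions. The shape below is a sketch under assumed names and thresholds (a confidence floor and a dollar cap, both invented for illustration), not a prescribed OSFI control:

```python
from dataclasses import dataclass

@dataclass
class ClaimDecision:
    claim_id: str
    model_score: float   # model's approval confidence, 0..1
    amount: float        # claim value in dollars

def route(decision: ClaimDecision,
          conf_floor: float = 0.90, amount_cap: float = 25_000) -> str:
    """Illustrative guardrail: auto-decide only when the model is confident
    and the stakes are low; otherwise hand off to a human adjuster."""
    if decision.model_score < conf_floor or decision.amount > amount_cap:
        return "human_review"
    return "auto_approve"

print(route(ClaimDecision("C-1", model_score=0.97, amount=4_200)))   # auto_approve
print(route(ClaimDecision("C-2", model_score=0.81, amount=4_200)))   # human_review
```

Because the thresholds live in one named function, they are easy to document, challenge, and show to a validator or the board.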

Common gaps to fix early

  • Hidden spreadsheets and shadow models running critical decisions without oversight.
  • Vendor models with limited documentation or expired validation.
  • Models still using stale pre-pandemic assumptions or uncalibrated post-event data.
  • No clear triggers for revalidation after data, product, or market shifts.

Why this matters

Models are now the operating system of your insurance business. OSFI is making sure they don't quietly steer you into legal, financial, or reputational damage, especially through biased or drifting AI systems.

Deadline and reference

Guideline E-23 takes effect May 1, 2027. Build your roadmap now and start with the highest-impact models first.

See the Office of the Superintendent of Financial Institutions (OSFI) for official guidance and updates.

Need to upskill your teams on AI oversight?

If you're building or buying AI models, your actuaries, underwriters, and risk teams will need new skills. Explore focused training by role: AI courses by job.

