AI in Canadian Finance: What the FCAC-GRI Workshop Means for Your Risk, Compliance, and Consumer Strategy
A workshop co-hosted by the Financial Consumer Agency of Canada (FCAC) and the Global Risk Institute (GRI) put a spotlight on how the sector can adopt AI responsibly while protecting consumers. The interim report distills opportunities, emerging risks, and practical guardrails aimed at financial well-being and consumer protection. The message was clear: collaboration beats siloed experimentation.
More than 55 representatives joined, spanning major banks, tech firms, advocacy groups, supervisors, law firms, and academia. For finance leaders, this is a signal: AI governance isn't a back-office project; it's a front-line issue that affects trust, margins, and regulatory posture.
Three principles to guide AI adoption
- Inclusion by design: Build consumer interests into AI tools from day one, not as a compliance patch bolted on at the end.
- Innovation and resilience: Modernizing infrastructure and controls tends to improve consumer outcomes when done with intent and testing.
- AI literacy across the market: Educating consumers while deploying AI improves access, inclusion, and protection.
"[This] workshop … is focused on ensuring that innovation in the financial marketplace is not only forward-thinking and efficient, but also grounded in fairness, transparency, and a strong commitment to protecting consumers," said Shereen Benzvy Miller, FCAC commissioner.
Risks flagged by participants
- Transparency and accountability: Black-box decisions erode trust and complicate dispute resolution.
- Data integrity: Poor lineage and quality control lead to unstable models and weak outcomes.
- Bias: Skewed data or features can harm vulnerable consumers and create compliance exposure.
- Third-party concentration: Overreliance on a few AI providers introduces systemic and vendor risk.
- Consumer confidence: Unclear explanations or errors reduce adoption and increase complaints.
- Fraud: AI can amplify social engineering, deepfakes, and transaction spoofing.
What finance leaders can do now
- Adopt "inclusion by design" checklists for model ideation, training data selection, and feature engineering.
- Stand up a model risk framework for AI: bias testing, outcome monitoring, challenger models, and human-in-the-loop controls (a minimal bias-testing sketch follows this list).
- Tighten third-party risk management: concentration metrics, exit plans, and audit rights for critical AI vendors.
- Make consumer-facing AI explainable: clear disclosures, accessible appeal paths, and meaningful consent flows.
- Invest in AI literacy for staff and consumers: short-format training, simulated scams, and frontline playbooks.
- Strengthen fraud defenses: anomaly detection tuned for synthetic identities and deepfake-resistant verification (see the anomaly-detection sketch below).
- Document decisions: data lineage, prompt/version control, and a living inventory of AI use cases (an example inventory schema appears below).
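To make the bias-testing item concrete, here is a minimal sketch of one common screen: comparing approval rates across groups against the "four-fifths" heuristic. The column names, the sample data, and the 0.8 threshold are illustrative assumptions, not a regulatory standard; a production framework would layer on statistical tests and outcome monitoring over time.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate of each group divided by the best-treated group's rate.

    Ratios below ~0.8 (the four-fifths heuristic) flag groups whose
    approval rates lag and warrant review before the model ships.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical decision log: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "age_band": ["18-29", "18-29", "30-49", "30-49", "50+", "50+"],
    "approved": [0, 1, 1, 1, 1, 0],
})
print(disparate_impact(decisions, "age_band", "approved"))
```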
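For the fraud-defense item, a simple starting point is unsupervised anomaly detection over transaction features. This sketch uses scikit-learn's IsolationForest on synthetic data; the features (amount, hour of day, account age) and the contamination rate are assumptions chosen for illustration, and real deployments would tune these against labeled fraud cases.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical transaction features: amount ($), hour of day, account age (days).
normal = rng.normal(loc=[50, 14, 900], scale=[20, 4, 300], size=(500, 3))
suspect = np.array([[4999.0, 3.0, 2.0]])  # large amount, 3 a.m., brand-new account

# Fit on routine traffic; the model isolates points that look unlike it.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 flags the transaction as anomalous
```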
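And for the documentation item, a living AI inventory can start as a lightweight, structured record per use case. The fields below are one plausible schema, not a prescribed template; adapt them to your model-risk policy.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseCase:
    """One entry in a living inventory of AI systems; fields are illustrative."""
    name: str
    owner: str
    model_version: str
    data_sources: list[str]
    consumer_facing: bool
    last_bias_review: date

inventory = [
    AIUseCase(
        name="chat-assist-retail",
        owner="Digital Banking",
        model_version="v2.3.1",
        data_sources=["kb_articles", "product_terms"],
        consumer_facing=True,
        last_bias_review=date(2025, 11, 1),
    ),
]
```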
FIFAI II workshop series: where this fits
This was the fourth and final session in the Financial Industry Forum on Artificial Intelligence II, a GRI-led initiative with Canadian financial regulators that began in 2022. Each workshop examined a core risk area and surfaced practical guidance for the sector.
- May 2025: Security and cybersecurity (GRI, OSFI, Finance Canada)
- October 2025: AI-enhanced financial crime (GRI, FINTRAC)
- October 2025: AI and risks to financial stability (GRI, OSFI, Bank of Canada, Finance Canada)
- Latest workshop: Consumer protection and financial well-being (GRI, FCAC)
What's next
A full report consolidating insights from all four workshops is slated for spring 2026. To stay aligned with supervisory expectations and peer practices, monitor updates from the FCAC and GRI.
Build AI literacy across your finance teams
If you're planning rollouts or revisiting your AI controls, upskilling your teams will shorten the learning curve and reduce avoidable risk. These resources can help you move quickly and responsibly:
- AI tools for finance: curated options to evaluate and govern
- Courses by job: role-based AI training for financial teams