8 in 10 Canadian business leaders want humans in the AI loop - yet ethics, compliance, and a skills gap keep adoption slow

Canadian leaders are leaning into AI, but 80% say a human still needs to steer. Trust, clear policies, and basic skills drive ROI while ethics and compliance catch up.

Published on: Nov 19, 2025

8 in 10 Canadian business leaders say keeping a human in the loop matters with AI

AI adoption is climbing, but Canadian employers know the tech only works if people still guide the outcomes. ADP reports that 80% of business leaders say keeping a human in the loop is important, and 64% highlight trust as a key factor in AI use.

Leaders are trying to modernize operations without losing employee confidence. As one executive noted, organisations are balancing innovation with human-centred practices, while dealing with new disclosure rules and a growing skills mandate.

What the data says

  • 80% value human-in-the-loop for AI decisions.
  • 64% prioritise building trust in AI.
  • 46% say managing AI ethics is a priority, yet only 22% have a formal AI ethics policy.
  • 21% use AI for compliance tasks; of those, only 51% have strong trust in its accuracy. Among non-users, just 10% plan to adopt.
  • Top compliance pain points: data privacy, paid leave, payroll tax requirements, pay transparency, overtime.
  • 75% of large companies and 61% of mid-sized ones see AI as essential to competitiveness, yet only 13% and 5%, respectively, are hiring for AI skills.

As the federal government's Chief Data Officer put it, AI efforts should build trust, improve services, and address daily employee challenges while equipping leaders and teams with practical skills.

Ethics and compliance: move from intent to policy

Ethical management is a stated priority, but policies lag. Close that gap with a clear AI use policy that defines acceptable use, data sources, review checkpoints, and escalation paths.

Align your approach with Canadian regulations and proposals. Track developments related to the Artificial Intelligence and Data Act (AIDA) from the Government of Canada and uphold privacy duties under PIPEDA.

Employees want agentic AI, but they're anxious

Most employees are ready to use agentic AI to offload repetitive work and improve output. At the same time, reports show rising anxiety tied to job security, accuracy, and fair use.

Trust grows when people understand where AI is used, how results are reviewed, and how it benefits their work. Communicate the "why," share guardrails, and make humans the final decision-makers for material outcomes.

The skills gap is real, and it's blocking ROI

ADP's survey ranks strong work ethic, detail orientation, time management, problem solving, and teamwork among top hiring priorities. Ironically, these are also among the hardest skills to find, along with leadership and critical thinking.

That mismatch delays AI ROI. Without core behaviours and baseline AI literacy, pilots stall, adoption dips, and error rates go up.

Practical actions HR and management can take this quarter

  • Publish a one-page AI use memo: where AI is allowed, what data sources are approved, and when a human must review.
  • Draft an AI ethics policy and create a cross-functional review group (HR, Legal, IT, Operations) to oversee pilots and risk.
  • Start with low-risk use cases: drafting, summarising, meeting notes, internal knowledge search. Require human sign-off for anything external or compliance-related.
  • Run a compliance check: privacy, payroll tax, pay transparency, overtime, paid leave. Validate vendors' data handling and audit trails.
  • Map roles to skills: add AI literacy, prompt quality, data awareness, and review discipline. Pair this with the soft skills you already value.
  • Upskill quickly: short courses on prompt quality, evaluation, and workflow design for managers and ICs.
  • Tighten onboarding: day-one access to approved AI tools, policy brief, and two guided exercises with review criteria.
  • Listen continuously: pulse on trust, clarity, and perceived fairness; share actions from feedback.

Human-in-the-loop: make it specific

  • Define review points: data prep, model prompts, output review, and final approval.
  • Set risk tiers: low (no PII, internal drafts), medium (client-facing drafts), high (legal, finance, compliance), and match oversight to risk.
  • Capture provenance: who prompted, which tool, data used, who approved, timestamp.
  • Train for failure modes: hallucinations, bias, outdated data, privacy leaks.

Retention watch: fix the basics before "revenge quitting" spreads

Fewer than half of employers rate onboarding and hiring as highly efficient, and many lack confidence in employee feedback data. That's a warning sign for churn.

Improve clarity, tooling, and growth paths before rolling out bigger AI programs. People stay when the work gets easier, not fuzzier.

Metrics to manage

  • Adoption: percentage of teams using approved AI weekly.
  • Quality: error rate, rework rate, human escalations per use case.
  • Speed: cycle time for targeted tasks (before vs. after).
  • Risk: policy violations, privacy incidents, audit findings.
  • People: eNPS, trust-in-AI score, new-hire time-to-productivity, regretted attrition.
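Two of the metrics above, weekly adoption and cycle-time change, are simple enough to compute directly. The sketch below is illustrative only; the record shape and field names are assumptions, not a prescribed reporting format.

```python
def weekly_adoption_rate(teams: list[dict]) -> float:
    """Percentage of teams that used an approved AI tool this week."""
    if not teams:
        return 0.0
    active = sum(1 for t in teams if t.get("used_approved_ai_this_week"))
    return round(100 * active / len(teams), 1)

def cycle_time_reduction(before_hours: float, after_hours: float) -> float:
    """Percent reduction in cycle time for a targeted task (before vs. after)."""
    return round(100 * (before_hours - after_hours) / before_hours, 1)
```

A task that drops from 10 hours to 7 is a 30% reduction; tracked per use case, these numbers make it clear which pilots are paying off.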

Bottom line: leaders overwhelmingly want people in control of AI. Codify that intent into policy, skills, and process, and you'll get the benefits of automation without losing trust or talent.

