Bulkley-Nechako Board Unanimously Approves AI Governance Policy Centred on Human Oversight and Transparency

RDBN has approved an AI Governance Policy setting clear rules and accountability. HR must deploy training, enforce human review and oversight, and ensure privacy compliance and regular audits.

Published on: Sep 20, 2025

RDBN Approves AI Governance Policy: What HR Needs to Do Now

The Regional District of Bulkley-Nechako (RDBN) Board unanimously approved a new AI Governance Policy on Sept. 18. It sets the rules for how AI is used across the organization and clarifies accountability.

The policy applies to all personnel: board members, employees, contractors, and consultants. It was developed by the RDBN AI Committee and refined after board feedback earlier this month.

"This is about setting clear expectations and safeguards as AI becomes a bigger part of our work," said Human Resources Advisor Anusha Rai.

Who This Policy Covers

  • All RDBN staff and directors using or overseeing AI tools
  • Contractors and consultants building, integrating, or evaluating AI systems
  • Any role involved in decisions, public communications, or regulatory functions where AI is used

Stronger Language, Clearer Accountability

The policy moves from suggestions to requirements. Words like "should" were replaced with "must" and "will," signaling clear expectations for compliance and oversight.

Translation for HR: this is enforceable policy, not guidance. Update job expectations and workflows accordingly.

Key Requirements HR Must Operationalize

  • Transparency: AI tools must be clear in purpose and operation. Residents will be told when they're interacting with AI and how to raise concerns.
  • Human-in-the-loop: Employees are responsible for reviewing and verifying AI outputs before use, with human involvement maintained for important decisions.
  • Oversight: Clear approval and monitoring mechanisms are required for AI deployment and use.

Privacy, Security, and Fairness

  • Bias checks: Staff must review outputs for fairness and document actions taken to reduce bias.
  • Legal compliance: Follow provincial and federal requirements, including FIPPA and PIPA.
  • Data handling: Personal data must be anonymized before use in AI systems.
  • Incident reporting: Any privacy breach must be reported to the district's privacy officer.
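The anonymization step above can be sketched in code. This is a minimal illustration only, not a compliance tool: the field names are hypothetical, and one-way hashing is just one de-identification technique. Whether a given approach satisfies FIPPA and PIPA is a question for the district's privacy officer and legal counsel.

```python
import hashlib

# Hypothetical field names; real RDBN records will differ.
PERSONAL_FIELDS = {"name", "email", "phone", "address"}

def anonymize(record: dict) -> dict:
    """Replace personal fields with short one-way hashes before any AI use."""
    out = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS and value:
            # SHA-256 is one-way: the original value cannot be recovered
            # from whatever the AI tool logs or retains.
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

cleaned = anonymize({"name": "Jane Doe", "request": "zoning inquiry"})
print(cleaned)  # name is hashed; the non-personal field passes through
```

A workflow like this belongs upstream of any AI tool, so staff never paste raw personal data into a prompt in the first place.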

Governance and Controls

  • Audits and risk assessments: Regular reviews will monitor AI use and risk.
  • Oversight structure: An internal AI Committee will oversee compliance with support from IT, HR, and privacy staff.
  • Account access: Personal AI accounts are discouraged. RDBN will provide licensed accounts for security and auditability.
  • Incident response: Defined steps for containing and correcting errors, assessing impacts, and informing affected individuals when required.

Training and Enablement

Mandatory training will be rolled out for staff involved in decision-making, public communications, and regulatory work. Additional resources will be made available to all employees to support ethical and effective AI use.

If you're building your training plan, you can also explore role-based AI courses here: Complete AI Training - Courses by Job.

Public Transparency

The district committed to informing residents about AI use and inviting input through meetings, surveys, or online tools. Annual reports or audits on AI use may be published.

The policy will be reviewed each year and updated as technologies, laws, and best practices change, incorporating feedback from staff and the public.

HR Action Checklist for This Quarter

  • Update policy manuals, job descriptions, and SOPs to reflect mandatory human review, oversight, and documentation requirements.
  • Deploy mandatory training by role (decision-making, comms, regulatory). Track completion and effectiveness.
  • Stand up an AI risk register: catalog AI tools in use, data types involved, owners, and review schedules.
  • Implement a privacy workflow: data anonymization steps, bias checks, and breach reporting paths to the privacy officer.
  • Migrate users from personal AI accounts to district-licensed accounts. Enforce access controls and logging.
  • Run a tabletop exercise for the incident response protocol covering error containment and resident notification.
  • Create public-facing messaging templates that disclose AI use and provide a clear feedback channel.
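The risk-register item above can be modelled as a simple data structure. The sketch below is illustrative only: the tool names, owners, and 90-day review interval are assumptions, not details from the RDBN policy. The same schema works equally well as a spreadsheet; the point is to track each tool, its data types, its owner, and when its next review is due.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative schema; tool names, owners, and intervals are hypothetical.
@dataclass
class AIToolEntry:
    tool: str
    owner: str
    data_types: list
    last_review: date
    review_interval_days: int = 90

    def review_due(self, today: date) -> bool:
        """True when the next scheduled review date has passed."""
        return today >= self.last_review + timedelta(days=self.review_interval_days)

register = [
    AIToolEntry("DraftAssist", "Communications", ["public text"], date(2025, 6, 1)),
    AIToolEntry("PermitTriage", "Planning", ["applicant PII"], date(2025, 9, 1)),
]

today = date(2025, 9, 20)
overdue = [entry.tool for entry in register if entry.review_due(today)]
print(overdue)  # → ['DraftAssist']
```

Flagging overdue reviews automatically, rather than relying on memory, is what turns a static inventory into the audit and risk-assessment mechanism the policy requires.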

Directors emphasized trust and accountability in adopting new tools. "This policy makes sure that human oversight stays at the centre of decision-making," Director Michael Riis-Christianson said before the unanimous vote.