Guernsey regulator backs AI to boost efficiency - no new rules required
The Guernsey Financial Services Commission (GFSC) has signalled clear support for AI across the island's financial sector. No new AI-specific rules are planned. The message: use AI to streamline operations and reduce administrative drag, but keep governance tight.
The Commission "supports innovation and recognises the role AI, in all forms, could play in transforming the way financial services are administered, managed and delivered." That includes machine learning, large language models, agentic systems, and generative AI. Firms can implement these within the current regulatory framework without seeking specific approval.
Treat AI adoption like any other technical or strategic project. Apply the Finance Sector Code of Corporate Governance - especially accountability and risk management - and meet the Minimum Criteria for Licensing under the relevant laws.
What this means for finance leaders
You don't need to wait for AI-specific regulation to move. The GFSC expects the same discipline you'd apply to core systems change: clear ownership, documented risks, and measurable outcomes. Board oversight, senior management accountability, and demonstrable control effectiveness still apply.
Governance expectations in plain terms
- Accountability: Assign a senior owner for each AI use case, with clear decision rights and escalation paths.
- Risk assessment: Identify model, data, conduct, operational, cyber, and third-party risks; set control objectives before deployment.
- Data and security: Classify data, control access, log prompts/outputs, and keep sensitive data out of public tools unless contractually protected.
- Vendor and model oversight: Perform due diligence on providers, model lineage, update cycles, and support; align with outsourcing requirements.
- Documentation and auditability: Keep an audit trail of design decisions, training data sources (where applicable), and monitoring results.
- Testing and monitoring: Validate for performance, bias, hallucinations, and resilience; set thresholds and alerting; test before scale-up.
- Human-in-the-loop: Use human review for higher-risk decisions (e.g., client suitability, sanctions screening hits) and record overrides.
- Incident response: Define playbooks for model failure, data leakage, or vendor outage; rehearse them.
- Training and conduct: Train staff on acceptable use, confidentiality, and prompt hygiene; align to your code of conduct.
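Several of these expectations, notably logging prompts/outputs, keeping an audit trail, and recording human overrides, can be operationalised with very little tooling. A minimal sketch of an append-only AI audit log in Python (field names such as `use_case` and `reviewed_by` are illustrative assumptions, not a GFSC-prescribed format):

```python
import datetime
import hashlib
import json


def make_entry(use_case, user, prompt, output, reviewed_by=None, prev_hash=""):
    """Build one audit-log record for an AI interaction.

    Each record embeds a SHA-256 hash chained to the previous entry,
    so later tampering with the log is detectable. All field names
    here are illustrative, not a regulatory schema.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "use_case": use_case,          # e.g. "kyc_file_prep"
        "user": user,                  # who ran the prompt
        "prompt": prompt,
        "output": output,
        "reviewed_by": reviewed_by,    # human-in-the-loop sign-off, if any
        "prev_hash": prev_hash,        # hash of the preceding entry
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry


def append_entry(log_file, entry):
    """Write one record as a JSON line to an append-only log file."""
    log_file.write(json.dumps(entry) + "\n")
```

Writing records as JSON lines with a hash chain keeps the log simple to query while giving auditors a cheap integrity check; a database or SIEM would serve the same purpose at scale.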
Practical first steps
- Pick narrow, high-ROI use cases: KYC file prep, client communications drafting, reconciliations, transaction monitoring triage, regulatory reporting checks.
- Stand up a lightweight AI policy covering data use, model approval, monitoring, and vendor controls. Keep it pragmatic; iterate as you learn.
- Run small pilots with clear KPIs (time saved, error rates, false positives reduced). Keep a human reviewer until metrics prove stable.
- Contract smart: data residency, retention, IP/outputs ownership, security certifications, uptime SLAs, and right to audit.
- Measure and report: operational efficiency gains, risk incidents avoided, and client/staff satisfaction. Feed results to the board.
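The pilot KPIs above (time saved, error rates, false positives reduced) reduce to simple arithmetic over baseline and pilot measurements. A sketch, assuming hypothetical metric names like `minutes_per_case` rather than any standard schema:

```python
def pilot_kpis(baseline, pilot):
    """Compare an AI pilot against its pre-AI baseline.

    Both arguments are dicts with illustrative keys:
    'minutes_per_case', 'errors', 'cases', 'false_positives'.
    Returns the KPI summary a board report might use.
    """
    # Percentage reduction in handling time per case
    time_saved_pct = 100 * (1 - pilot["minutes_per_case"] / baseline["minutes_per_case"])
    # Error rate observed during the pilot
    error_rate = pilot["errors"] / pilot["cases"]
    # Absolute drop in false positives versus the baseline period
    fp_reduced = baseline["false_positives"] - pilot["false_positives"]
    return {
        "time_saved_pct": round(time_saved_pct, 1),
        "error_rate": round(error_rate, 4),
        "false_positives_reduced": fp_reduced,
    }
```

Tracking the same three numbers each reporting cycle makes the "keep a human reviewer until metrics prove stable" test concrete: scale up only once the error rate holds steady across consecutive periods.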
Helpful frameworks the GFSC points to
The regulator recommends established resources to guide your approach:
- NIST AI Risk Management Framework - structure your AI risk controls end to end.
- NCSC Guidelines for secure AI system development - security-by-design for AI systems.
Bottom line
The GFSC has given a green light: adopt AI now, within existing rules, and apply standard governance. Start small, prove value, document controls, and scale with confidence.
If you're mapping use cases or upskilling your team, this curated list of tools is a useful shortcut: AI tools for finance.