HKMA's Second GenAI Sandbox: 20 Banks and 14 Tech Firms Test 27 Use Cases for Responsible Finance and Deepfake Defences

HKMA taps 20 banks and 14 tech partners to trial 27 GenAI use cases with strong governance and accuracy. Pilots span deepfake fraud, trading surveillance, and HSBC accessibility.

Published on: Oct 16, 2025

HKMA picks second cohort to test GenAI in finance: 20 banks, 14 tech partners, 27 use cases

The Hong Kong Monetary Authority (HKMA) has expanded its GenAI sandbox, selecting 20 banks and 14 technology firms to test 27 use cases focused on safe, responsible deployment. The work is co-run with Hong Kong Cyberport and centers on governance, accuracy, and scalable controls.

Participants include HSBC, Standard Chartered, and Bank of East Asia (BEA). The initiative signals broad industry commitment to building AI that meets supervisory expectations and real-world operating demands in financial services.

What's in scope

  • Governance-first pilots aimed at improving accuracy, consistency, and auditability at scale.
  • Deepfake-related fraud defenses, including adversarial simulations to stress-test controls against sophisticated digital attacks.
  • Risk-controlled testing environment with technical assistance and supervisory feedback.

Notable pilots

  • HSBC: Experiments in fraud detection and converting text to sign language to improve accessibility.
  • Standard Chartered: AI for trading surveillance and gauging client demand patterns.
  • BEA: Proof-of-concept with an IT partner to streamline workflows, raise efficiency, and proactively manage risks.

How we got here

The first cohort launched in December 2024 with 10 banks and 4 technology partners testing 15 use cases. Participants were chosen for innovation, technical depth, contribution to the sector, and adherence to fair-use principles.

Why this matters for finance leaders

  • Regulatory alignment: Clear signals on expectations for governance, fairness, explainability, and model lifecycle controls.
  • Fraud response: Deepfake risk is moving from hypothetical to operational; banks are pressure-testing defenses now.
  • Productivity and control: Use cases target both customer-facing capabilities and internal efficiency with measurable guardrails.

Practical next steps for your roadmap

  • Prioritize high-impact, low-regret use cases: fraud operations, trading surveillance, KYC/AML alert handling, client communications, and knowledge retrieval with human-in-the-loop.
  • Tighten model risk governance: establish data lineage, red-teaming, bias/quality checks, prompt and output logging, and clear approval gates.
  • Measure what matters: precision/recall, false-positive rate, latency, cost-per-task, containment rate, and human review time saved.
  • Plan for adversarial scenarios: simulate deepfake voice/video/email events, evaluate vendor tools, and practice end-to-end incident response.
  • Build production rails: retrieval-augmented generation where relevant, content filtering, PII handling, policy enforcement, and continuous evaluation.
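The "prompt and output logging" item under model risk governance can be sketched as a minimal, tamper-evident audit record. This is an illustrative sketch only; the schema, field names, and helper names below are assumptions, not anything prescribed by the HKMA sandbox.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PromptAuditRecord:
    """One logged GenAI interaction (illustrative schema)."""
    model_id: str
    prompt: str
    output: str
    reviewer: str      # human-in-the-loop approver
    timestamp: float

def log_interaction(record: PromptAuditRecord, sink: list) -> str:
    """Append a JSON audit entry with a SHA-256 digest of the
    prompt/output pair, so later tampering is detectable."""
    entry = asdict(record)
    entry["digest"] = hashlib.sha256(
        (record.prompt + record.output).encode("utf-8")
    ).hexdigest()
    sink.append(json.dumps(entry, sort_keys=True))
    return entry["digest"]

# Example: log one reviewed interaction to an in-memory sink.
audit_log: list = []
digest = log_interaction(
    PromptAuditRecord(
        model_id="demo-model",
        prompt="Summarise this KYC alert.",
        output="Low-risk: name match only.",
        reviewer="analyst-01",
        timestamp=time.time(),
    ),
    audit_log,
)
```

In production the sink would be an append-only store rather than a list, but the shape of the record (who, what, when, plus a content digest) is the part that matters for approval gates and audits.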
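The "measure what matters" item is easy to operationalise: the first three metrics in that bullet fall out of a standard confusion matrix. A minimal sketch, with illustrative names (nothing here is HKMA-specific):

```python
from dataclasses import dataclass

@dataclass
class ConfusionCounts:
    """Outcome counts for a binary detector (e.g. fraud alerts)."""
    tp: int  # flagged and truly fraudulent
    fp: int  # flagged but legitimate
    fn: int  # fraud the model missed
    tn: int  # legitimate and correctly ignored

def precision(c: ConfusionCounts) -> float:
    """Of everything flagged, how much was real fraud?"""
    return c.tp / (c.tp + c.fp) if (c.tp + c.fp) else 0.0

def recall(c: ConfusionCounts) -> float:
    """Of all real fraud, how much did we catch?"""
    return c.tp / (c.tp + c.fn) if (c.tp + c.fn) else 0.0

def false_positive_rate(c: ConfusionCounts) -> float:
    """Of all legitimate activity, how much did we wrongly flag?"""
    return c.fp / (c.fp + c.tn) if (c.fp + c.tn) else 0.0

# Example: one evaluation window of a hypothetical fraud model.
counts = ConfusionCounts(tp=8, fp=2, fn=4, tn=86)
```

Tracking these per release, alongside latency and cost-per-task, gives the "measurable guardrails" the use cases call for.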

For context on the initiative, see the official HKMA and Hong Kong Cyberport pages.
