China sets AI-in-finance goals for 2026-2030: what finance leaders need to do now
China plans to push AI in finance in a safe, orderly way during the 15th Five-Year Plan (2026-2030). The central bank signaled clear priorities: deepen AI applications, tighten guardrails, and lower barriers to adoption across the sector.
Li Wei, who heads the Technology Department at the People's Bank of China, outlined a new-stage fintech plan with AI at the core. Expect defined objectives, key tasks, and national pilot bases focused on practical industry use, cost reduction in model training, and faster application rollout.
What regulators are prioritizing
Policy will stress clear boundaries for AI's role and capabilities in finance. Authorities will explore security norms for large AI models, stronger ethical governance, and graded risk management with controlled admission for high-risk applications.
Translation for operators: more clarity on what's permitted, tighter controls where risk is highest, and incentives to adopt AI where it benefits the real economy and technological progress.
Balance innovation with control
Shang Fulin, chairman of the China Wealth Management 50 Forum and former top banking and securities regulator, called for progress with discipline. Institutions should test and scale AI use cases under existing law while raising the bar on data governance, internal controls, and risk prevention.
He flagged two pressure points: black-box decisioning that widens information asymmetry, and a growing digital divide that could sideline smaller players. Expect policies that support leaders while keeping the system inclusive.
New market infrastructure: Financial Investment Alliance
The conference announced a Financial Investment Alliance spanning wealth research groups, asset and insurance investment institutions, the National Integrated Circuit Industry Investment Fund, regional guidance funds, and leading equity funds, with the China Wealth Management 50 Forum as a guide. The mandate: connect finance with industry and channel long-term capital into tech innovation and industrial development.
Implications for banks, brokers, asset managers, and insurers
- Define a clear AI portfolio: Prioritize high-ROI, controllable use cases (risk scoring, fraud detection, ops automation, client service, investment research). Tie each to measurable P&L and risk metrics.
- Adopt graded risk controls: Classify applications by risk level; require approvals, documentation, and monitoring that scale with risk. High-risk apps get sandboxing, human-in-the-loop, and stricter change control.
- Strengthen data governance: Ensure lawful data sources, clear lineage, quality checks, and minimization. Prevent hidden data leakage and ensure auditability for training and inference pipelines.
- Model risk management for AI: Maintain a model inventory, validation standards for large models, explainability where outcomes affect customers, bias testing, drift monitoring, and fallback procedures.
- Security for large models: Protect against prompt injection, data exfiltration, and insecure tool use. Isolate sensitive workloads, enforce least-privilege, and log everything.
- Third-party oversight: Update vendor due diligence for AI providers (model cards, training data disclosures, evaluation results, incident reporting, audit rights, SLAs for reliability and security).
- Controls meet culture: Formalize an AI governance committee, clear RACI across business, risk, compliance, and tech. Train frontline teams and auditors to spot AI-specific failure modes.
- Pilot base readiness: Track national pilot-base announcements and eligibility. Design pilots that can move quickly to production once rules land.
- Execution discipline: Build a 2026-2030 roadmap with budget gates, compliance checkpoints, and KPIs. Fund foundational data and control layers before scaling customer-facing AI.
Key risks to manage early
- Black-box decisions: Use explainability or outcome-based controls where explainability is limited. Keep humans in the loop for material impacts.
- Data misuse: Block non-compliant sources; enforce retention and purpose limits. Monitor for sensitive data appearing in prompts or outputs.
- Operational fragility: Treat model updates like code releases. Version, test, and roll back safely.
- Vendor concentration: Avoid single points of failure. Maintain exit plans and model portability.
- Digital divide inside your org: Equip smaller branches and subsidiaries with shared platforms and playbooks so adoption isn't limited to HQ.
Signals to watch
- Publication of the new-stage fintech plan and AI security norms for finance.
- Lists of national AI pilot bases for the financial sector and their participation criteria.
- Guidance on risk grading and admission standards for high-risk applications.
- Early projects from the Financial Investment Alliance linking capital to industrial tech.
Practical next steps
- Run a gap assessment of your AI governance against expected graded risk controls.
- Create a prioritized AI use-case backlog with compliance reviews built in from the start.
- Stand up red-teaming and adversarial testing for high-impact models.
- Update third-party risk frameworks to cover foundation models and AI tooling.
- Prepare short memos for your board on AI risk appetite, metrics, and oversight cadence.
Helpful references
Tooling for finance teams
If you're mapping quick wins, this curated list can help identify safe, high-impact tools for finance workflows: AI tools for finance.
Bottom line: China is moving to scale AI in finance with tighter rules and shared infrastructure. Institutions that build strong data controls, graded risk management, and disciplined execution now will be ready to deploy at speed as policies and pilot bases come online.
Your membership also unlocks: