Ant International launches AI SHIELD to secure financial AI from bias, breaches, and deepfakes
Ant International debuts AI SHIELD to cut AI risk in payments, lending, and customer service. Built-in security, real-time monitoring, and red teaming keep financial AI compliant.

Ant International's AI SHIELD Targets Real AI Risk in Finance
Ant International has launched AI SHIELD, a toolkit built to reduce risks and vulnerabilities in AI systems used across financial services. The move puts security and compliance front and center as AI adoption accelerates across payments, lending, and customer service.
AI is already embedded in Ant International's payment operations, which processed over US$1 trillion in transactions in 2024. The company cites research projecting up to US$57 billion in annual costs from AI-related incidents, while only 5% of organisations report high confidence in their AI security-even as large language models are widely used.
What's inside AI SHIELD
At the core is the AI Security Docker-security built into the development and deployment lifecycle of AI models. The goal: keep fraud detection, payment authorisation, and AI assistants reliable and compliant at scale.
- Agent trustworthiness authentication: AI agents are evaluated and tested before deployment on the Alipay+ GenAI Cockpit platform.
- AI service safeguard: Continuous monitoring of agent interactions to block threats in real time.
- Dynamic patrolling with red teaming: Ongoing inspection and adversarial testing to find and fix vulnerabilities.
Where it will be used
Ant International and partners serve over 100 million merchants and 1.8 billion user accounts via Alipay+, Antom, Bettr, and WorldFirst. AI SHIELD will reinforce risk controls across these services.
Use cases include protection against scams, fraud, and deepfake attacks across payments and customer channels. One example: Alipay+ EasySafePay 360, launched in September 2025, which the company says can reduce account takeover incidents in digital wallet payments by 90%. Ant International has also introduced the Digital Wallet Guardian Partnership with AlipayHK and Malaysia's TNG eWallet to secure cross-border wallet transactions.
Why finance leaders should care
- Operational resilience: reduce fraud loss, limit AI misuse, and cut false positives that hurt customer experience.
- Compliance confidence: align AI controls with audit needs and model governance mandates, including the NIST AI Risk Management Framework.
- Auditability: ensure traceability of AI-driven decisions in payments and customer interactions.
- Scalability: standardise security practices across models, workflows, and regions.
Due diligence checklist for adopting AI risk controls
- Governance: Define ownership, approval workflows, and a model registry.
- Security: Threat models for AI/LLM use, red-teaming cadence, and secure model release gates.
- Data: PII minimisation, anonymisation, consent, logging, and retention policies.
- Performance: Track ATO rate, fraud detection precision/recall, false positives, model drift, and mean time to detect/respond.
- Integration: Map controls to PCI DSS and ISO 27001; confirm cloud/on-prem support and standard APIs.
- Monitoring: Real-time guardrails for prompts/agents; align with known risks such as the OWASP LLM Top 10.
- Testing: Pre-deployment validation in a sandbox with synthetic and replayed fraud patterns.
- Reporting: Decision logs, explainability summaries, and auditor-ready evidence.
- Incident response: Playbooks, rollback options, and a kill switch for faulty models or agents.
- Vendor risk: Data residency guarantees, third-party access controls, and customer data opt-outs for model training.
Executive view
"Trusted AI could be a defining factor in unlocking the full potential of artificial intelligence in financial services. At Ant International, we are committed to working with industry partners to evolve the most advanced risk management framework for AI, while harnessing AI itself to strengthen our risk management capabilities. We believe a two-pronged approach is essential for driving responsible growth of FinAI," said Tianyi Zhang, General Manager of Risk Management and Cybersecurity at Ant International.
Next steps for banks, wallets, and PSPs
- Run a targeted risk assessment of AI use across payments, fraud operations, and customer service.
- Prioritise controls for high-loss vectors: account takeover, deepfakes, social engineering, and invoice scams.
- Pilot with a narrow scope; set clear success metrics (ATO reduction, false positives, approval speed, and investigation time).
- Upskill teams on AI risk and tooling. See practical resources for finance teams at Complete AI Training: AI tools for finance.