Sibos 2025: From Assistants to Agents as Banking Goes AI-Native
Finance is moving from AI as a helper to AI as the core. AI-native systems decide, learn, and optimise in real time, transforming products, pricing, risk, and supervision.

Sibos 2025: From AI-assisted to AI-native finance
Finance is moving from AI as a helper to AI as the operating core. This shift is as big as the move from branches to online banking. The firms that build around AI, not on top of it, will set the pace.
AI-assisted: the current ceiling
Fraud models, better scoring, chatbots, and robo-advice have delivered efficiency. But they sit on legacy workflows, with humans triggering and governing key steps. AI reacts; it does not run the business. The economics remain largely unchanged.
AI-native: AI as the product and the process
AI-native finance makes autonomous decisions, learns continuously, and optimises in real time. Products are conceived with AI at the centre, not bolted on later. This unlocks services that were previously impractical.
For consumers, accounts can self-optimise: shift savings to higher yield, refinance when rates move, invest surplus cash, and align portfolios to stated values across providers, all without manual effort. For corporates, AI agents can forecast cash, execute hedges, and arrange supply chain finance, all in the flow of activity.
Pricing can adapt minute by minute based on market conditions, behaviour, and risk signals. Here, AI isn't a feature inside a product. AI is the product.
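As a toy illustration of bounded, signal-driven pricing, the sketch below (all names hypothetical, not a real pricing engine) adjusts a quoted rate from a live risk score while clamping the result to policy limits:

```python
def quote_rate(base_rate: float, risk_score: float,
               floor: float = 0.01, cap: float = 0.15) -> float:
    """Adjust a base rate by a live risk signal, clamped to policy bounds.

    risk_score is assumed normalised to [0, 1]; all names are
    illustrative, not a real pricing engine.
    """
    spread = 0.05 * risk_score          # higher risk -> wider spread
    return min(cap, max(floor, base_rate + spread))

print(quote_rate(0.03, 0.2))  # low-risk customer: small spread
print(quote_rate(0.03, 0.9))  # high-risk customer: wider spread
```

The point of the clamp is that even "minute by minute" pricing stays inside pre-approved bounds, which is also where fairness metrics and price-bound guardrails plug in.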
Foundations you need in place
- Cloud-native architecture with elastic compute and storage for low-latency decisions.
- Streaming data pipelines; no more overnight batches.
- Synthetic data to train and test models without breaching privacy rules.
- Agent-to-agent interoperability standards for secure negotiation and execution.
- Hybrid AI: large language models for reasoning plus symbolic/graph rules for precision and compliance.
- Embedded compliance: explainability, policy-as-code, and full audit trails at the model and agent level.
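Embedded compliance can be as simple in principle as routing every agent decision through declarative rules and writing an audit record for each outcome. A minimal policy-as-code sketch, with hypothetical policy fields:

```python
import json
import time

# Illustrative policy-as-code: declarative limits the agent cannot bypass.
POLICY = {"max_transfer": 10_000, "blocked_regions": {"XX"}}

def check_and_log(decision: dict, audit_log: list) -> bool:
    """Approve or reject a decision against POLICY, always logging it."""
    ok = (decision["amount"] <= POLICY["max_transfer"]
          and decision["region"] not in POLICY["blocked_regions"])
    # Full audit trail: every decision is recorded, approved or not.
    audit_log.append(json.dumps(
        {"ts": time.time(), "decision": decision, "approved": ok}))
    return ok

log: list = []
assert check_and_log({"amount": 500, "region": "UK"}, log)
assert not check_and_log({"amount": 50_000, "region": "UK"}, log)
```

Real deployments would use a policy engine and tamper-evident logging, but the shape is the same: rules as data, decisions as auditable events.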
Regulation for autonomous systems
Policies built for human-led processes break when AI makes millions of micro-decisions. Accountability, fairness in hyper-personal pricing, and systemic synchronisation risks move to the forefront. Expect "regulator agents" that monitor and audit in real time.
The UK's work on supervised experimentation is a signal of where supervision is heading: the FCA's Digital Sandbox shows the direction of travel, and BIS research on AI and data offers useful supervisory context.
Business model consequences
In an AI-native setup, banks orchestrate outcomes rather than sell static products. If customer agents can switch providers instantly, loyalty erodes unless you deliver superior outcomes. Fee models can shift to performance-based pricing, and platforms can host third-party financial agents. Advantage moves from product features to speed of adaptation.
Risk realities you must manage
- Opacity: black-box models undermine trust and dispute resolution.
- Herding: agents learning similar strategies can amplify volatility.
- Ethical drift: optimisation that meets the letter, not the spirit, of rules.
- Security: autonomous agents widen attack surfaces and blast radius.
- Adoption: customers may resist full autonomy without clear guardrails and proof of benefit.
Trust needs transparency, controls, and measurable customer benefit built in from day one.
A practical path from assisted to native
- Stage 1 - Augment: automate onboarding, document processing, and fraud detection, with measurable SLAs and cost-benefit analysis.
- Stage 2 - Pilot AI-native products: limited-scope, ring-fenced portfolios with real-time monitoring and kill-switches.
- Stage 3 - Scale the operating model: agent governance, outcome-based risk, and production-grade MLOps.
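The Stage 2 pattern can be sketched in a few lines, assuming a hypothetical ring-fenced agent with an exposure cap and a kill-switch that trips on any breach:

```python
class PilotAgent:
    """Ring-fenced pilot agent: capped exposure plus a kill-switch.

    Hypothetical sketch of the Stage 2 pattern, not a trading system.
    """
    def __init__(self, exposure_cap: float):
        self.exposure_cap = exposure_cap
        self.exposure = 0.0
        self.halted = False

    def kill(self) -> None:
        self.halted = True            # monitor- or human-triggered stop

    def trade(self, amount: float) -> bool:
        if self.halted or self.exposure + amount > self.exposure_cap:
            self.kill()               # a breach trips the kill-switch
            return False
        self.exposure += amount
        return True

agent = PilotAgent(exposure_cap=1_000.0)
assert agent.trade(600.0)
assert not agent.trade(600.0)         # would breach the ring-fence
assert not agent.trade(10.0)          # stays halted until humans review
```

The design choice worth noting: a breach does not just reject one action, it halts the agent entirely until a human resets it, which keeps the pilot's worst case bounded.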
Operating principles for AI-native finance
- Define intent and guardrails per agent: objectives, constraints, escalation paths.
- Data contracts: sources, lineage, entitlement, and quality SLOs.
- Interoperability: standardised schemas and secure agent messaging.
- Continuous validation: backtesting, shadow mode, challenger models, and bias checks.
- Incident readiness: rollback plans, rate limiters, circuit breakers, and red-team scenarios.
- Lifecycle management: versioning, monitoring, and deprecation tied to risk appetite.
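The shadow-mode and challenger-model idea above can be made concrete: a challenger scores the same inputs as the live champion, only the champion's output drives action, and the agreement rate feeds continuous validation. Both models here are hypothetical stand-ins:

```python
# Hypothetical champion/challenger pair; real models would be trained.
def champion(x: float) -> int:
    return int(x > 0.5)

def challenger(x: float) -> int:
    return int(x > 0.4)

def shadow_run(inputs, log):
    """Score inputs with both models; only the champion acts."""
    decisions = []
    for x in inputs:
        live, shadow = champion(x), challenger(x)
        log.append((x, live, shadow, live == shadow))  # for later review
        decisions.append(live)        # only the champion drives action
    return decisions

log = []
decisions = shadow_run([0.3, 0.45, 0.7], log)
agreement = sum(1 for *_, same in log if same) / len(log)
```

A sustained drop in agreement, or a challenger that backtests better, is the trigger to promote the challenger; meanwhile customers only ever see champion decisions.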
Skills and culture
Treat AI first as a colleague, then as an autonomous business unit with oversight. You'll need AI product managers, AI ethicists, prompt engineers, data governance leads, model risk engineers, and agent safety specialists alongside existing risk and compliance talent.
If your team needs a fast way to survey practical options, a curated list of AI tools for finance can help.
The next decade
AI-native neobanks can run with minimal human operations, reserving people for high-trust relationships and complex exceptions. Cross-industry AI ecosystems will coordinate money, energy, travel, and health in one agent experience. Embedded finance becomes embedded orchestration.
What to do now: a quick checklist
- Map your top 50 repeat decisions; score for autonomy potential and risk.
- Stand up streaming pipelines and a feature store with lineage.
- Select one AI-native product to pilot with regulator visibility.
- Codify guardrails: fairness metrics, price bounds, and escalation rules.
- Implement agent kill-switches, circuit breakers, and rate limits.
- Create customer-facing transparency: what the agent does, why, and how to override.
- Shift KPIs from process speed to customer and risk outcomes.
- Budget for post-deployment monitoring as a first-class cost.
AI-assisted finance keeps humans in the driver's seat with AI as co-pilot. AI-native finance flips that: you set the destination, and the system learns every route and hazard along the way. Build for autonomy, adaptability, and continuous learning, or get left behind by those who do.