
FSCA and the PA publish the first sector-wide baseline on AI in SA finance, with banks out front and cautious moves elsewhere. A discussion paper and industry engagement are next.

Categorized in: AI News, Finance
Published on: Nov 28, 2025

FSCA and PA release first sector-wide view of AI use in South African finance

South Africa's Financial Sector Conduct Authority (FSCA) and the Prudential Authority (PA) have published a baseline study of how financial institutions are using artificial intelligence. They plan to follow up with a discussion paper and industry engagement focused on regulatory and supervisory questions.

The report draws on a voluntary survey run from October to December 2024 with around 2,100 responses across banking, insurance, investments, payments, pensions, fintechs, and lending. Across the sector, 220 respondents (10.6%) use AI. The analysis focuses on banking, insurance, and investments given their share of system assets.

The report is available on the FSCA website.

Adoption and spending: who's moving first

Banks lead AI adoption: 52% of banking respondents report live AI use. Payments follow at 50% and retirement funds at 14%, while insurers and lenders trail at 8% each.

Spending plans separate leaders from testers. Among AI users, 45% of banks intended to invest more than R30 million in 2024. By contrast, 62% of investment providers and 41% of insurers planned to invest less than R1 million, pointing to a cautious, exploratory approach outside banking.

Traditional AI: what's working now and what's next

Today's use cases cluster around efficiency and risk. Common deployments include document analysis, workflow automation, decision support, fraud detection, credit scoring, and underwriting.

Planned expansion focuses on real-time fraud monitoring, cybersecurity analytics, and enhanced detection of money laundering and terrorism financing. Insurers expect deeper use in underwriting and claims. Retirement funds and investment firms signal moves into portfolio optimisation and risk modelling, though most are early-stage.

Generative AI: early footholds, broader plans

Current GenAI usage is mostly internal: drafting documents, summarising reports, and building presentations. Some firms apply GenAI to marketing and client communications.

Planned use cases include customer-facing chatbots, automated service channels, risk scoring support, report generation, and wider workflow automation. Governance and controls remain the gating factors.

Key risks firms are managing

Data privacy and protection concerns rank highest, reflecting POPIA obligations and heightened sensitivity around personal data. Cybersecurity risk is close behind, given new attack surfaces and model-specific vulnerabilities.

Model risk is a recurring theme: weak data quality, bias, concept drift, and opaque logic. Firms also flag consumer harm from inaccurate advice or unexplained decisions. For GenAI, additional concerns include fabricated outputs, intellectual property exposure, and questions of responsible model use.

What's holding adoption back

Regulation is a primary constraint, especially POPIA's data minimisation, purpose limitation, and automated decision-making rules. Many firms also face skills shortages and struggle to attract or upskill AI talent.

Explainability challenges limit higher-risk use cases in lending, insurance, and investments. Budgets, legacy tech, and competing priorities further slow progress, especially for smaller institutions.

AI governance: maturing, but uneven

Most institutions lean on existing risk frameworks; dedicated AI governance is still forming. Priorities include clear accountability, human oversight, model validation, and ongoing monitoring.

Explainability tools such as SHAP and LIME are being used to support internal review. Many respondents want clearer guidance on disclosure and consumer transparency, with alignment to POPIA's automated decision-making provisions.
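To make that concrete, below is a minimal sketch of the kind of SHAP workflow such internal reviews use, assuming a scikit-learn tree model and the shap package. The data, features, and threshold are synthetic illustrations, not drawn from the report.

```python
# Minimal sketch: SHAP attributions for a hypothetical credit-scoring model.
# Assumes scikit-learn and the shap package; data and features are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # stand-ins: income, tenure, utilisation, age
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # stand-in "repaid" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature contributions for each individual prediction,
# which a review team can attach to the file behind any contested decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # one attribution vector per applicant, per class
```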

Regulatory approach to AI-enabled automated advice

South Africa doesn't rely on AI-specific rules for advice. Instead, POPIA and the FAIS Act frame how algorithms may produce decisions or advice with legal or significant effects.

This aligns with a global trend of adapting principles-based rules to new technology. Transparency, accountability, appropriate oversight, and consumer protection remain the anchors.

Automated decision-making under POPIA

POPIA's protections are broadly consistent with Article 22 of the GDPR, which gives individuals safeguards against decisions based solely on automated processing that have legal or significant effects, unless conditions such as contractual necessity, legal authorisation, or explicit consent are met.

Section 71 permits automated decisions where measures protect the data subject's legitimate interests, including the right to make representations and to receive adequate information about the logic used. A practical example from the report: automated loan approvals may be acceptable, but automated rejections require additional safeguards such as human review or appeal mechanisms.

Reference: Information Regulator (POPIA) and GDPR.
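As an illustration of that pattern (not code from the report), the sketch below completes favourable outcomes automatically but parks adverse ones for human review. The threshold, function, and field names are hypothetical.

```python
# Illustrative only: a routing pattern for POPIA section 71-style safeguards.
# Names (decide, ReviewQueue) and the threshold are hypothetical.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 0.7  # assumed cut-off for the illustration

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def add(self, application_id: str, score: float, reasons: list[str]) -> None:
        # Adverse decisions are parked with the model's stated reasons so a
        # human reviewer can weigh representations from the applicant.
        self.pending.append({"id": application_id, "score": score, "reasons": reasons})

def decide(application_id: str, score: float, reasons: list[str], queue: ReviewQueue) -> str:
    if score >= APPROVAL_THRESHOLD:
        return "approved"            # a favourable outcome may be fully automated
    queue.add(application_id, score, reasons)
    return "pending_human_review"    # a rejection needs review/appeal safeguards

queue = ReviewQueue()
print(decide("app-001", 0.82, [], queue))                    # approved
print(decide("app-002", 0.41, ["high utilisation"], queue))  # pending_human_review
```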

Fit and Proper Requirements and the General Code of Conduct

Under the FAIS Act, automated advice is recognised and regulated. Financial services providers must ensure human competence, oversight of algorithms (monitoring, reviewing, testing), strong internal controls, sound system governance, and adequate technology resources.

The General Code of Conduct applies on a technology-neutral basis: conflicts, advertising, disclosure, and risk management standards apply equally to robo-advice. The Conduct of Financial Institutions Bill will further strengthen outcomes-focused consumer protection for digital models.

What finance leaders should do next

  • Anchor AI budgets to clear risk and revenue outcomes: fraud loss reduction, cost-to-serve, credit uplift, and claims accuracy.
  • Map data lineages and quality controls before scaling models; embed monitoring for drift and bias (see the drift-check sketch after this list).
  • Stand up AI governance that assigns accountability, enforces human-in-the-loop for adverse decisions, and documents model changes.
  • Operationalise POPIA: consent flows, purpose limitation, data minimisation, explainability, and auditable disclosures to customers.
  • Adopt defensible model risk management: independent validation, challenger models, stress testing, and clear thresholds for overrides.
  • Secure the ML stack: access control, prompt and model security for GenAI, data red-teaming, and controls for third-party/vended models.
  • Scale skills: cross-train risk, compliance, and product teams; pair data science with domain experts; build an internal "model review" guild.
  • Pilot GenAI where the risk is low and value is obvious (internal drafting, knowledge search) before moving to customer-facing use.
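
On the drift monitoring named in the second bullet, one common technique is the population stability index (PSI), which compares the binned distribution of a model input at training time against live traffic. The sketch below assumes NumPy; the 10-bin layout and 0.2 alert threshold are conventional rules of thumb, not recommendations from the report.

```python
# A minimal sketch of a population stability index (PSI) drift check, one way
# to embed monitoring for drift on a model input. The bin count and the 0.2
# alert threshold are common conventions, not prescribed by the report.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature sample and live data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range live values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)           # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 10_000)   # distribution the model was built on
live = rng.normal(0.4, 1.2, 10_000)    # shifted live traffic
score = psi(train, live)
print(f"PSI = {score:.3f}", "ALERT: investigate drift" if score > 0.2 else "stable")
```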
