She Trained the AI That Took Her Job, Then Confronted CBA's Bosses

A 25-year CBA employee said she trained the chatbot that replaced her; the chair admitted 'we made a mistake'. The case exposes gaps in consent, oversight and redeployment.

Categorized in: AI News, Finance
Published on: Oct 18, 2025

'Made a mistake': Ex-CBA worker confronts banking bosses over AI redundancies

A 25-year Commonwealth Bank employee, Kathryn Sullivan, says she unknowingly trained the AI system that replaced her. After contributing scripts and testing for CBA's "Bumblebee" chatbot, she was one of 45 staff made redundant in July, the first job cuts in Australia directly tied to a company's uptake of AI.

At CBA's AGM in Brisbane, Chair Paul O'Malley acknowledged the bank had "made a mistake." Sullivan told the meeting that "not all the jobs that were offered back were the same job that these people had been made redundant from," after roles were reopened following a case brought to the Fair Work Commission.

What happened

Sullivan contributed to chatbot content and QA, unaware the work would be used to automate core parts of her role. After the redundancies, affected employees were offered alternative roles following union action at the Fair Work Commission. Sullivan declined the role offered to her, stating it was unsuitable.

The flashpoint wasn't just automation; it was how the change was executed: limited transparency, weak job redesign, and a mismatch between employees' skills and the new positions offered.

Why this matters for finance leaders

Finance is automating fast: customer service, operations, reconciliations, credit workflows, and compliance checks are being handed to AI. The risk isn't just reputational; it's control. Poorly planned deployment can degrade service quality, increase conduct risk, and trigger costly remediation or regulatory scrutiny.

This case shows three fault lines: consent (employees training systems that replace them), governance (clear lines of accountability), and redeployment (skills mapping that actually fits the person, not the org chart).

Action plan: Deploy AI without burning trust or value

  • Say the quiet part early: Declare the automation intent, functions in scope, and likely job impact. Silence creates legal and cultural debt.
  • Build a skills ledger: Map current roles to task-level skills. Identify reusable skills (e.g., domain judgment, exception handling) before designing redeployment paths; a minimal sketch follows this list.
  • Redesign roles, don't just rename them: Create "human-in-the-loop" posts focused on exceptions, oversight, and customer care, not generic "AI support."
  • Prove the business case with controls: Tie AI ROI to error rates, complaint volumes, NPS, time-to-resolution, and audit findings. If metrics slip, slow or roll back.
  • Contract for consent: If staff train models, define IP, data rights, and redeployment guarantees upfront.
  • Stand up an AI Change Desk: A cross-functional team (Finance, Risk, Legal, HR, Ops) that signs off on models, monitors drift, and enforces kill switches.
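
To make the skills-ledger idea concrete, here is a minimal sketch in Python. The role names, skill names, and matching rule are hypothetical, not CBA's; the point is task-level granularity plus an explicit flag for skills that survive automation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    name: str
    reusable: bool  # does this skill survive automation of the role?

# Hypothetical ledger: current roles broken down to task-level skills.
SKILLS_LEDGER = {
    "chat_content_author": [
        Skill("script_writing", reusable=False),   # absorbed by the bot
        Skill("domain_judgment", reusable=True),
        Skill("exception_handling", reusable=True),
    ],
}

# Hypothetical open roles and the skills each one needs.
OPEN_ROLES = {
    "exception_manager": {"exception_handling", "domain_judgment"},
    "model_qa_reviewer": {"domain_judgment", "script_writing"},
}

def redeployment_matches(current_role: str) -> list[tuple[str, float]]:
    """Rank open roles by overlap with the person's reusable skills."""
    reusable = {s.name for s in SKILLS_LEDGER[current_role] if s.reusable}
    ranked = [
        (role, len(reusable & needed) / len(needed))
        for role, needed in OPEN_ROLES.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

print(redeployment_matches("chat_content_author"))
# [('exception_manager', 1.0), ('model_qa_reviewer', 0.5)]
```

A ledger like this makes "skill-fit over title-fit" auditable: the match score comes from tasks the person can still do, not from whatever their old title implied.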

Governance checklist for CFOs and COOs

  • Model registry: Owner, purpose, data sources, approvals, retraining cadence, and rollback plan documented; a sketch of one registry entry follows this checklist.
  • Data provenance: Track prompts, training content, and ground-truth labels. Avoid staff-created data becoming a backdoor to redundancy without prior agreement.
  • Controls alignment: Link model risks to existing control libraries (SOX/ICFR-style thinking for AI).
  • Privacy and records: Ensure compliance with guidance such as the OAIC's AI privacy advice before customer-facing rollouts.
  • Union and regulator engagement: Set consultation timelines and evidence packs so you're ready if challenged.
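
To illustrate the model-registry bullet above, here is one way a single registry entry could be structured in Python. The field names and the chatbot example are illustrative assumptions, not CBA's actual schema or any vendor's standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One registry entry; field names are illustrative, not a standard."""
    name: str
    owner: str                     # an accountable person, not a team alias
    purpose: str
    data_sources: list[str]        # provenance of training/prompt content
    approvals: list[str]           # sign-offs from the AI Change Desk
    retraining_cadence_days: int
    rollback_plan: str             # how to revert to the manual process
    approved_on: date | None = None

def ready_to_deploy(record: ModelRecord) -> bool:
    """Block go-live until ownership, approvals and a rollback plan exist."""
    return bool(record.owner and record.approvals and record.rollback_plan)

chatbot = ModelRecord(
    name="customer_chatbot",
    owner="head_of_cx_ops",
    purpose="Deflect routine service queries",
    data_sources=["call_scripts", "kb_articles"],
    approvals=[],                          # not yet signed off
    retraining_cadence_days=90,
    rollback_plan="Route all traffic back to the human queue",
)
print(ready_to_deploy(chatbot))  # False: approvals are missing
```

The useful property is the hard gate: a record built on staff-created training data but missing documented approvals or a rollback plan simply cannot ship.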

Redeployment that actually works

  • Skill-fit over title-fit: Use assessment data to slot people into exception management, QA, model testing, fincrime reviews, or customer escalations.
  • Pay for the bridge: Fund certification and on-the-job learning with clear salary protection windows and outcome checkpoints.
  • Measure uptake: Track acceptance rates, time-to-productivity, and churn for redeployed staff; adjust roles if the data says the placements don't stick.

KPIs to keep leaders honest

  • First-contact resolution, average handle time (AHT), and deflection rates vs. the pre-AI baseline (a guardrail sketch follows this list).
  • Error/exception rates and percentage human-reviewed.
  • Customer complaints and regulatory incidents tied to AI flows.
  • Headcount impact vs. retraining and internal mobility outcomes.
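
Here is a minimal sketch of how those KPI guardrails might be encoded. The metric names, baseline figures, and tolerances are invented for illustration; real thresholds belong in your risk appetite statement.

```python
# Hypothetical pre-AI baseline vs. current AI-era readings.
BASELINE = {"fcr": 0.78, "aht_sec": 310, "complaints_per_1k": 2.1}
CURRENT = {"fcr": 0.71, "aht_sec": 295, "complaints_per_1k": 3.4}

# Illustrative tolerances; the sign encodes which direction is "worse".
TOLERANCE = {
    "fcr": -0.03,               # no more than 3 points worse
    "aht_sec": +30,             # no more than 30 seconds slower
    "complaints_per_1k": +0.5,  # no more than 0.5 extra per 1k contacts
}

def breaches(baseline: dict, current: dict, tolerance: dict) -> list:
    """Return metrics that drifted past tolerance vs. the baseline."""
    out = []
    for metric, tol in tolerance.items():
        delta = current[metric] - baseline[metric]
        worse = delta < tol if tol < 0 else delta > tol
        if worse:
            out.append((metric, round(delta, 3)))
    return out

hits = breaches(BASELINE, CURRENT, TOLERANCE)
if hits:
    print("Slow or roll back:", hits)
    # Slow or roll back: [('fcr', -0.07), ('complaints_per_1k', 1.3)]
```

If the check fires, the action plan above applies: slow the rollout or roll back, and investigate before resuming.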

Bottom line

AI didn't create the trust gap here; change management did. Finance leaders who set clear consent rules, protect service quality with hard metrics, and offer real redeployment options will capture efficiency without reputational blowback.

If your team needs structured upskilling aligned to finance roles and tools, explore curated options at Complete AI Training for Finance.

