KPMG's AI Cheating Problem Is A Warning Shot For Finance Leaders
More than two dozen KPMG Australia staff have used AI to cheat on internal exams since July. One partner will be fined more than $10,000 for using AI in a training course about AI.
The firm has since upgraded its detection processes, building on the policies and extra policing introduced after widespread cheating on internal tests between 2016 and 2020.
Why this matters to finance
Internal exams are supposed to safeguard quality. If people shortcut the process, you don't just get bad test scores; you get bad decisions in front of clients, regulators, and audit committees.
The risk isn't theoretical: controls fail, models get used without review, and your reputation takes the hit. Cheating is a signal that your incentives, assessments, and AI rules aren't aligned.
What drives AI cheating
- Pressure to pass and move fast, with little time for deep study.
- Vague policies on what "acceptable AI use" actually means.
- Assessments that are easy for generic AI tools to solve.
- Weak monitoring and low perceived consequences.
Immediate actions for CFOs, CROs, and Heads of Audit
- Set clear rules: Define where AI is allowed (e.g., summarization) and banned (e.g., closed-book exams, independence-critical judgments). Require disclosure of AI use on any submission.
- Redesign assessments: Use scenario-based, open-book tasks with individualized data. Make responses traceable to a learner's prior work.
- Detection with care: Combine proctoring, keystroke logging, and version history with human review. Avoid one-click "AI detectors" as the sole evidence.
- Evidence trail: Keep prompts, model versions, and outputs in the workpapers for any AI-assisted task.
- Consequences matrix: Publish tiers, from retraining to financial penalties, so people know the stakes.
- Lead by example: Senior staff should declare their AI use and follow the same rules.
- Whistleblowing: Maintain a safe, confidential channel for reporting misconduct.
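The "evidence trail" and "disclosure" actions above can be sketched as a minimal log record. This is an illustrative sketch only: the field names, the SHA-256 hashing choice, and the example identifiers are assumptions, not a prescribed schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    """One entry in the AI-use evidence trail (illustrative fields)."""
    task_id: str
    user: str
    model_version: str       # which model produced the output
    prompt: str              # the prompt, retained verbatim
    output_sha256: str       # hash of the raw output kept in workpapers
    human_reviewer: str      # required sign-off before submission
    disclosed: bool          # AI assistance declared on the submission
    timestamp: str

def log_ai_use(task_id, user, model_version, prompt, output, reviewer):
    """Build a disclosure record and serialise it for the workpapers."""
    record = AIUseRecord(
        task_id=task_id,
        user=user,
        model_version=model_version,
        prompt=prompt,
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        human_reviewer=reviewer,
        disclosed=True,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Hypothetical task and staff identifiers, for illustration only.
entry = log_ai_use("VAL-2024-017", "a.smith", "model-2024-06",
                   "Summarise lease schedule", "Draft summary...", "j.doe")
```

Storing a hash of the output (rather than nothing) lets a reviewer later verify that the text in the workpapers is the text the model actually produced.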
Build simple AI controls that actually work
- Policy: Plain-language do/do-not lists by task type (training, client work, code, analysis, marketing).
- Access: Route staff to approved tools with logging. Block unapproved uploads of client data.
- Review: Require human sign-off for AI-assisted outputs that affect financial statements, tax positions, valuations, or independence.
- Training: Teach prompt hygiene, bias checks, and data handling. Test on judgment, not trivia.
- Audit: Quarterly spot-checks of AI-assisted work, with remediation plans.
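The plain-language do/do-not policy above reduces, at its core, to a lookup by task type with a default-deny rule. A minimal sketch, assuming hypothetical task categories (the category names and rules are illustrative, not KPMG's actual policy):

```python
# Illustrative do/do-not policy by task type: "allowed", "review"
# (permitted only with human sign-off), or "banned".
POLICY = {
    "summarisation":          "allowed",
    "marketing_copy":         "allowed",
    "client_analysis":        "review",   # needs human sign-off
    "training_exam":          "banned",   # closed-book assessments
    "independence_judgment":  "banned",
}

def check_ai_use(task_type: str, has_human_signoff: bool = False) -> bool:
    """Return True if AI use is permitted for this task under the policy."""
    rule = POLICY.get(task_type, "banned")  # default-deny for unknown tasks
    if rule == "allowed":
        return True
    if rule == "review":
        return has_human_signoff
    return False

# A summarisation task passes; a closed-book exam never does.
check_ai_use("summarisation")                           # → True
check_ai_use("training_exam")                           # → False
check_ai_use("client_analysis", has_human_signoff=True) # → True
```

Defaulting unknown task types to "banned" mirrors the audit principle that anything not explicitly permitted should be escalated, not assumed safe.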
Audit and ethics checkpoints
For assurance work, independence and integrity come first. If AI is used to complete training or technical assessments, the issue touches competence and ethical requirements, not just productivity.
Review your alignment with the APES 110 Code of Ethics and consider mapping AI risks to frameworks like the NIST AI Risk Management Framework.
Encourage AI use the right way
- Allowed: Drafting summaries, formatting, first-pass analyses with full review and source citations.
- Restricted: Any closed-book exam or certification. Any judgment-heavy area without human validation.
- Required: Disclosure of AI assistance and retention of prompts/outputs when work is submitted.
If you need structured upskilling
If your team needs clearer guardrails and hands-on practice, see AI courses and tools relevant to finance:
- Practical AI tools for finance
- Certification: AI for automation (with controls)
- AI Learning Path for Vice Presidents of Finance
Bottom line
More than two dozen cases and a five-figure fine make one thing clear: AI misuse isn't a tech issue; it's a control issue. Set explicit rules, build assessments that reward real skill, log everything, and enforce consequences.
Do that, and you keep the benefits of AI while protecting audit quality, client trust, and your firm's name.