Cyber scammers refine phishing tactics with AI, targeting employee behaviour
Cybercriminals are combining psychology and technology in phishing attacks that now rely on artificial intelligence to impersonate trusted sources and craft convincing messages. The shift is costly for Australian businesses: reported cybercrime losses jumped 50% in 2024-2025, with small businesses losing an average of $56,600 per incident and large organisations facing about $202,700.
Phishing accounts for roughly 60% of reported incidents, according to Gallagher. A cyber incident is reported every six minutes in Australia.
How the attacks work
Modern phishing follows a three-stage pattern. First comes the hook: attackers copy trusted sources like banks, government agencies, or senior executives, using messages that create urgency to bypass critical thinking.
Once someone engages, the attack turns technical. Tactics include fake sender names, misleading links, QR codes that bypass email filters, and cloned login pages designed to capture credentials in real time. These methods were linked to about three-quarters of business email compromise cases in 2025.
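Two of these tactics, display-name spoofing and links whose visible text differs from their real destination, can be flagged automatically. The sketch below is illustrative only: the trusted-domain list, the regex-based link extraction, and both function names are assumptions, not a production email filter.

```python
import re
from email.utils import parseaddr

def flag_suspicious_sender(from_header: str, trusted_domains: set) -> bool:
    """True if the display name claims a trusted brand but the actual
    address domain is not on the trusted list (display-name spoofing)."""
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    # Crude brand match: does the display name mention a trusted brand word?
    claims_trusted = any(d.split(".")[0] in display.lower() for d in trusted_domains)
    return claims_trusted and domain not in trusted_domains

def flag_link_mismatch(html_body: str) -> list:
    """Return (visible_text, actual_href) pairs where the visible link text
    shows one host but the href points somewhere else."""
    mismatches = []
    for href, text in re.findall(
        r'<a\s+[^>]*href="([^"]+)"[^>]*>([^<]+)</a>', html_body, re.I
    ):
        shown = re.search(r'https?://([^/\s]+)', text)
        actual = re.search(r'https?://([^/\s]+)', href)
        if shown and actual and shown.group(1).lower() != actual.group(1).lower():
            mismatches.append((text.strip(), href))
    return mismatches
```

For example, `flag_suspicious_sender('"CommBank Support" <alerts@cb-secure-login.com>', {"commbank.com.au"})` returns `True`, while mail genuinely sent from `commbank.com.au` does not trip the check. Real mail gateways do far more (SPF, DKIM, DMARC, lookalike-domain scoring); this only mirrors the manual checks described above.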
AI adds a new layer. Attackers now write polished, error-free emails tailored with public data to look legitimate. Some scams use AI-generated voice messages to impersonate executives requesting urgent payments.
Human error remains the weak point
Human error is a factor in 37% of Australian data breaches. Attackers target staff, not system vulnerabilities. A trained employee who verifies an unexpected email, checks a sender's address, or questions an urgent payment request can block an attack before it reaches company systems.
Other entry points include credential theft, unauthorised access, weak password practices, and the use of personal devices or unapproved software for work. Hybrid and flexible work setups increase exposure when access controls are loose.
What works
Simple checks stop many attacks:
- Verify unusual requests through known contacts
- Check email addresses carefully
- Confirm payment instructions before acting
- Use multi-factor authentication
- Limit access based on job role
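The payment-related checks above can be sketched as a simple pre-payment gate that forces out-of-band confirmation whenever something changes. Everything here is a hypothetical illustration: the `PaymentRequest` fields, the `KNOWN_ACCOUNTS` ledger, and the dollar threshold are assumed, not a prescribed control.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor: str
    bsb: str       # bank-state-branch code
    account: str
    amount: float

# Hypothetical ledger of previously verified vendor bank details.
KNOWN_ACCOUNTS = {
    "Acme Supplies": ("062-000", "12345678"),
}

def requires_callback(req: PaymentRequest, large_amount: float = 10_000.0) -> bool:
    """True if this request must be confirmed via a known phone contact
    before paying: a new vendor, changed bank details, or a large amount
    all trigger out-of-band verification."""
    known = KNOWN_ACCOUNTS.get(req.vendor)
    if known is None:
        return True  # never paid this vendor before
    if (req.bsb, req.account) != known:
        return True  # bank details changed, a classic BEC warning sign
    return req.amount >= large_amount
```

The design choice is deliberate: the gate never approves a change on the strength of the email alone, which is exactly the behaviour that blocks most business email compromise attempts.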
Training should be practical and ongoing, using real examples rather than one-off sessions. Incident response planning matters too: the first 48 hours after a breach often determine how much damage can be contained.
For insurance professionals managing cyber risk and claims, understanding these tactics is essential to advising clients on exposure and mitigation. AI for Insurance covers how the technology applies across underwriting and risk assessment, while AI Learning Path for Cybersecurity Analysts provides practical training on defending against evolving threats like those described here.