AI-driven attacks are rising. Is your organisation ready to respond?
Attackers are using automation, synthetic media, and smarter tooling. That means faster phishing, convincing impersonation, and incidents that spread before your team can react. As a leader, you're on the hook for resilience and compliance. This session helps you get both under control.
Live session details
Tuesday 11 November, 1:00 PM - 2:00 PM (GMT)
Hear practical guidance from industry leaders, including ISMS.online's Chief Product Officer and TechForce Cyber's Founder & CEO. Expect clear examples, tested playbooks, and steps you can put to work this quarter.
REGISTER NOW
Why this matters for management
- Threats scale with automation. One attacker can run thousands of targeted attempts in minutes.
- Deepfakes and voice clones raise social engineering risk across finance, HR, and executive teams.
- Regulators expect stronger governance, audit trails, and AI risk controls across the business.
- Shadow AI and third-party tools increase data exposure and complicate incident response.
What you will take away
- How AI changes the threat model: automated phishing, deepfake fraud, prompt injection, data poisoning, and model theft, explained with real cases.
- A resilience plan that works: detection, response, and recovery tuned for AI-led incidents.
- Compliance you can prove: map controls to ISO 27001, NIS2/DORA obligations, and AI governance expectations without slowing the business.
- Practical rollout: fast playbooks, clear ownership, and metrics your board will care about.
Immediate actions to start this quarter
- Update your incident response to cover deepfake fraud, model compromise, and prompt injection. Write short playbooks for finance approvals, comms, and legal.
- Run a 60-minute tabletop: CEO voice-clone wire fraud, supplier invoice swap, or leaked prompts. Time each decision and fix bottlenecks.
- Tighten access to AI tools. Enforce SSO, data loss prevention, and logging for any system that touches customer or sensitive data.
- Set clear AI use rules: what data is allowed, who approves use cases, and how outputs are checked before they hit customers.
- Measure what matters: phishing simulation failure rates, mean time to detect and respond, and third-party risk coverage (see the sketch after this list).
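If you want a concrete starting point for that last metric bullet, here is a minimal sketch of how mean time to detect (MTTD) and mean time to respond (MTTR) can be calculated from incident timestamps. The field names and sample records are illustrative assumptions, not a prescribed schema; adapt it to whatever your ticketing system or SIEM exports.

```python
# Minimal metrics sketch for a board pack.
# Field names and sample data are illustrative assumptions, not a standard schema.
from datetime import datetime
from statistics import mean

incidents = [
    # Hypothetical records: when the incident occurred, was detected, and was resolved.
    {"occurred": datetime(2025, 9, 1, 9, 0),
     "detected": datetime(2025, 9, 1, 10, 30),
     "resolved": datetime(2025, 9, 1, 15, 0)},
    {"occurred": datetime(2025, 9, 14, 13, 0),
     "detected": datetime(2025, 9, 14, 13, 20),
     "resolved": datetime(2025, 9, 14, 18, 45)},
]

def hours(delta):
    """Convert a timedelta to hours for easier reporting."""
    return delta.total_seconds() / 3600

# Mean time to detect: how long attacks run before anyone notices.
mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
# Mean time to respond: how long from detection to containment or resolution.
mttr = mean(hours(i["resolved"] - i["detected"]) for i in incidents)

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```

Tracked quarter over quarter, these two numbers give the board a simple view of whether detection and response are keeping pace with faster, automated attacks.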
Who should attend
- Board members and execs who sign off risk and budgets
- CIO, CISO, CTO, and Heads of Risk/Compliance
- Operations, Finance, and HR leaders exposed to social engineering
- Security, IT, and governance teams rolling out controls
Make training stick
Pair the session with focused upskilling so teams act with confidence. Browse manager-friendly programmes here: AI courses by job.
Save your seat
Tuesday 11 November, 1:00 PM - 2:00 PM (GMT). Limited capacity to keep Q&A useful. REGISTER NOW.
Your membership also unlocks: