Half of UK firms hit by executive impersonation attempts in past year, Gallagher finds

Half of UK firms faced executive impersonation attempts last year, with confirmed incidents costing an average of £758,000. Fraudsters use cloned voices, deepfake video, and spoofed emails to trick staff into authorising payments.

Categorized in: AI News, Insurance
Published on: Apr 16, 2026

Half of UK firms targeted by executive impersonation as AI-driven fraud surges

Fifty percent of UK organisations experienced at least one executive impersonation attempt in the past year, according to research from Gallagher. Confirmed incidents cost an average of £758,000, with the largest single losses reaching between £1.1 million and £5 million.

Fraudsters pose as CEOs, CFOs and senior colleagues using spoofed email domains, compromised accounts, cloned voices and AI-generated video. They push employees to authorise payments, release sensitive information or fast-track approvals by exploiting trust and hierarchy.
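One of the techniques above, spoofed email domains, often relies on lookalike addresses that differ from a genuine domain by a character or two. A minimal detection sketch, using edit distance to flag near-matches against a trusted list (the domains and the distance threshold here are illustrative assumptions, not part of the Gallagher research):

```python
# Hypothetical sketch: flag lookalike sender domains that sit within a
# small edit distance of a trusted domain but are not an exact match
# (e.g. "examp1e.com" vs "example.com"). Threshold is an assumption.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list[str]) -> bool:
    """True if the sender domain nearly matches a trusted domain
    (edit distance 1-2) without being identical to it."""
    for trusted in trusted_domains:
        d = levenshtein(sender_domain.lower(), trusted.lower())
        if 0 < d <= 2:
            return True
    return False

print(is_lookalike("examp1e.com", ["example.com"]))  # True  (one-character swap)
print(is_lookalike("example.com", ["example.com"]))  # False (exact match, legitimate)
```

In practice such a check would sit alongside standard controls like SPF, DKIM and DMARC validation rather than replace them.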

Executive visibility creates fresh risk

Fifty-six percent of business leaders report that executive impersonation attempts have increased over the past 12 months. Senior executives' expanding digital footprints (job titles, locations and travel details shared online) give fraudsters the information they need to craft convincing impersonations.

High-profile cases using deepfake video and audio to trick employees into multimillion-pound transfers have shown how realistic these scams have become and the scale of potential losses when controls fail.

AI-enabled deception tops board concerns

AI-enabled fraud is now the top concern for directors, cited by 51% of senior leaders and overtaking traditional digital and physical security risks.

Forty-five percent of organisations report high exposure to phishing and social engineering. Forty percent face high exposure to deepfake scams that mimic voice, image or writing style. Thirty-eight percent identified virtual extortion or impersonation as a major risk.

In the insurance market, deepfake-enabled fraud is emerging as a small but fast-rising subset of cyber and crime claims, often with comparatively high severities. Underwriters are reassessing how social-engineering and business email compromise events are treated in cyber and crime wordings, including limits, conditions and verification requirements.

Physical threats persist alongside digital attacks

Executive risk extends beyond screens and inboxes. Twenty-one percent of organisations report travel-related security risks such as visits to areas with higher exposure to physical attack. Thirteen percent said kidnap-for-ransom remains a concern.

Kidnap-for-ransom is a particular issue for firms operating internationally in marine, military and natural resources sectors, and for companies working in emerging and developing economies.

Operational and reputational fallout

Impersonation and extortion incidents create consequences beyond direct financial loss. Forty-eight percent of organisations reported increased staff anxiety following an extortion attempt. Forty-six percent experienced operational disruption and 38% suffered reputational damage or loss of client trust.

Thirty-nine percent of organisations sought legal advice or reported incidents to regulators. Such events can trigger mandatory notifications and heightened scrutiny where potential breaches of data protection, financial conduct or governance requirements are involved.

Insurance response and controls

Demand for cyber insurance and broader social-engineering protection continues to grow alongside concerns over loss trends and coverage clarity. There is renewed focus on how executive-level exposures are addressed across D&O, crime, cyber and kidnap and ransom policies.

Risk-management measures include staff training, payment-verification protocols and rehearsed incident-response plans. Organisations must recognise that executive exposure has increased significantly and ensure their protection keeps pace, according to Gallagher.
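A payment-verification protocol of the kind mentioned above typically requires independent, out-of-band confirmation before high-value or unusual payments are released. A minimal sketch of such a rule, where the £10,000 threshold and data fields are illustrative assumptions:

```python
# Illustrative payment-verification rule: requests above a threshold, or
# to a new payee, need out-of-band confirmation (e.g. a call-back on a
# known phone number, never a number supplied in the email thread)
# before release. Threshold and fields are assumptions for the example.
from dataclasses import dataclass

CALLBACK_THRESHOLD_GBP = 10_000  # assumed policy limit

@dataclass
class PaymentRequest:
    payee: str
    amount_gbp: float
    payee_is_new: bool        # first payment to this beneficiary?
    callback_verified: bool   # independently confirmed out of band?

def may_release(req: PaymentRequest) -> bool:
    """Release low-value payments to known payees automatically;
    hold everything else until out-of-band verification succeeds."""
    needs_verification = req.amount_gbp >= CALLBACK_THRESHOLD_GBP or req.payee_is_new
    return req.callback_verified or not needs_verification

# A large "urgent" request impersonating a CEO is held without a call-back:
print(may_release(PaymentRequest("Acme Ltd", 250_000, False, False)))  # False
# Routine low-value payment to an established payee goes through:
print(may_release(PaymentRequest("Acme Ltd", 2_500, False, False)))    # True
```

The design point is that verification happens on a channel the fraudster does not control, which is what defeats spoofed emails and cloned voices alike.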

For insurance professionals, understanding how generative AI and LLMs enable these fraud techniques is essential for accurate risk assessment, underwriting decisions and claims evaluation.

