AI Editing Tools Drive Rise in Insurance Fraud, Verisk Study Finds
A study from Verisk, a data analytics firm serving the insurance industry, documents a measurable increase in insurance fraud tied to AI-powered editing tools. The research surveyed 1,000 U.S. consumers and 300 insurance claims professionals to assess how manipulated digital content is reshaping fraud patterns and detection challenges.
The findings reveal significant generational divides in willingness to manipulate claims. Thirty-six percent of consumers said they would consider digitally altering an insurance claim image or document, but younger consumers show much higher rates: 55% of Generation Z and 49% of Millennials, compared with 28% of Generation X and 12% of Baby Boomers.
Awareness of AI-assisted fraud is widespread. Forty-one percent of consumers know someone who has used AI editing tools to alter or create media for financial gain. That figure rises to 64% for Generation Z. Sixty-two percent of all respondents believe people use AI tools to manipulate insurance claim documents often or very often.
Consumers Draw Lines at Different Points
Consumers show varying tolerance for different types of edits. Fifty-two percent said adjusting brightness or contrast is acceptable, and 49% approved of cropping out background elements. Support drops sharply for more serious alterations: 15% said exaggerating damage is acceptable, and 13% said creating images of damage that never occurred is acceptable.
Among consumers who have used AI editing tools, 44% described their edits as "very realistic." This suggests altered content can convincingly resemble authentic materials, making detection harder.
Insurers Face Detection Gaps
Nearly all insurers surveyed (98%) said AI-powered editing tools are driving an increase in manipulated media. Ninety-nine percent reported encountering AI-altered documentation, and 76% said submissions have become more sophisticated over the past year.
Detection confidence varies by fraud type. Fifty-eight percent of insurers said they are very confident in detecting edits to real images or videos. Only 43% expressed strong confidence in assessing the authenticity of digital media at scale. Confidence drops further with deepfakes: just 32% said they are very confident in identifying them.
Sixty-five percent of insurers use third-party AI-based detection tools, and 50% use internally developed AI systems. Despite these investments, 66% of insurers believe digital media fraud goes undetected often or very often across the industry.
System-Wide Consequences
Consumers anticipate broader effects. Sixty-nine percent believe fraudulent claims will lead to higher premiums for all policyholders. Forty-two percent identified rising premiums as a top concern, while 36% expressed worry that legitimate claims could be delayed or denied due to suspicion of manipulation.
Insurers expect operational strain. Looking ahead three to five years, 48% expect increased adoption of technology solutions to address fraud. Forty-five percent foresee stricter documentation requirements, 36% anticipate greater strain on claims teams, 35% expect longer claim cycle times, and 35% predict higher premiums for consumers.
What Comes Next
The study concludes that addressing AI-driven fraud will require more connected systems and shared intelligence across the industry. Insurers face pressure to improve detection tools, integrate them more effectively into claims workflows, and maintain fairness while managing the growing threat of AI-assisted manipulation.
For insurance professionals, understanding these trends is critical. As AI becomes standard in claims processing and fraud detection, staying current on both the capabilities and limitations of these tools matters for operational decisions and risk management.