
How Brokers Can Ensure Responsible AI Usage in Insurance Operations
Leading the Way in Fair, Secure, and Transparent AI Adoption
Artificial intelligence is increasingly integrated into insurance workflows. Recent data shows AI usage in global insurance jumped from 29% in 2024 to 48% in 2025. Despite its benefits, concerns about AI risks are rising. A recent report placed AI as the top risk for the insurance sector this year.
Rajeev Gupta, co-founder and CPO at Cowbell, highlights that AI speeds up underwriting and claims processing but introduces risks like bias, lack of explainability, over-automation, and data privacy issues.
Key AI Risks in Insurance
Bias often creeps in during AI model training, potentially leading to unfair underwriting or claims decisions. AI can also "hallucinate," producing inaccurate or nonsensical outputs. Another challenge is inconsistency—AI may not provide the same answer to identical questions.
Explainability is crucial. When AI decisions lack clear reasoning, trust erodes among brokers and policyholders. This can increase legal risks, as seen in a lawsuit against Cigna, where an AI algorithm allegedly denied over 300,000 claims with minimal human review.
Unchecked automation can amplify flawed decisions quickly. Sensitive data handled by AI can become a target for breaches or misuse, creating a significant security concern.
How Brokers and Insurers Can Mitigate AI Risks
- Identify and report biased AI decisions to improve fairness.
- Review AI outputs to catch errors or misleading information.
- Flag inconsistent results to enhance model reliability.
- Demand clear explanations for AI-driven decisions, especially claim denials.
- Raise client issues promptly to avoid legal or reputational fallout.
- Spot patterns of faulty decisions before they become widespread.
- Educate clients on data usage and promote strong data security practices.
Building Risk-Aware AI Systems
Gupta stresses the importance of starting AI projects with clear guardrails. Teams must have defined responsibilities for building, testing, reviewing, and approving AI models. Regular testing for accuracy and bias is essential.
Setting up alerts for unusual patterns, such as sudden spikes in claim rejections, helps catch problems early. Dashboards that make AI model performance visible to stakeholders add further transparency.
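As an illustration, an alert of the kind Gupta describes could be sketched as follows. This is a minimal sketch, not a prescribed implementation; the 1.5x threshold and the batch sizes are hypothetical and would need tuning against real portfolio data.

```python
# Minimal sketch of a claim-rejection spike alert (hypothetical threshold).
def rejection_rate(decisions):
    """Fraction of decisions in a batch that are rejections."""
    return sum(d == "reject" for d in decisions) / len(decisions)

def spike_alert(baseline, recent, factor=1.5):
    """Alert when the recent rejection rate exceeds the baseline
    rate by more than `factor` times (factor is an assumption)."""
    return rejection_rate(recent) > factor * rejection_rate(baseline)

# Usage: compare a baseline period against a recent batch of AI decisions.
baseline = ["approve"] * 90 + ["reject"] * 10   # 10% rejections
recent = ["approve"] * 70 + ["reject"] * 30     # 30% rejections
assert spike_alert(baseline, recent)            # 30% > 1.5 * 10%, so alert
```

In practice the same comparison would run on a schedule against live decision streams, with the alert routed to a human reviewer rather than an assertion.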
Every AI-assisted underwriting decision should be logged with details like model version, data inputs, scores, and final actions. This creates an audit trail for compliance reviews or regulatory checks.
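The audit-trail idea above can be sketched as an append-only log of decision records. The field names and values here are illustrative assumptions, not a standard schema; a production system would write to durable, tamper-evident storage rather than an in-memory list.

```python
import datetime
import json

def log_decision(log, model_version, inputs, score, action):
    """Append one AI-assisted underwriting decision to an audit log,
    capturing the details a compliance review would need."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the score
        "inputs": inputs,                # data the model saw
        "score": score,                  # model output
        "action": action,                # final action taken
    }
    log.append(entry)
    return entry

# Usage with a hypothetical model version and applicant data.
audit_log = []
entry = log_decision(
    audit_log,
    model_version="uw-model-2.3",
    inputs={"industry": "retail", "annual_revenue": 5_000_000},
    score=0.82,
    action="approve",
)
print(json.dumps(entry, indent=2))
```

Because every entry carries the model version alongside the inputs and outcome, a regulator or internal reviewer can later reconstruct which model made which decision and on what basis.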
Overcoming Barriers to AI Adoption
Fear and misconceptions hold back AI adoption. Some believe AI can’t be trusted because it lacks human judgment. Others worry AI will replace jobs altogether.
Gupta suggests viewing AI as a partner and assistant rather than a replacement. Combined with responsible human oversight, AI can help make smarter, fairer, and more accountable decisions.
Practical Steps to Build Trust and Fairness in AI
- Advocate for transparency in AI decision-making.
- Monitor AI outputs for consistency and fairness.
- Educate clients about AI-driven processes.
- Raise concerns early when issues appear.
- Insist on human oversight in critical decisions.
- Promote awareness of data privacy and security.
- Push for ethical AI development practices.
- Stay informed about AI developments and regulations.
- Support compliance with regulatory requirements.
- Encourage client rights to appeal and review AI decisions.
AI’s Role in the Future of Insurance Brokerage
Karli Kalpala, head of strategy transformation at Digital Workforce, offers a clear perspective: AI doesn’t replace brokers—it enhances their capacity. AI tools handle repetitive tasks, allowing brokers to focus on faster, smarter interactions with carriers and clients.
Kalpala envisions brokers evolving into digital orchestrators who supervise and collaborate with AI tools, strengthening their role as risk experts and trusted advisors.
For insurance professionals ready to build AI skills that support responsible use, exploring targeted AI courses can be valuable. Resources like Complete AI Training offer tailored learning paths for insurance and operations roles.