Inside Manulife’s Responsible AI Strategy: Sustainability, Adoption, and Fairness by Design

Manulife’s AI adoption reached a 75% utilization rate with tools like Chat MFC, supported by interactive training sessions and a focus on sustainability and fairness in AI deployment. The insurer’s responsible AI approach emphasizes efficient model choices and fairness embedded from the start.

Categorized in: AI News, Insurance
Published on: Jul 23, 2025

Manulife's Chief AI Officer on Responsible AI, Part Two

Jodie Wallis, Manulife's first global chief AI officer, shares insights on responsible AI deployment and the insurer's approach to sustainability, adoption, and fairness in AI applications. This is the second part of her conversation, focusing on practical strategies and lessons learned.

Considering Sustainability in AI Deployment

AI relies heavily on computing power, and the large data centers that provide it consume significant amounts of energy. While Manulife doesn't train its own foundation models, it uses models like OpenAI's in its solutions. This means power consumption happens both during model training (by the model providers) and during model execution (which Manulife controls).

To address this, Manulife partners with organizations that have clear sustainability commitments and responsible power consumption principles. Internally, the focus is on choosing AI models that are efficient yet accurate rather than just adopting the newest or largest models. This discipline helps minimize the environmental footprint without compromising performance.

Observations on AI Usage Within the Organization

Wallis was surprised by how quickly employees embraced AI tools. In 2023, Manulife developed an internal general-purpose generative AI tool called Chat MFC and rolled it out to all colleagues and contractors by early 2024.

The company supported adoption through dedicated help resources and interactive sessions called Promptathons, where employees in similar roles practiced using AI for their specific tasks. This hands-on approach resulted in a 75% utilization rate across 40,000 users, an impressive level of engagement.

Practical Advice for Insurance Professionals

  • Invest in Adoption Support: Building AI technology is only part of the challenge. Helping employees change their workflows and feel comfortable with new tools is critical. Underestimating this need can slow or block adoption.
  • Find Business Champions: AI professionals can't drive change alone. Successful scaling requires strong partners within business units who advocate for AI solutions and help integrate them into daily operations.
  • Understand Methodological Differences: AI, especially generative AI, demands different testing and validation approaches compared to traditional deterministic systems. Companies must adapt processes to meet accuracy, transparency, and explainability standards suitable for probabilistic models.
  • Bias and Fairness Testing: Fairness is the goal, and bias is what undermines it. Testing for fairness should be as important as testing for accuracy and explainability. Testing should also cover the appropriateness of AI-generated content and its sustainability impact.
  • Embed Fairness and Accuracy by Design: Instead of testing for bias and accuracy after development, Manulife integrates these principles from the start. This means carefully selecting data and models, and writing code that supports fairness and appropriateness from the beginning.
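The article doesn't describe Manulife's actual testing tooling, but one common fairness check is the demographic parity gap: the difference in favorable-outcome rates between groups. A minimal sketch, with entirely hypothetical data and a hypothetical tolerance threshold:

```python
# Minimal sketch of a demographic parity check for a binary
# classifier's decisions. The groups, outcomes, and tolerance
# here are hypothetical; real fairness testing uses many
# metrics across many protected attributes.

def demographic_parity_gap(decisions):
    """decisions: dict mapping group name -> list of 0/1 outcomes.
    Returns the largest difference in favorable-outcome rates
    between any two groups."""
    rates = {group: sum(d) / len(d) for group, d in decisions.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical approval decisions for two applicant groups
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 = 0.625 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375 approval rate
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.250
# Fail the test suite if the gap exceeds a chosen tolerance
assert gap <= 0.3, "approval-rate gap between groups exceeds tolerance"
```

Running a check like this in a CI pipeline, alongside accuracy tests, is one way to make fairness a gating criterion rather than an afterthought.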

Manulife maintains a strong model risk management process that assesses AI use cases based on their materiality. More significant models undergo independent review either internally or externally. This disciplined approach ensures responsible AI deployment aligned with risk and compliance standards.

For insurance professionals looking to deepen their AI knowledge and skills, exploring targeted AI courses can be a practical step. Resources like Complete AI Training's courses by job role provide relevant learning paths tailored to insurance and financial services.