Decency at the Core: Mastercard's Path to Belonging in an AI-Driven Workforce

Mastercard shows how decency, community, and inclusion make AI work at work. Build trust, set rules, keep humans in key calls, and track fairness so adoption and results improve.

Published on: Feb 21, 2026

Artificial Intelligence + Human Resources: How Mastercard Builds a Culture of Belonging in an AI-Powered Workforce

Amanda Gervay highlights a simple truth: decency, community and inclusion aren't just values - they're performance infrastructure. In an AI-powered workforce, they create the trust and clarity people need to use new tools with confidence.

For HR, this is the job: not just deploying models, but building the conditions where people feel seen, safe and supported as they use AI to do their best work.

Why belonging drives AI performance

AI changes how decisions are made, how work is measured and who gets opportunity. That can trigger fear or disengagement if you don't set the right norms.

Belonging reduces that friction. When people believe the system is fair and their voice matters, they adopt faster, share better data, and improve the tools through feedback.

Principles to bake into your HR operating system

  • Decency as policy: Make commitments explicit - no surveillance without purpose, no hidden evaluations, clear opt-in where possible, and easy ways to challenge outcomes.
  • Community in practice: Involve ERGs, works councils and frontline teams in solution design and testing. Co-creation builds trust and catches edge cases early.
  • Inclusion by design: Set fairness targets up front. Use diverse training data, structured processes and consistent rubrics so AI augments judgment rather than amplifying bias.
  • Transparency and choice: Tell people when AI is used, what inputs matter, and how to correct errors. Offer human review paths for high-stakes calls.
  • Human judgment where it counts: Keep people in the loop for hiring, promotion, pay and termination. AI informs; humans decide.
  • Skills over pedigree: Base talent moves on skills signals from performance, projects and learning - not brand names and degrees.

Governance that protects people and performance

  • Clear ownership: Assign product owners for each AI use case with defined KPIs, risks and escalation paths.
  • Risk controls: Adopt an AI risk framework for bias, privacy, security and explainability. The NIST AI Risk Management Framework is a solid starting point.
  • Bias testing and monitoring: Check models pre-launch and continuously post-launch for adverse impact across protected groups. Document fixes (see the four-fifths rule sketch after this list).
  • Vendor standards: Require transparency on training data sources, evaluation methods and model updates. Bake audit rights into contracts.
  • Data minimization: Collect only what's needed, store less, and set short retention windows. Sensitive data? Keep it out.
  • Explainability thresholds: If you can't explain it, don't use it for high-stakes decisions.
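To make the bias-testing control concrete, here is a minimal sketch of a four-fifths (80%) rule check on selection rates. The group labels, the counts and the adverse_impact_ratios helper are illustrative assumptions, not Mastercard's method or any particular vendor's API; real monitoring would run on production decision data and feed the documented-fixes step.

```python
# Minimal sketch: four-fifths (80%) rule check on selection rates by group.
# All counts below are illustrative placeholders, not real data.
from collections import namedtuple

GroupStats = namedtuple("GroupStats", ["applicants", "selected"])

# Hypothetical screening outcomes per demographic group.
outcomes = {
    "group_a": GroupStats(applicants=400, selected=120),
    "group_b": GroupStats(applicants=250, selected=55),
    "group_c": GroupStats(applicants=150, selected=42),
}

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group and
    flag any ratio below the threshold for human review."""
    rates = {g: s.selected / s.applicants for g, s in outcomes.items()}
    benchmark = max(rates.values())
    report = {}
    for group, rate in rates.items():
        ratio = rate / benchmark
        report[group] = {"rate": round(rate, 3),
                         "ratio": round(ratio, 3),
                         "flag": ratio < threshold}
    return report

for group, row in adverse_impact_ratios(outcomes).items():
    print(group, row)
```

In practice a check like this would run on a schedule against live hiring and promotion data, with flagged groups routed to the use case's product owner and the remediation documented.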

Build capability at every level

  • Executives: Tie AI goals to culture goals. Track belonging, adoption and fairness alongside productivity.
  • People managers: Train on responsible use, feedback coaching and change conversations. Give them scenario playbooks.
  • Employees: Offer bite-size training, prompt libraries and safe sandboxes. Publish do/don't rules in plain language.
  • HR teams: Upskill in people analytics, data literacy and vendor due diligence. Treat AI systems like HR products - with roadmaps, releases and retirements.

Practical plays you can run this quarter

  • Fair screening: Use structured criteria and blind review options for early-stage filtering. Log reasons for recommendations and rejections (a minimal logging sketch follows this list).
  • Skill-based mobility: Map internal skills to projects and roles. Ask for employee consent to use learning and project data, and show how it helps them.
  • Performance support, not surveillance: Give people AI assistants for drafting, analysis and summarization. Ban keystroke logging and hidden scoring.
  • Accessible design: Offer voice, captions and font options. Include disability perspectives in testing.
  • Feedback loops: Add a "Was this fair/useful?" button in HR tools and route patterns to product owners and ERGs.
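As a companion to the fair-screening play above, here is a minimal sketch of what logging reasons for recommendations and rejections could look like. The field names, the pseudonymous IDs and the append-only JSONL file are assumptions for illustration, not a prescribed schema; the point is that every AI-assisted decision leaves a structured, reviewable trail.

```python
# Minimal sketch: structured log entry for an AI-assisted screening decision.
# Field names and the append-only JSONL store are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    candidate_id: str            # pseudonymous ID, no direct identifiers
    requisition_id: str
    recommendation: str          # "advance" or "reject"
    criteria_met: list           # structured rubric criteria, not free-form notes
    reason: str                  # plain-language rationale shown on request
    reviewed_by_human: bool
    timestamp: str

def log_decision(decision: ScreeningDecision, path: str = "screening_log.jsonl") -> None:
    """Append one decision as a JSON line so audits and appeals can replay it."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(decision)) + "\n")

log_decision(ScreeningDecision(
    candidate_id="cand-0481",
    requisition_id="req-2210",
    recommendation="advance",
    criteria_met=["sql_proficiency", "stakeholder_communication"],
    reason="Meets both must-have criteria from the structured rubric.",
    reviewed_by_human=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```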

Metrics that prove it's working

  • Belonging and trust: Sentiment on fairness, voice and psychological safety - sliced by team and demographic.
  • Adoption and impact: Usage rates, task time saved, error rates and rework reductions.
  • Fairness: Adverse impact ratios in hiring, promotion and performance distributions.
  • Talent outcomes: Time-to-fill, internal mobility rate, quality of hire and early attrition.
  • Risk signals: Privacy incidents, escalations, and model exceptions closed within SLA.

Risks to watch - and how to respond

  • Shadow AI: People use unvetted tools when official ones lag. Provide approved options and simple guides.
  • Model drift: Periodically revalidate. If patterns shift, retrain or retire.
  • Over-automation: Don't replace human touch in sensitive moments. Measure employee experience, not just speed.
  • Compliance gaps: Align with employment guidance to avoid discriminatory impact. See the U.S. EEOC guidance on AI in employment.

What Mastercard's example makes clear

Decency, community and inclusion aren't slogans; they're operating rules. Put them into your policies, your product requirements, your training and your metrics.

Do that, and AI becomes a teammate - not a threat. People feel they belong. Performance follows.

Keep building your HR playbook

For practical use cases and how-tos across recruiting, talent and workforce planning, explore AI for Human Resources.

If you're setting enterprise strategy and governance, this AI Learning Path for CHROs can help you move fast and stay responsible.

