Trust, human capital, and regulations: what HR needs to scale AI in energy
AI is everywhere in strategy decks, and yet most companies still aren't seeing returns. A recent discussion on "Responsible AI and advanced technology investment" opened with a stat that should make HR lean in: 85% of companies investing in AI reported no tangible ROI. That's not a tech problem alone. It's a people, trust, and governance problem.
Rakesh Jaggi, President of Digital & Integration at SLB, put it plainly: AI only works if it scales across assets and geographies, runs on trusted data, is managed the right way, and stays cost-effective over time. The catch? That requires people to work differently than they have for decades. That's HR's lane.
The HR mandate: make AI usable, trusted, and sustainable
Technology doesn't fail in isolation; adoption does. If your workforce doesn't trust the data, doesn't understand the model's limits, or can't see how it improves their day, they will default to old habits. HR can remove those friction points and turn pilots into practice.
- Translate AI strategy into roles, skills, and incentives that make new ways of working feel obvious and rewarding.
- Embed "explainability" in job design so people know when to trust a model, when to challenge it, and how to escalate.
- Tie performance goals to measured AI adoption and impact, not slideware.
Data you can trust, or no deal
Gary Hicok, Executive Board Chair of Utilidata, stressed the core of trust: consistent, coherent, third-party-tested data, plus proven cybersecurity practices across software and hardware. If employees suspect the data, they will reject the output, quietly or loudly.
- Assign a data quality owner for each domain, with clear SLAs and issue-resolution paths.
- Require independent validation on critical models that affect safety, production, or compliance.
- Partner with your CISO to make security training specific to AI workflows and devices.
Regulation isn't a blocker; it's a design constraint
Dr. Najwa Aaraj, CEO at the Technology Innovation Institute (TII), highlighted why many innovators choose the UAE: it operates like a live lab with clear guardrails. Frameworks require explainable and auditable AI, with a human in the loop. That clarity attracts startups and accelerates deployment.
- Codify your own Responsible AI policy: explainability, auditability, human oversight, and escalation procedures.
- Create an AI governance council (Ops, HR, Legal, Risk, Cyber) that approves use cases before they scale.
- Use established frameworks like the NIST AI Risk Management Framework to align policies and training.
Monetization needs capability, not just models
Harrison Lung, Group Chief Strategy Officer at e&, pointed to the opportunity: responsible data platforms can serve entire industries and even governments. Monetization depends on accurate insights, and on people who can use them.
- Build data literacy for frontline teams: reading model outputs, spotting drift, and making decisions under uncertainty.
- Develop analytics translators inside operations to convert model results into action on site.
- Recruit for hybrid profiles (process + data) instead of pure data roles siloed from the field.
Investor lens: scale, revenue, and mindset
Areije Al Shakar, CEO of BeVentures, sees a flood of solutions. Her filter is simple: what problem does it solve, can it scale, how does it make money, and does the team push real innovation? HR should use the same filter for internal bets.
- Run vendor and talent diligence together: assess solution viability alongside whether you have the people to adopt it at scale.
- Favor fewer, scalable platforms over a patchwork of tools that fragment skills and training budgets.
- Assess team mindset: curiosity, operational empathy, and a bias for measurable outcomes.
What HR can do this quarter
- Define an AI job architecture: product owners, data stewards, AI champions, and oversight roles.
- Launch a targeted upskilling path for operations teams (2-6 hour modules; job-specific scenarios).
- Publish a simple Responsible AI playbook: what the model does, how to challenge it, how to document exceptions.
- Stand up change champions per site with weekly office hours and adoption dashboards.
- Tie a small portion of variable pay to verified AI use and outcomes (time saved, defects avoided, emissions reduced).
- Add third-party validation to procurement criteria for AI affecting safety or compliance.
Metrics that prove progress (and keep funding)
- Adoption: percent of eligible tasks completed with AI assist; active users per site per week.
- Quality: data issue rate and time-to-resolution; model exception rate and reason codes.
- Impact: hours saved per task, downtime avoided, cost per prediction, emissions or waste reduced.
- Trust: employee confidence scores on model outputs; audit pass rate; phishing/safety test performance post-training.
Make explainability part of the job, not a footnote
- Job aids that show inputs used, last model update, and known failure modes.
- Two-click paths to flag bad recommendations and feed them back into retraining cycles.
- Clear red lines where human approval is mandatory.
Why this matters for HR
AI doesn't fail because models are weak; it fails because cultures are unchanged, incentives are misaligned, and rules are unclear. The panel's message was consistent: trust, human capital, and regulation determine whether AI scales or stalls. HR controls two of the three, and it can influence the third.
If your team needs practical upskilling mapped to roles, explore curated paths by job function here: Complete AI Training - Courses by Job.
For governance alignment, use external anchors that boards recognize, like the NIST AI RMF. Pair that with clear roles, short-cycle training, and adoption incentives, and you'll turn AI from a cost center into a competitive advantage people trust.