National Grid and Emerald AI Trial Real-Time Energy Scaling for AI Datacentres to Ease Grid Strain

National Grid is piloting a scheme with Emerald AI that lets datacentres vary their electricity use in line with grid conditions. The trial aims to ease peaks, speed connections, and cut costs.

Categorized in: AI News Management
Published on: Sep 17, 2025

National Grid pilots flexible energy use for AI datacentres with Emerald AI

National Grid is launching a live trial with Emerald AI to let AI datacentres flex how much electricity they draw based on real-time workload demands. The goal: ease pressure during peak periods, connect more capacity without unnecessary upgrades, and keep reliability high.

National Grid Partners has made a strategic investment in Emerald AI alongside the trial. The project will use Emerald AI's Conductor platform and Nvidia GPUs to adjust datacentre energy consumption dynamically in line with grid conditions.

What's happening

The trial will test whether AI facilities can shift or throttle activity when the grid is tight, then ramp back up when capacity returns. This includes changing the type of compute activity during stress events to stabilise demand without stalling progress on AI projects.

According to National Grid, the transmission network has spare capacity outside of high-demand events such as heatwaves and cold snaps. That headroom creates room to connect new datacentres, provided they can briefly dial down usage during peaks.

Why this matters for management

  • Accelerates connections: Flexible demand could unlock existing grid headroom and shorten timelines for new or expanding AI sites.
  • Controls cost: Smarter consumption helps avoid expensive infrastructure investment and peak-period charges.
  • Supports ESG: Reduces strain during critical periods while improving utilisation of existing assets.
  • Enables growth: Aligns AI build-out with grid realities, supporting the UK's digital economy.
  • De-risks operations: Structured demand flexibility can maintain reliability without blanket caps on capacity.

How flexible demand could work

Emerald AI's Conductor platform will orchestrate GPU-intensive workloads and dynamically adjust consumption in real time. Nvidia GPUs in the trial will be tuned to match workload needs to available capacity while keeping priority jobs on track.

Think scheduled training runs shifted away from peak windows, inference workloads kept steady, and non-urgent jobs queued during grid alerts, then cleared when capacity frees up. The intent is to keep business outcomes intact while smoothing datacentre load on the system.
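The scheduling policy described above can be sketched in a few lines. This is a minimal illustration, not Emerald AI's actual implementation: the grid states, job names, and the `deferrable` flag are all assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical grid states; real signals would come from the network
# operator or an orchestration platform, not from this enum.
class GridState(Enum):
    NORMAL = "normal"      # business as usual
    ALERT = "alert"        # grid is tight: shed flexible load
    RECOVERY = "recovery"  # capacity returning: clear the backlog

@dataclass
class Job:
    name: str
    kind: str         # e.g. "inference", "training", "batch"
    deferrable: bool  # can this job wait out a grid alert?

def schedule(jobs: list[Job], state: GridState) -> tuple[list[Job], list[Job]]:
    """Split jobs into those to run now and those queued for later.

    Sketch of the policy in the article: inference stays steady, while
    deferrable work (training, batch) is queued during grid alerts and
    released once capacity frees up.
    """
    run_now, queued = [], []
    for job in jobs:
        if state == GridState.ALERT and job.deferrable:
            queued.append(job)   # hold until the alert clears
        else:
            run_now.append(job)  # priority work keeps running
    return run_now, queued

# Illustrative workload mix during a grid alert.
jobs = [
    Job("chatbot-serving", "inference", deferrable=False),
    Job("nightly-finetune", "training", deferrable=True),
    Job("log-analytics", "batch", deferrable=True),
]
run_now, queued = schedule(jobs, GridState.ALERT)
```

During an alert, only the inference job keeps running; the two deferrable jobs wait in the queue until the grid recovers.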

What leaders should ask now

  • Operational impact: Which AI workloads can flex without breaking SLAs, compliance, or customer experience?
  • Governance: Who approves curtailment windows and workload prioritisation? How is this audited?
  • Commercials: What incentives or tariffs are available for flexibility and demand response participation?
  • Measurement: How will savings, reliability, and emission impacts be verified and reported?
  • Security and safety: How are orchestration controls protected and tested?
  • Ecosystem alignment: Are cloud providers, colocation partners, and network operators ready to coordinate?

What the organisations say

National Grid said there is often room on the existing grid to connect new datacentres if they can temporarily reduce usage during peak demand. The aim is to manage growing loads without over-investing in new infrastructure.

Steve Smith, chief strategy and regulation officer at National Grid, said the trial shows how innovation can optimise the grid, enable investment in advanced computing, and deliver benefits to the UK economy.

Varun Sivaram, founder and CEO of Emerald AI, said flexible AI facilities can advance AI innovation while improving reliability and affordability for everyone connected to the grid.

Action plan for management teams

  • Map AI workloads by flexibility: training, batch analytics, RAG updates, inference; define what can shift and for how long.
  • Set clear policies: peak-period rules, priority tiers, pre-approved curtailment playbooks, and escalation paths.
  • Run pilots: test scheduling, throttling, and job switching on a subset of GPUs; collect operational and financial data.
  • Engage early: align with your DNO/TSO, colocation partners, and cloud providers on flexibility options and incentives.
  • Report outcomes: tie operational data to ESG metrics and board-level investment cases.
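The first step in that plan, mapping workloads by flexibility, could start from a simple policy table. The tier names and deferral windows below are illustrative assumptions, not figures from the trial:

```python
# Hypothetical flexibility tiers for AI workloads; windows are assumptions
# a management team would set for its own estate, not trial parameters.
FLEX_TIERS = {
    "inference":       {"tier": 0, "max_defer_hours": 0},   # customer-facing: must run
    "rag_update":      {"tier": 1, "max_defer_hours": 4},   # freshness-sensitive
    "batch_analytics": {"tier": 2, "max_defer_hours": 12},
    "training":        {"tier": 3, "max_defer_hours": 24},  # most flexible
}

def can_curtail(workload: str, alert_hours: float) -> bool:
    """Return True if a workload can wait out a grid alert of the given length."""
    policy = FLEX_TIERS.get(workload)
    if policy is None:
        return False  # unknown workloads are treated as inflexible by default
    return policy["max_defer_hours"] >= alert_hours
```

For a three-hour peak event, training and batch analytics could pause while inference keeps running; a six-hour event would also exceed the four-hour window assumed here for RAG updates.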

Timeline and what to watch

The live trial is planned for later this year. Results should clarify how much flexible demand AI datacentres can provide, how to structure incentives, and where this approach can defer capital spend while protecting reliability.

  • Learn more about National Grid's system and responsibilities
  • See Nvidia's data center GPU platforms
