DOE and Lawrence Livermore launch AI testbed for energy sector security testing

The DOE has launched Mjölnir, an AI testbed built with Lawrence Livermore Lab that lets energy operators test AI models for security flaws before deploying them on critical infrastructure.

Published on: Apr 16, 2026

DOE launches AI testbed to evaluate models for energy operations

The Department of Energy's Office of Cybersecurity, Energy Security and Emergency Response has partnered with Lawrence Livermore National Laboratory to build an AI testbed that identifies weaknesses in models used across the energy sector. The platform, called Mjölnir, lets utilities, grid operators, vendors, and research organizations test AI systems before deploying them in critical infrastructure.

Users upload AI models to the testbed and run adversarial tests to assess security vulnerabilities. The platform measures how easily models can be manipulated, whether they leak sensitive data, and how resistant they are to attack.

"The testbed enables users to observe the effects of attacks and quantify how vulnerable the model is to manipulation and leaked information," DOE said. "This facilitates apples-to-apples comparisons between models, showing users which model options are most robust and by what margin."
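DOE has not published Mjölnir's interfaces, but the kind of apples-to-apples robustness comparison it describes can be sketched generically. The example below is purely illustrative and assumes nothing about the actual platform: it trains two off-the-shelf classifiers on synthetic data and scores each by the fraction of predictions that survive bounded random input perturbations, so the two models can be ranked on the same metric.

```python
# Illustrative sketch only -- not the Mjölnir API. Compares two models
# on one shared robustness metric: the fraction of test predictions
# that survive random input perturbations bounded by eps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def robustness(model, X, eps=0.3, trials=20, seed=0):
    """Fraction of inputs whose prediction is unchanged under `trials`
    random perturbations of magnitude <= eps (higher = more robust).
    Function name, eps, and trials are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)          # clean predictions as the baseline
    survived = np.ones(len(X), dtype=bool)
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=X.shape)
        survived &= model.predict(X + noise) == base
    return survived.mean()

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Same data, same perturbation budget, same metric: a like-for-like
# comparison showing which model is more robust and by what margin.
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, round(robustness(model, X_te), 3))
```

A real testbed would swap the random noise for targeted adversarial attacks and add data-leakage probes, but the comparison structure, one fixed metric applied identically to every candidate model, is the same.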

Why this matters for operations

AI systems are increasingly embedded in energy grid operations and other critical workflows. They handle sensitive operational data and make decisions that affect power delivery. When these systems fail or get compromised, the consequences extend beyond the organization.

Researchers at the Japan AI Safety Institute warned last year that failures in AI security could result in privacy violations, operational disruptions, economic damages, and threats to public safety.

Threat actors are actively targeting AI models. Anthropic reported that its models have been targeted by competitors attempting to steal information about how the technology works. OpenAI narrowly avoided a supply-chain attack this month after a popular open-source library was compromised.

What the testbed does

Mjölnir allows energy-sector organizations to evaluate AI models in a controlled environment before integration into live systems. The testing covers attack scenarios, data exposure risks, and model behavior under adversarial conditions.

The platform supports compliance with presidential AI policy outlined in the AI Action Plan and Genesis Mission.

For operations professionals, the testbed provides a structured way to assess risk before deploying AI in production workflows. Security teams can use the same results to understand model-specific vulnerabilities and compare candidate models on resilience.

