US government strikes deals with Google DeepMind, Microsoft and xAI to vet AI models for national security risks before release

The US Department of Commerce struck AI safety review deals with Microsoft, Google DeepMind, and xAI, requiring early model access to assess cybersecurity and bioweapon risks. Similar deals with OpenAI and Anthropic date back two years.

Categorized in: AI News, Government
Published on: May 07, 2026
US Government Strikes AI Safety Review Deals With Microsoft, Google DeepMind, xAI

The US Department of Commerce's Center for AI Standards and Innovation announced agreements Tuesday with three major AI developers to review their models before public release, focusing on national security risks tied to cybersecurity, biosecurity, and chemical weapons.

The deals with Microsoft, Google DeepMind, and xAI follow similar arrangements the Biden administration secured with OpenAI and Anthropic two years ago. CAISI has already completed more than 40 such evaluations on unreleased models.

Chris Fall, CAISI director, said the review process would help the federal government understand the capabilities of powerful new AI systems and protect national security. "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," Fall said.

How the Review Process Works

Developers typically give the government early access to versions of their AI models with safety guardrails reduced or removed. This lets federal agencies thoroughly evaluate national security-related capabilities and risks before the public gains access.

Microsoft said in a blog post that while the company conducts extensive AI testing internally, "testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments."

Microsoft announced parallel agreements with the UK's government-backed AI Security Institute on the same day.

Context and Broader Concerns

The agreements arrive as concerns mount about releasing the newest, most powerful AI models to the public. Experts worry that advanced systems could enable hackers to exploit cybersecurity vulnerabilities at scale.

Anthropic, which released its Mythos model with limited access, launched Project Glasswing to coordinate with other tech companies on securing critical software systems.

The Trump administration is reportedly considering an executive order to formalize government oversight of AI tools, though officials characterized the reporting as "speculation."
