NSA Uses Anthropic AI Model Despite Government Risk Assessment
The National Security Agency is deploying Anthropic's "Mythos" AI model even as the U.S. government has flagged the company as a potential risk, according to reporting from CNBC.
The move reveals a gap between official risk classifications and actual procurement decisions within federal agencies. Government officials have assessed Anthropic as a security concern, yet operational units continue integrating the company's technology into classified work.
Anthropic, the AI company founded by former OpenAI executives, has built Mythos as a large language model designed for complex reasoning tasks. The NSA's use of the system suggests federal agencies see practical value in the technology despite broader policy concerns.
The contradiction underscores ongoing tensions in how the government evaluates and deploys emerging AI systems. Agencies must balance security protocols against operational needs and technological capability.
For government officials overseeing AI adoption, the situation highlights the need for clearer frameworks around vendor risk assessment and procurement.
The NSA did not immediately respond to requests for comment on its use of Anthropic's systems.