Vance, Bessent questioned tech CEOs on AI security before Anthropic's Mythos release
Vice President JD Vance and Treasury Secretary Scott Bessent called leading tech CEOs last week to discuss artificial intelligence model security and how to respond if the technology is exploited by attackers, according to two people familiar with the conversation.
The private phone call occurred before Anthropic released its new Mythos model, which has significant cybersecurity applications. Anthropic's Dario Amodei, xAI's Elon Musk, Google's Sundar Pichai, OpenAI's Sam Altman, Microsoft's Satya Nadella, CrowdStrike's George Kurtz and Palo Alto Networks' Nikesh Arora participated.
The discussion focused on the security posture of large language models and their safe deployment. Officials also discussed how to respond if advances in model capabilities were to shift the balance in favor of attackers.
Government briefing before public launch
Anthropic said it briefed senior officials across the U.S. government on Mythos Preview's full capabilities before any external release. The company outlined both offensive and defensive cyber applications and identified risks and mitigation measures.
"Bringing government into the loop early, on what the model can do, where the risks are, and how we're managing them, was a priority from the start," an Anthropic official said.
Anthropic released Mythos to a limited group of companies on Tuesday. Apple, Google, Microsoft, Nvidia, Palo Alto Networks and CrowdStrike were among the initial launch partners.
Broader government concern
Bessent and Federal Reserve Chair Jerome Powell called a separate meeting this week with heads of major U.S. banks to address potential threats posed by Mythos. The meeting signals escalating concern from the Trump administration about advanced cyber tools.
The engagement shows Anthropic remains in conversations at the White House, even as President Donald Trump seeks to remove the company's Claude platform from federal agencies.
Legal battles continue
A federal appeals court denied Anthropic's request Wednesday to temporarily block a Department of Defense supply chain risk designation. This came weeks after a federal judge in San Francisco granted Anthropic's request for a preliminary injunction in a separate legal challenge.
The conflicting rulings leave Anthropic blocked from DOD contracts but able to continue work with other federal agencies while the cases proceed.