Microsoft, Google’s DeepMind and Elon Musk’s xAI have agreed to share early versions of their powerful AI models with the US government for pre-clearances and security reviews, the Department of Commerce said Tuesday.
The department’s Center for AI Standards and Innovation said it will “conduct pre-deployment evaluations and targeted research” to better understand the capabilities and risks that come with new tools.
Previous agreements with Anthropic and OpenAI have also “been renegotiated” to reflect Commerce Secretary Howard Lutnick and President Trump’s new directives on security reviews, the center said.
The moves come amid mounting fears over new AI tools like Anthropic’s Mythos, which the company’s execs warned could cause a wave of hacks and terror attacks if it ever fell into the wrong hands.
Google DeepMind declined to comment, though Tom Lue, the research lab’s vice president of global AI affairs, confirmed the partnership in a social media post Tuesday.
The White House, Department of Commerce, Microsoft and xAI did not immediately respond to The Post’s requests for comment.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” Chris Fall, director of the Center for AI Standards and Innovation, or CAISI, said in a statement Tuesday.
“These expanded industry collaborations help us scale our work in the public interest at a critical moment,” he added.
Fall was recently announced as the center’s director after ex-Anthropic researcher Collin Burns was pushed out just four days into the job, according to the Washington Post. The leadership change came amid an ongoing feud between the White House and Anthropic over safety policies.
The AI Safety Institute was established in 2023 under the Biden administration. It was renamed CAISI under the Trump administration, with the White House aiming to lift AI safety guardrails to speed the rollout of new models.
Trump previously touted the need for rapid tech acceleration with the goal of beating China in the global AI race.
But Anthropic’s controversial rollout of Mythos has given some policymakers pause.
A nightmarish analysis from Anthropic itself showed that Mythos, if compromised, could be used to exploit electric grids, power plants and hospitals.
The model has already “found thousands of high-severity vulnerabilities, including some in every major operating system and web browser,” the AI company previously trumpeted. It said access would be limited to a group of companies including Amazon, Google and JPMorgan.
Anthropic CEO Dario Amodei has predicted that rivals will catch up within months.
OpenAI is planning a limited release of its latest model, called GPT-5.5-Cyber, over security concerns.
Americans appear to be growing skeptical of AI innovation.
A Pew Research Center poll last year found that 50% of Republicans and 51% of Democrats said they were more concerned than excited about the increased use of AI in daily life.