Microsoft, Google, and Elon Musk’s xAI have entered agreements with the US government allowing federal agencies to test their most advanced AI models before public release, CNN reported on May 5. The arrangements give the Center for AI Safety and Innovation (CAISI) and other authorised government bodies pre-release access to frontier AI systems — a significant expansion of government oversight of technology that has so far developed largely outside formal regulatory frameworks.
The Context: Anthropic’s Restricted Mythos Model
The agreements come in the context of growing concern about Anthropic’s unreleased Mythos model — described by Anthropic as ‘far ahead’ of other available models on cybersecurity capabilities — which has triggered significant alarm among governments, banks, and utility companies. Anthropic has stated it does not feel comfortable releasing Mythos publicly and is restricting access to a select group of approved organisations. The company has separately briefed senior US government officials on Mythos’s capabilities, effectively creating a precedent that other major AI labs are now following.
OpenAI also announced last week that it is making its most advanced AI models available to vetted government agencies at all levels, with the aim of getting ahead of AI-enabled threats — a move that mirrors the pre-release testing framework that Microsoft, Google, and xAI have now formalised.
Why Government Testing Matters
Jessica Ji, senior research analyst at Georgetown’s Center for Security and Emerging Technology, noted that the partnerships help address a fundamental resource imbalance in AI oversight. “They simply don’t have the same amount of resources as big tech companies — either manpower, technical staff, or access to compute — to rigorously test these models,” Ji said of government agencies.
The arrangement represents a significant shift from the largely voluntary safety commitments that AI companies made in 2023 and 2024, and from the more confrontational dynamic that emerged between Anthropic and the US Department of Defense earlier in 2026 over use restrictions.