Microsoft, Google, xAI give US access to AI models for security testing

May 5, 2026

Tech Giants Partner with U.S. Government for AI Security Testing

Microsoft, Google, and xAI have agreed to provide the U.S. federal government access to their advanced artificial intelligence models for national security evaluations. The announcement was made by the Center for AI Standards and Innovation (CAISI) at the Department of Commerce, just days after the Pentagon revealed its own agreement with seven technology companies to integrate AI into classified systems.

The new arrangement allows federal officials to assess AI models prior to their deployment, facilitating research aimed at understanding their capabilities and associated security risks. This step aligns with a commitment made in July by President Donald Trump's administration to collaborate with tech firms on evaluating AI models for potential national security threats.

Microsoft plans to collaborate closely with government scientists to examine AI systems, focusing on identifying unexpected behaviors. The company stated it will develop shared datasets and workflows for comprehensive testing of its models.

Additionally, Microsoft has established a similar agreement with the United Kingdom’s AI Security Institute. Concerns regarding the national security implications of powerful AI systems have intensified in Washington. By gaining early access to leading models, U.S. officials aim to identify risks, such as cyberattacks or military misapplications, before these tools are widely deployed.

The recent unveiling of Anthropic’s Mythos model has amplified discussions among U.S. officials and corporate leaders about its potential impact on hacking activities. CAISI Director Chris Fall emphasized the importance of rigorous measurement science in understanding frontier AI technologies and their implications for national security.

This initiative builds upon previous agreements made under President Joe Biden’s administration, when CAISI, previously known as the U.S. Artificial Intelligence Safety Institute, focused on establishing AI testing protocols, definitions, and voluntary safety standards. Elizabeth Kelly, who led those efforts, has since joined Anthropic, according to her LinkedIn profile.

CAISI has already conducted over 40 evaluations of cutting-edge AI models, including those not yet released to the public. Developers often submit versions of their models with certain safety features reduced to allow for thorough risk assessments.

Neither Google nor xAI immediately commented on the agreement. Following the announcement, Microsoft's stock fell 0.6 percent in midday trading on Wall Street, while Alphabet, Google's parent company, rose 1.3 percent. xAI is not publicly traded.

This announcement follows a separate agreement between the Department of Defense and seven tech companies—Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX—to utilize their AI systems within classified networks. The Pentagon stated that this framework would enhance decision-making for military personnel in complex operational settings. Notably, Anthropic is absent from this agreement following past disputes with the Trump administration over the ethical and safety considerations regarding AI in military applications.
