This week, the Trump administration is focusing on the oversight of artificial intelligence (AI) models before they are released to the public. Major technology companies like Microsoft, Google DeepMind, and Elon Musk’s xAI have reportedly consented to offer the US government early access to their upcoming AI models for the purpose of national security assessments.
The National Institute of Standards and Technology has issued a press release confirming that the Center for AI Standards and Innovation (CAISI) has reached an agreement with the three companies, enabling the institute to conduct pre-deployment evaluations and targeted research aimed at deepening the understanding of advanced AI capabilities and improving AI security.
The agreement comes amid rising cybersecurity concerns surrounding AI models such as Anthropic’s Claude Mythos. That model has ignited a worldwide debate about the potential dangers of powerful AI systems, prompting governments and regulators around the globe to tighten safeguards and push for greater transparency in AI development and deployment.
CAISI Director Chris Fall emphasized that independent, rigorous measurement science is vital for understanding cutting-edge AI and its implications for national security. He said these expanded collaborations with industry enable the center to advance its work in the public interest at a pivotal time. In addition to evaluating models for national security and public safety risks before release, the center will carry out research and assessments after AI models have been deployed.
Meanwhile, the White House is assembling a team of specialists to advise the government on how to review AI models effectively. Discussions are currently under way with leaders from tech companies, including Google and OpenAI, to help shape the framework for AI security evaluations, safety standards, and oversight procedures.
