Highlights
Anthropic Challenges Supply Chain Risk Designation
US-based artificial intelligence company Anthropic announced on Saturday that it will legally contest any actions taken by the US government to classify it as a supply chain risk. This follows Secretary of War Pete Hegseth’s announcement regarding potential actions against the firm due to disagreements over the deployment of its AI model, Claude.
Government Plans to Label Anthropic as Risk
Earlier the same day, Hegseth said on X that he was instructing the Department of War to designate the company as a supply chain risk. The proposed step follows months of discussions that reportedly stalled over two restrictions Anthropic sought: refusing to permit its AI to be used for mass domestic surveillance of Americans, and refusing to permit its use in fully autonomous weapons.
Anthropic’s Reaction to the Proposed Designation
In an official statement, Anthropic described the Trump administration’s intention to label the company as a supply chain risk as “unprecedented.” The firm noted that it had not yet received any formal information from either the Department of War or the White House concerning the designation.
The company stated that designating Anthropic as a supply chain risk would be a historic action, one typically reserved for US adversaries and never before applied to an American company. Anthropic expressed disappointment over the development, emphasizing that as the first frontier AI company to integrate its models into the US government’s classified networks, it has supported American warfighters since June 2024 and intends to continue doing so.
Support for Lawful AI Uses
Anthropic insisted that it backs all lawful national security applications of AI, aside from the two disputed uses. According to the firm, these exceptions have not interfered with any government mission to date.
Detailing its position, Anthropic asserted that present “frontier” AI models lack the reliability necessary for deployment in fully autonomous weapons systems, and that permitting current models to be used in this way could jeopardize the safety of America’s warfighters and civilians. It further maintained that mass domestic surveillance of Americans infringes on fundamental rights.
Concerns about Potential Designation
Hegseth suggested that the supply chain risk designation could prevent military contractors from working with Anthropic. The company challenged this interpretation, contending that under 10 USC 3252 the designation would affect only the use of Claude in Department of War contracts and would not prohibit contractors from using its services for other clients.
Continuous Support for Clients
Anthropic assured its individual and commercial customers that there would be no disruption to their access to Claude via its API, website, or products. The company added that its sales and support teams remain available to address any concerns, and reiterated its commitment to supporting US military operations within what it described as clear ethical guidelines.