Highlights
OpenAI and Anthropic: Military AI Contracts and Risks
OpenAI, an artificial intelligence company, has announced a partnership with the US Department of War to deploy its models within secure classified networks. In contrast, Anthropic has raised concerns about being labelled a “supply chain risk” after declining to consent to specific military applications of its technology.
On 28 February, OpenAI’s CEO, Sam Altman, announced on the platform X that the firm had struck a deal with the Department of War to integrate its models into classified networks.
According to Altman, the agreement incorporates protections that adhere to the organisation’s safety guidelines. He cited two key safety principles: a ban on domestic mass surveillance and a requirement for human accountability in the use of force, particularly with autonomous weapons. Altman added that the Department agrees with these principles and has reflected them in the deal’s legal and policy terms.
OpenAI also outlined technical controls for the deployment. Altman said the company intends to build robust technical safeguards to ensure its models function as intended, noting that it would assign field deployment engineers and operate strictly on cloud networks.
The company is advocating for the same conditions across the AI sector. Altman said OpenAI has asked the Department of War to extend these terms to all AI organisations, expressing a desire to move away from legal disputes and towards constructive agreements.
The Importance of AI in Military Operations
This announcement highlights the increasing importance of advanced artificial intelligence systems in military activities, intelligence assessment, and logistics, which governments are increasingly viewing as strategically essential.
Anthropic’s Concerns and Risks
In a separate development, Anthropic has reported potential repercussions from the Department following stalled negotiations over two proposed applications of its Claude model.
The company disclosed that Secretary of War Pete Hegseth had announced via X that he was instructing the Department of War to classify Anthropic as a supply chain risk.
According to Anthropic, the disagreement stemmed from their refusal to authorise “mass domestic surveillance of Americans” and the use of fully autonomous weaponry.
The firm asserted that current advanced AI models are not yet reliable enough for full deployment in autonomous weapons, cautioning that such applications would pose risks to both American military personnel and civilians.
Furthermore, Anthropic stated that extensive domestic surveillance infringes upon civil liberties, insisting that such practices contravene fundamental rights.
The company asserted that this classification would be unprecedented, noting that designating Anthropic as a supply chain risk is an action historically reserved for foreign adversaries and has never been publicly enacted against a US company.
Anthropic made it clear that it would contest any such designation in court, reiterating that no degree of pressure from the Department of War would alter its stance on mass domestic surveillance or fully autonomous systems.
Despite this ongoing situation, Anthropic affirmed that its services for the majority of clients would remain intact, emphasising that any limitations would only affect work connected to Department of War contracts.