Anthropic has made headlines after the U.S. Department of Defense designated the company a supply chain risk. President Donald Trump has mandated that all federal agencies cease the use of Anthropic products. With OpenAI’s recent agreement with the DoD now in place, government entities, including the State Department, must rely on OpenAI’s AI solutions.
As the government restricts the use of Anthropic’s AI tools, tech professionals are voicing their opposition and urging Congress to overturn the designation, even as Claude sees an uptick in user adoption.
Anthropic Exiled from Government Use
Treasury Secretary Scott Bessent took to X (formerly Twitter) to announce that his department is discontinuing the use of Anthropic products.
Bessent said the American public deserves confidence that every tool in government serves the public interest, adding that under President Trump, no private entity will dictate the terms of national security.
Furthermore, Reuters has reported that the U.S. Department of Health & Human Services has instructed its staff to transition to alternative AI solutions such as ChatGPT and Gemini.
At the direction of @POTUS, the @USTreasury is terminating all use of Anthropic products, including the use of its Claude platform, within our department. The American people deserve confidence that every tool in government serves the public interest, and under President Trump…
— Treasury Secretary Scott Bessent (@SecScottBessent) March 2, 2026
In a similar vein, the U.S. State Department has issued a directive favouring OpenAI tools over Anthropic’s offerings, stating that “For now, StateChat will use GPT4.1 from OpenAI,” as quoted by Reuters.
The removal stems from the DoD’s dispute with Anthropic over rules governing the use of its AI for autonomous weapon targeting and mass surveillance. Anthropic insists on strict limits on these applications, while the government is pushing for more leeway.
With Anthropic declining to meet the DoD’s requirements, OpenAI has stepped in, striking a deal with the Defense Department and igniting significant backlash.
Advocating to Remove Anthropic from the “Supply Chain” Blacklist
Tech professionals from various leading tech firms have collectively signed an open letter addressed to the Department of War and Congress, urging the removal of the “supply chain risk” designation.
The letter has been signed by 121 representatives of prominent global technology companies, including OpenAI, Slack, Cursor, and IBM, among others. It argues that the federal government should not penalise a private enterprise for declining to agree to contract modifications.
The signatories are also urging Congress to examine whether such extraordinary measures against an American tech firm are justified.
The letter further warned that punishing a corporation for refusing contract alterations sets a concerning precedent, and that other tech companies may feel coerced into acquiescing to government demands out of fear.
It concluded with a statement highlighting that the United States leads in AI innovation due to its commitment to free enterprise and the rule of law; undermining that commitment to penalise one organisation is short-sighted and contrary to the interests of national security.