Tech Titans Rally Behind Anthropic in Legal Battle Against U.S. Government


Anthropic Files Lawsuit Against U.S. Defense Department

Anthropic is engaged in a legal battle with the U.S. Defense Department after being deemed a supply-chain risk. On March 9, 2026, more than 30 employees of OpenAI and Google DeepMind submitted an amicus brief backing Anthropic in its dispute with the U.S. government.

Support from Industry Employees

The amicus brief states that the government’s classification of Anthropic as a supply-chain risk was an arbitrary and improper exercise of authority, with significant implications for the industry.

Concerns About AI Discourse

According to the brief, there are serious concerns that the Defendants' actions could chill public discussion of the benefits and risks of AI and reduce U.S. competitiveness in AI and innovation more broadly. The brief was filed shortly after Anthropic initiated two lawsuits against the Department of Defense and other federal entities.

Contractual Issues Raised

In the filing, employees of Google and OpenAI argue that if the Pentagon were dissatisfied with the terms of its agreement with Anthropic, it was within its rights to cancel the contract and engage another leading AI firm.

Potential Alternatives Not Pursued

They assert that the Pentagon could simply have ended the contract if its expectations were not met and sought similar services from other AI companies, which it eventually did by establishing a partnership with OpenAI.

Consequences for US Competitiveness

The brief further argues that if this punitive action against a leading U.S. AI firm is allowed to stand, it would almost certainly harm the country's competitiveness in the industrial and scientific fields tied to artificial intelligence and beyond.

Advocating for Safety in AI

The document also highlights Anthropic's commitment to safety protocols that address concerns such as mass surveillance and autonomous weaponry, aimed at preventing harmful or dangerous applications of AI. It stresses that existing laws regulating AI are not sufficiently robust, making the safeguards built into AI systems themselves crucial to preventing serious misuse of the technology.
