StartupSuperb

OpenAI Partners with U.S. Defense as Anthropic Navigates Supply Chain Challenges

by Akash Das
February 28, 2026
in Tech
Reading Time: 6 mins read



OpenAI and Anthropic: Military AI Contracts and Risks


OpenAI, an artificial intelligence company, has announced a partnership with the US Department of War to deploy its models within secure classified networks. Anthropic, by contrast, has raised concerns about being labelled a “supply chain risk” after declining to consent to specific military applications of its technology.


On 28 February, OpenAI’s CEO, Sam Altman, announced on X that the firm had struck a deal with the Department of War for classified network integration.

According to Altman, the agreement incorporates protections that adhere to the organisation’s safety guidelines. He said two of its key safety principles are a ban on domestic mass surveillance and human accountability for the use of force, particularly with autonomous weapons. Altman added that the Department concurs with these principles, has reflected them legally and politically, and has written them into the deal.

OpenAI also outlined plans for technical controls around the deployment. Altman said the company intends to build robust technical safeguards to ensure its models function properly, noting that it would assign field deployment engineers and work strictly on cloud networks.

The company is advocating for the same conditions across the AI sector. Altman said OpenAI has asked the Department of War to extend these terms to all AI organisations, expressing a desire to move away from legal disputes and towards constructive agreements.

The Importance of AI in Military Operations

This announcement underscores the growing importance of advanced artificial intelligence systems in military activities, intelligence assessment, and logistics, which governments increasingly view as strategically essential.

Anthropic’s Concerns and Risks

In a separate development, Anthropic has reported potential repercussions from the Department following stalled negotiations concerning two proposed applications of its Claude model.

The company disclosed that Secretary of War Pete Hegseth had announced via X that he was instructing the Department of War to classify Anthropic as a supply chain risk.

According to Anthropic, the disagreement stemmed from their refusal to authorise “mass domestic surveillance of Americans” and the use of fully autonomous weaponry.

The firm asserted that current advanced AI models are insufficiently reliable for full deployment in autonomous weapons, cautioning that such applications would pose risks to both American military personnel and civilians.

Furthermore, Anthropic stated that extensive domestic surveillance infringes upon civil liberties, insisting that such practices contravene fundamental rights.

The company asserted that the classification would be unprecedented, noting that such a designation has historically been reserved for foreign adversaries and has never been publicly applied to a US company.

Anthropic made clear it would contest any such designation in court, reiterating that no degree of pressure from the Department of War would alter its stance on mass domestic surveillance or fully autonomous systems.

Despite the ongoing dispute, Anthropic affirmed that services for the majority of its clients would remain intact, emphasising that any limitations would affect only work connected to Department of War contracts.


Tags: AI, artificial intelligence
Akash Das

Hi, I’m Akash, an entrepreneur, tech enthusiast, digital marketer, and content creator on a mission to inspire innovation and drive transformation through technology and creativity. My expertise extends to digital marketing, where I craft data-driven strategies for SEO, social media, and branding to empower businesses and creators to grow their online presence. Alongside my entrepreneurial journey, I share my insights and discoveries through engaging blogs, tutorials, and YouTube content.
