Hegseth declares Anthropic a supply chain risk, restricting military contractors from doing business with AI giant

Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a "supply chain risk to national security" on Friday, following days of increasingly heated public conflict over the company's effort to place guardrails on the Pentagon's use of its technology. 

Hegseth declared on X that effective immediately, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon.

"America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," Hegseth wrote.

President Trump announced earlier Friday that all federal agencies must "immediately" stop using Anthropic, though the Defense Department and certain other agencies can continue using its AI technology for up to six months while transitioning to other services.

Anthropic vowed in a statement to "challenge any supply chain risk designation in court," calling the move "legally unsound" and warning it would set a "dangerous precedent for any American company that negotiates with the government." The company argued that Hegseth doesn't have the legal authority to ban military contractors from doing business with Anthropic, since a risk designation would only apply to contractors' work with the Pentagon.

Anthropic was awarded a $200 million contract from the Pentagon last July to develop AI capabilities that would advance national security. 

In his statement, Hegseth accused the company of attempting "to strong-arm the United States military into submission" and said he would not allow it "to seize veto power over the operational decisions of the United States military." Anthropic's stance, he said, "is fundamentally incompatible with American principles."

In an exclusive interview with CBS News Friday evening, Anthropic CEO Dario Amodei disputed that characterization and called the government's actions "retaliatory and punitive." 

"We are patriotic Americans," he said. "...Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security. Our leaning forward in deploying our models with the military was done because we believe in this country."

The conflict centers around Anthropic's push for guardrails that would explicitly prevent the military from using its powerful Claude AI model to conduct mass surveillance on Americans or to power fully autonomous weapons.  

The Pentagon, for its part, demanded the ability to use Claude for "all lawful purposes." The military's position is that it's already illegal for the Pentagon to conduct mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons. 

Amodei described the company's guardrails around surveillance and autonomous weapons as "narrow exceptions," and stressed that the company still hoped to work with the Defense Department if an agreement could be reached.

"We're not going to move on those red lines," Amodei told CBS News. But he added, "For our part and for the sake of U.S. national security, we continue to want to make this work."

Anthropic is the only AI firm whose model is deployed on the Pentagon's classified networks to date. But in a social media post Friday night, OpenAI CEO Sam Altman said his company had "reached an agreement with the Department of War to deploy our models in their classified network."

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman wrote, adding that OpenAI is asking the Defense Department "to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept." 

The decision to cut off Anthropic came after an increasingly heated dispute with the Pentagon that highlighted sweeping disagreements about the role of AI in national security and the potential risks that the powerful technology could pose.

The Pentagon had given Anthropic a deadline of Friday at 5:01 p.m. to either reach an agreement or lose out on its lucrative contracts with the military.

Hegseth also called Anthropic "sanctimonious" and arrogant on Friday, repeating his accusations that the company sought to strong-arm the military and claim veto power over its operational decisions. "That is unacceptable," he wrote.

But Amodei has argued that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons, and a powerful AI model could raise serious privacy concerns. He says the company understands that military decisions are made by the Pentagon and has never tried to limit the use of its technology "in an ad hoc manner."

"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei said in a statement Thursday. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."

Amodei has been outspoken for years about the potential risks posed by unchecked AI technology, and has backed calls for safety and transparency regulations.

The company held firm to its position late Friday, writing: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

"We are deeply saddened by these developments," Anthropic said. "As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so."  

On Thursday, the eve of the military's deadline to reach a deal, the Pentagon's chief technology officer, Emil Michael, told CBS News that the Pentagon had made concessions, offering written acknowledgements of the federal laws and internal military policies that restrict mass surveillance and autonomous weapons.

"At some level, you have to trust your military to do the right thing," said Michael, who also noted, "We'll never say that we're not going to be able to defend ourselves in writing to a company."

Anthropic called that offer inadequate. A company spokesperson said the new language was "paired with legalese that would allow those safeguards to be disregarded at will." 
