Trump orders federal agencies to stop using Anthropic's AI technology
Washington — President Trump announced Friday that he is ordering all federal agencies to "immediately" stop using Anthropic's artificial intelligence technology, as the company neared a Pentagon deadline to drop its push for guardrails over the military's use of its AI.
"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Mr. Trump wrote on Truth Social. "We don't need it, we don't want it, and will not do business with them again!"
The president said he will give certain agencies that use Anthropic's technology, like the Department of Defense, six months to phase out its products, and threatened to take additional action against the company if it does not assist during that period.
"Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," he wrote.
Mr. Trump attacked the company as a "Radical Left AI company run by people who have no idea what the real World is all about."
About an hour and a half after Mr. Trump's Truth Social post, Defense Secretary Pete Hegseth followed through on his promise to designate Anthropic a supply chain risk.
"I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth wrote. "Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final."
In an exclusive interview with CBS News Friday evening, Anthropic CEO Dario Amodei called the government's actions "retaliatory and punitive."
He said the company had not yet received official notification from the Pentagon and would challenge the supply chain designation in court. But he also left the door open for a future agreement.
"We have wanted to strike a deal since the beginning," Amodei said, adding, "We are still interested in working with them as long as it is in line with our red lines."
Anthropic said in a statement that the Pentagon's action "would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government." The company wrote that Hegseth doesn't have the legal authority to ban military contractors from doing business with Anthropic, since a risk designation would only apply to contractors' work with the Pentagon.
Anthropic and the Pentagon have been at odds over the military's use of its AI model, Claude, and the safeguards that the company wanted in place over the use of the technology. The Pentagon insisted Anthropic agree to give the military unrestricted access to its AI model, and Hegseth had set a deadline of 5:01 p.m. Friday for it to agree to drop its guardrails.
Chief Pentagon spokesman Sean Parnell said Thursday that the department required the ability to use Anthropic's AI model "for all lawful purposes."
Anthropic had asked the Defense Department to agree to certain limits on the use of its model, including a restriction against using Claude to conduct mass surveillance of Americans. The company also sought to ensure Claude wouldn't be used by the Pentagon for final targeting decisions in military operations without any human involvement. A source familiar with the matter said Claude is not immune to hallucinations and, without human judgment in the loop, is not reliable enough to avoid potentially lethal mistakes such as unintended escalation or mission failure.
Anthropic was awarded a $200 million contract by the Pentagon last July to develop AI capabilities that would advance national security. It is currently the only AI company with its model deployed on the Pentagon's classified networks, through a partnership with data analytics company Palantir. But a senior Pentagon official told CBS News that Grok, the model owned by Elon Musk's xAI, could be used in a classified setting.
Later on Friday, OpenAI CEO Sam Altman said his company "reached an agreement with the Department of War to deploy our models in their classified network."
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman wrote, adding that OpenAI is asking the Defense Department "to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept."
Increasingly heated dispute over access and guardrails
In an interview with CBS News on Thursday, the Pentagon's chief technology officer, Emil Michael, said the military "made some very good concessions" in order to reach a deal with Anthropic. The Pentagon offered to "put it in writing that we're specifically acknowledging" federal laws that restrict the military from surveilling Americans, he said, and offered language "specifically acknowledging these policies that have been in place for years at the Pentagon regarding autonomous weapons."
"At some level, you have to trust your military to do the right thing," Michael said.
But an Anthropic spokesperson said that the new contract language it received from the Pentagon "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."
"New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will," the company said.
In his own statement Thursday, Amodei said that the threats from the Defense Department would do nothing to change its position on the need for guardrails around the use of its AI systems.
"Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," he said. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required."
Michael lashed out at the CEO in a social media post, calling Amodei a "liar" with "a God-complex."
Mr. Trump's announcement targeting Anthropic was met with pushback from Democratic Sen. Mark Warner of Virginia, the vice chair of the Senate Intelligence Committee. Warner accused the president and Hegseth of "bullying" the company to deploy "AI-driven weapons without safeguards," and said it should "scare the hell out of all of us."
"The president's directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations," he said in a statement.