By Heather Somerville and Amrith Ramkumar
SAN FRANCISCO -- A federal judge on Tuesday said the U.S. government appeared to be punishing Anthropic, banning the artificial-intelligence company in retribution for bringing its contracting dispute with the Pentagon into public view.
"It looks like an attempt to cripple Anthropic," U.S. District Judge Rita F. Lin of the Northern District of California said during a court hearing. Such actions "of course would be a violation of the First Amendment."
The hearing is part of a bid by Anthropic to get relief from the Trump administration's ban on government use of the company's AI models. Attorneys for the Silicon Valley company and the U.S. government made arguments before Lin, who hasn't yet ruled on the matter but expressed serious doubts about the Trump administration's actions in her opening remarks.
Lin, who was appointed to the federal bench by President Joe Biden, said that after Anthropic publicly disclosed its dispute with the Defense Department, the administration "seems to have a pretty big reaction to that" and its actions "don't seem to be really tailored to a stated national security concern."
"It looks like defendant went further than that because they were trying to punish Anthropic," Lin said.
Anthropic this month sued the U.S. government to halt its designation of the company as a supply-chain risk, a move that has pitted the Trump administration against one of the industry's leading artificial-intelligence labs. The dispute stems from a disagreement over how Anthropic's AI tools can -- and cannot -- be used in national security applications.
Lin asked for more evidence before making her decision on Anthropic's request for an injunction.
Anthropic has said the designation as a supply-chain risk -- a ban usually imposed only on Chinese entities -- is government overreach and that the administration has failed to justify its actions. The designation is normally applied to foreign adversaries that pose serious security threats, defense and AI experts have said.
"This is something that has never been done with respect to an American company," said Michael Mongan, an attorney with the firm WilmerHale who is representing Anthropic.
President Trump in late February also directed all federal agencies to stop working with the company. Defense Secretary Pete Hegseth posted on social media that contractors and suppliers that work with the military must cease using Anthropic.
Anthropic said the government's actions already have cost it hundreds of millions of dollars in canceled contracts and aborted customer agreements. The company projects that it will lose billions of dollars in revenue this year, which will also make it more difficult to raise money from investors.
Even as the closely watched legal battle moves into the courtroom, Anthropic's models are currently in use in the war with Iran for targeting and planning airstrikes. How the spat between the Defense Department and one of the leading frontier AI labs shakes out is likely to have ramifications for the relationship between the Pentagon and a Silicon Valley that has only recently warmed to the business of war.
The government has argued Anthropic overstepped by seeking to enforce explicit limitations on how the Pentagon can use its Claude models. Anthropic sought assurances that its models wouldn't be used in fully autonomous weapons or for domestic surveillance. The Pentagon countered that such prohibitions were unnecessary because military policies or laws already restricted such uses.
Deputy Assistant Attorney General Eric Hamilton acknowledged in court that the Defense Department didn't follow protocol for applying the supply-chain risk designation, which includes briefing Congress and exploring less intrusive alternatives.
The administration has said its actions were justified by concerns that Anthropic could in the future exercise its own corporate discretion over national security matters, changing its models or disabling its technology in the middle of a military operation.
"It's because of the risk of future subversion or sabotage," Hamilton said.
Anthropic has argued such sabotage is technically impossible because the company doesn't have access to the models after they are deployed.
In court filings, Anthropic disclosed an email sent by Pentagon official Emil Michael on March 4 -- five days after the president and defense secretary declared in social-media posts that the U.S. government wouldn't do business with Anthropic -- telling Chief Executive Dario Amodei that they "are very close here" to an agreement to keep Anthropic's tech in the Pentagon.
"I hope this work as I am running out of time," wrote Michael, who is the undersecretary for research and engineering. The Pentagon letter informing Anthropic of its supply-chain designation is dated a day earlier, March 3.
Anthropic argued in a filing that it was "inconceivable" that Michael's active negotiations "reflected genuine concerns that Anthropic would sabotage military operations."
Write to Heather Somerville at heather.somerville@wsj.com and Amrith Ramkumar at amrith.ramkumar@wsj.com
(END) Dow Jones Newswires
March 24, 2026 19:51 ET (23:51 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.