A federal judge in California has blocked the Pentagon from taking punitive action against Anthropic, a leading artificial intelligence company, after the department labeled it a supply chain risk and sought to sever government ties. The ruling, issued Thursday, found that the Pentagon’s actions infringed upon Anthropic’s constitutional rights, specifically its First Amendment and due process protections. The case highlights a growing tension between AI companies’ push for responsible development of the technology and the Defense Department’s desire for unfettered access to cutting-edge systems.
U.S. District Judge Rita Lin, in a stinging 43-page ruling, wrote that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” The decision temporarily halts implementation of the supply chain designation, but the Pentagon has one week to file an appeal. The legal battle centers on Anthropic’s refusal to grant the Department of Defense unrestricted access to its Claude AI model, particularly for use in autonomous weapons systems and mass surveillance programs.
The dispute began earlier this year when Anthropic implemented contractual guardrails limiting how its Claude model could be used. The company explicitly stated it did not want its AI systems employed in the development of autonomous weapons or for domestic mass surveillance, positions it argued were protected speech. Defense Secretary Pete Hegseth responded by designating Anthropic a supply chain risk in February, a label typically reserved for companies linked to foreign adversaries. President Donald Trump subsequently ordered federal agencies to cease using Anthropic’s products and to cut ties with any companies that did business with the firm.
A Pattern of First Amendment Challenges
This isn’t the first time Secretary Hegseth’s actions have faced legal scrutiny. Earlier this month, a federal judge in Washington, D.C., ruled that Hegseth violated the First Amendment rights of several reporters by implementing a restrictive new press policy, finding that it unduly limited access to information. Similarly, in February, another D.C. judge determined Hegseth infringed on the free speech rights of a Democratic senator who had urged U.S. service members to refuse illegal orders. These cases suggest a pattern of the Defense Department under Hegseth aggressively pushing the boundaries of its authority, often at the expense of constitutionally protected rights.
Anthropic applauded Judge Lin’s ruling, stating it was “grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.” A spokesperson for the company added, “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.” The company maintains its commitment to responsible AI development and believes its guardrails are essential to preventing misuse of the technology.
What Does ‘Supply Chain Risk’ Mean?
The “supply chain risk” designation carries significant weight. It effectively requires any company working with the military to demonstrate it does not utilize Anthropic’s products, creating a substantial barrier to entry and potentially jeopardizing hundreds of millions of dollars in contracts. Prior to this case, the label had been almost exclusively applied to companies with ties to foreign governments considered potential adversaries, making its application to a U.S.-based AI firm particularly unusual. Anthropic’s lawsuit argued the designation was a deliberate attempt to punish the company for its stance on AI ethics and to force it to comply with the Pentagon’s demands.
The Pentagon’s Position
The Department of Defense argued it needed “unfettered access” to Claude for “all lawful purposes,” particularly in wartime scenarios. Emil Michael, the Defense Department’s chief technology officer, told CNBC earlier this month, “You can’t have a company that has a different policy preference that is baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection.” This perspective underscores the Pentagon’s concern that ethical constraints built into AI systems could compromise military effectiveness. The department has emphasized the need for complete freedom to utilize AI systems without limitations, especially in high-stakes situations.
However, Judge Lin’s ruling directly challenged this justification, stating that the Pentagon’s actions “do not appear to be directed at the government’s stated national security interests.” She further noted that internal records indicated the supply chain risk designation was motivated by Anthropic’s “hostile manner through the press,” suggesting the Pentagon was retaliating against the company for publicly voicing its concerns.
What’s Next?
The Department of Defense is expected to file an appeal of Judge Lin’s ruling within the next week. A separate challenge by Anthropic to other authorities Hegseth invoked to make the supply chain risk designation remains pending before a federal court in Washington, D.C. The outcome of these legal battles will likely have significant implications for the future of AI development and its relationship with the military. The case raises fundamental questions about the balance between national security, free speech, and ethical considerations in the rapidly evolving field of artificial intelligence.
CNN has reached out to the Pentagon for comment and will update this story as it develops.
If you are interested in learning more about the ethical implications of artificial intelligence, resources are available from organizations like the Partnership on AI and the AI Now Institute.
