By all accounts, the meeting between Dario Amodei and Pete Hegseth did not go well.
Hegseth had insisted on meeting in person in Washington with the CEO of Anthropic, the San Francisco-based artificial intelligence firm whose Claude models had quietly made their way onto the military's classified networks. The stated topic was a contract. The subtext was closer to an ultimatum. The two men left the room with nothing settled. In a statement released two days later, Amodei declared that his company would "rather not work with the Pentagon" than consent to uses of its technology that could "undermine, rather than defend, democratic values." The following day, after Hegseth designated the company a supply chain risk, a label the U.S. government has traditionally reserved for Chinese telecom firms and other foreign adversaries, President Trump ordered federal agencies to cease using Anthropic's products entirely.
It all happened fast, and the shockwaves are still moving through the technology sector.
| Detail | Information |
|---|---|
| Company | Anthropic — AI safety-focused company; headquartered in San Francisco, CA; founded 2021 |
| CEO & Co-founder | Dario Amodei — former VP of Research at OpenAI; co-founded Anthropic with sister Daniela Amodei and others |
| Core Product | Claude — a large language model used by government agencies, enterprises, and consumers |
| Pentagon Relationship | Providing AI to U.S. defense and intelligence agencies since late 2024 via Palantir; signed $200M DOD contract in July 2025 |
| Dispute Trigger | Pentagon demanded Anthropic accept “any lawful use” of Claude, including mass domestic surveillance and fully autonomous weapons |
| Anthropic’s Red Lines | Refused to allow Claude for: (1) mass domestic surveillance of U.S. citizens, (2) fully autonomous lethal weapons systems |
| Pentagon Response | Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk” — a designation previously reserved for foreign adversaries |
| Presidential Action | President Trump ordered federal agencies to “immediately cease” using Anthropic’s technology (late February 2026) |
| Threats Made | Pentagon threatened to invoke the Defense Production Act to compel Anthropic’s compliance |
| Anthropic’s Legal Move | Filed lawsuit in federal district court, San Francisco, alleging retaliation for exercising First Amendment rights; estimates hundreds of millions in losses |
| Silicon Valley Response | 37 top AI researchers filed amicus brief supporting Anthropic, including Google Chief Scientist Jeff Dean and researchers from OpenAI and DeepMind |
| OpenAI Contrast | Sam Altman announced DOD deal hours after Anthropic’s ban — with the same guardrails Anthropic had demanded |
| Reference | BBC News — Anthropic boss rejects Pentagon demand to drop AI safeguards |
On the surface, the disagreement sounds like a failed contract negotiation. The specifics, though, matter. Anthropic's two declared red lines were narrow and concrete: Claude could not be used for mass domestic surveillance of Americans, and it could not power fully autonomous weapons systems, machines that make lethal decisions without a human in the loop. These were not abstract philosophical objections. In a letter unusually direct for a CEO in the middle of a federal contract dispute, Amodei wrote that current AI systems "are simply not reliable enough to power fully autonomous weapons" and that mass surveillance tools can assemble "scattered, individually innocuous data into a comprehensive picture of any person's life — automatically and at massive scale."
The Pentagon's position was that these concerns were already addressed by existing law and military policy, and that Anthropic should accept an "all lawful purposes" standard rather than write its own restrictions into a government contract. Anthropic countered that the new contract language was "paired with legalese that would allow those safeguards to be disregarded at will." The two sides were, in the end, nowhere near agreement.
This is where the story becomes genuinely hard to read clearly. Just hours after the Trump administration severed ties with Anthropic, Sam Altman of OpenAI announced that his company had struck a deal with the Pentagon to run its models on the military's classified network. The agreement contained restrictions on autonomous weapons and mass surveillance, essentially the same safeguards Anthropic had been requesting and the Defense Department had just spent weeks rejecting. It was a startling contrast. One interpretation is that the Pentagon's stance was less about the content of the restrictions than about who had the power to impose them. Another is that OpenAI was willing to compromise on the precise wording while Anthropic was not. Which reading is more accurate remains an open question.
Watching all of this, it is hard to avoid the sense that a line has been crossed that will be difficult to uncross. The "supply chain risk" designation is a significant administrative step: it bars Anthropic not only from doing business with the Pentagon but from working with the entire network of contractors, suppliers, and partners surrounding the Department of Defense. Dean Ball, one of the lead authors of the White House's own AI action plan, called it "attempted corporate murder" in a public blog post, a startling statement from someone inside that ideological ecosystem. Thomas Wright of the Brookings Institution put Hegseth's implied message simply: if you cooperate with us and we disagree, we will either blacklist and destroy your business through the supply chain designation or partially nationalize your company through the Defense Production Act. That message does not encourage other tech firms to negotiate with the federal government in good faith.
The reaction from the broader tech community is noteworthy. Thirty-seven AI researchers, including Google Chief Scientist Jeff Dean and researchers from OpenAI and Google DeepMind filing in their individual capacities, signed an amicus brief supporting Anthropic's lawsuit. Alignment like this across rival organizations is rare, and it suggests how the research community perceives the stakes of the conflict. The question being litigated is not only whether Anthropic was treated fairly. It is whether any American technology company can place safety restrictions on the use of its products without those restrictions being treated as insubordination.
Less than a decade ago, Google's own employees forced the company to walk away from Project Maven, a Pentagon contract, because they would not allow their work to be used for drone targeting. At the time, it was framed as tech workers rebelling against the military. The Anthropic dispute is different: the decision came from the company's leadership, who then publicly accepted the consequences. In many respects, the industry has moved away from that earlier culture of resistance.
The mood in Silicon Valley has shifted from protest to engagement, and OpenAI, Microsoft, and Palantir have all deepened their defense ties in recent years. Anthropic occupies an unusual position in that landscape. Founded by former OpenAI researchers who left specifically over concerns about AI safety and responsible development, it is a company that made a sincere effort to build its values into its business model and is now discovering what happens when those values collide with the executive branch.
The case is now before a federal court in San Francisco. The legal arguments are untested and the outcome uncertain. What is certain is that the fight has already damaged Washington's relationship with the technology sector in a way no single ruling can undo.