Defense Secretary Pete Hegseth has given Anthropic until Friday to accept the Pentagon's terms for military use of its AI model or lose its defense contract. The ultimatum followed a Tuesday meeting between Hegseth and Anthropic CEO Dario Amodei at the Pentagon, which centered on the department's demand that Anthropic remove safety restrictions from its model for military use.
The deadline represents a collision between national security demands and AI safety principles. Anthropic has built its reputation on creating AI systems with built-in guardrails, arguing that these safeguards prevent misuse. The Pentagon contends that the same restrictions limit the military applications it needs, and it wants them removed for defense department use.
This isn't a quiet contract dispute between a vendor and a customer. If Anthropic agrees to the Pentagon's terms, it will have accepted government conditions that it previously resisted. If it refuses, it will lose the Pentagon contract.
The Pentagon has stated that military applications require unfettered access to AI capabilities. Anthropic has not publicly detailed its position, though its resistance suggests concern that removing safeguards would create unacceptable risks. Neither side has specified which terms the Pentagon is demanding or which military applications require them.
Hegseth's Friday deadline creates immediate pressure. Anthropic has less than three days to decide between its founding principle of building safe AI systems and maintaining a government contract, with no middle ground: comply with the Pentagon's demands or watch the contract evaporate.
If Anthropic agrees to the Pentagon's terms by Friday, the company will have modified its safety restrictions and signaled willingness to adjust its approach for government contracts. If it refuses, the Pentagon will follow through on its threat, and Anthropic will lose this defense contract.
The stakes extend beyond one company. Anthropic's relationship with the Pentagon shapes how quickly AI enters American military systems, and other AI developers watching this confrontation will draw conclusions about whether safety features are negotiable when national security officials demand their removal. The outcome could reshape how aggressively AI companies pursue safeguards across their product lines, and whatever Anthropic decides in the next three days will likely influence how every other AI company negotiates with federal agencies for years to come.