OpenAI has signed its first defense contract with the Pentagon, gaining access to classified military networks to supply artificial intelligence tools to U.S. warfighters. The agreement came Friday night, hours after the Trump administration ordered the Defense Department to stop using Anthropic's AI systems and declared the company a supply-chain risk barred from all federal government contracts.
The timing is not coincidental. Anthropic had refused to remove safeguards on its Claude AI system that prevent use in domestic mass surveillance and fully autonomous weapons systems that can kill without human authorization. When the company would not back down, the Pentagon moved to OpenAI instead.
Sam Altman, OpenAI's CEO, announced the Pentagon deal with explicit assurances that the military will not use the company's AI for domestic mass surveillance or autonomous killing systems. The company detailed layered protections built into the contract to enforce these restrictions.
Those guarantees appear narrower than what the Pentagon demanded. While OpenAI committed to prohibiting mass surveillance and autonomous weapons, the contract permits "all lawful use" of ChatGPT on classified networks. The distinction matters: it means uses beyond the two stated prohibitions are permitted if they are legal under military and federal law.
OpenAI's AI tools will begin rolling out to U.S. military personnel this quarter.
Anthropic's refusal was principled and costly. The company sought contractual assurances that its technology would never be deployed for mass surveillance or autonomous weapons. Defense Secretary Pete Hegseth rejected those conditions, viewing them as constraints on military capability.
The Pentagon's decision to label Anthropic a supply-chain risk is extraordinary. It does not mean Anthropic broke any law or failed to deliver products. It means the company's values conflict with how the Trump administration wants to use AI in defense and intelligence operations.
Dean Ball, a former senior AI adviser to President Trump, called the decision "attempted corporate murder" and warned that companies like Nvidia, Amazon, and Google will face pressure to divest from Anthropic if the Defense Department's position hardens.
The split reflects a broader clash over AI governance. Anthropic built its business around the idea that AI companies should refuse certain military and surveillance applications. The Pentagon and Trump administration view such refusals as obstacles to national security. One side sees ethical guardrails as essential. The other sees them as constraints on warfighting capability.
For OpenAI, the Pentagon contract represents a fundamental shift. The company moves from purely commercial AI to military applications, gaining a direct relationship with the Defense Department and access to classified networks. For Anthropic, the ban from federal contracts threatens the company's ability to work with its largest potential customer, the U.S. government.
The next test comes when Congress examines whether the Pentagon's ban on Anthropic survives legislative scrutiny, or whether the company's restrictions on autonomous weapons and mass surveillance become a model other agencies adopt.