The Ban That Changed Everything
If you use federal services—from visa processing to military intelligence—your government just switched AI suppliers. On Friday, President Trump ordered every federal agency to immediately stop using Anthropic's technology after the AI company refused the Pentagon's demand to remove safety guardrails. Hours later, OpenAI announced it had reached an agreement to provide technology for classified networks, though no contract has been signed. Some observers say the speed of that pivot reveals what this fight was really about: control over which company gets to power America's most sensitive operations.
Anthropic, the San Francisco startup behind the Claude AI model, had drawn a line in the sand. The Pentagon demanded unrestricted military use of Claude; Anthropic refused, warning that such access could enable mass surveillance of Americans and autonomous weapons systems. The Pentagon countered that it operates within legal bounds and has procedures to determine appropriate use. CEO Dario Amodei said the company "cannot in good conscience accede" to those terms. Defense Secretary Pete Hegseth responded by designating Anthropic a "supply chain risk," a label historically reserved for foreign adversaries like China's Huawei, and Trump followed with the government-wide order to stop using Anthropic's technology.
The designation matters because, Anthropic says, it reaches well beyond the Pentagon: it would bar any military contractor from using Claude for any purpose, even commercial work unrelated to defense. Anthropic contends that 10 U.S.C. § 3252 does not authorize a designation of that scope, and the company has vowed to sue.
Why the Pentagon Wanted Unrestricted Access
The military's position was straightforward: once the government buys a tool, the military decides how to use it, not the company that made it. Pentagon officials, including undersecretary Emil Michael, argued they needed the ability to deploy AI for "all lawful purposes" without having to negotiate each use case with a private company. They contended there are gray areas around what constitutes mass surveillance or autonomous weapons, and that it's unworkable to litigate individual cases.
But Anthropic and others in Silicon Valley saw it differently: "lawful" does not mean safe. The company worried that the Pentagon could legally collect and analyze vast amounts of publicly available data—geolocation, web browsing, financial information from data brokers—to find patterns about Americans. In Anthropic's view, that would amount to mass surveillance in all but name, lawful or not, and the company said no.
Emil Michael, the Pentagon official steering the negotiations, grew frustrated. He called Amodei a "liar" with a "God complex" who was "putting our nation's safety at risk." Trump criticized Anthropic harshly, accusing the company of poor judgment.
OpenAI Steps In—With the Same Red Lines
What happened next was extraordinary. Sam Altman, OpenAI's CEO, announced his company would draw the same red lines Anthropic had refused to abandon: no mass surveillance, no fully autonomous weapons. He wrote to staff that "regardless of how we got here, this is no longer just an issue between Anthropic and the Pentagon; this is an issue for the whole industry."
Then OpenAI struck a deal with the Pentagon. The difference: Altman negotiated terms the Pentagon accepted. OpenAI would maintain its safety principles, including restrictions on mass surveillance and autonomous weapons, and the Pentagon agreed those principles align with law and policy, though sources indicate some uncertainty remained about whether the Pentagon would still seek certain data collection capabilities. OpenAI would place researchers with security clearances to monitor how the military uses the technology, and would confine its models to cloud systems rather than edge deployments such as autonomous weapons. The Pentagon said yes.
Altman acknowledged the optics looked bad. "It may not 'look good' for us in the short term," he wrote. But OpenAI negotiated an agreement the Pentagon accepted, while Anthropic had refused the Pentagon's original demands for unrestricted access.
Silicon Valley Rallies, Then Fractures
The initial response from tech workers was unified. Hundreds of employees at Google and OpenAI signed a letter backing Anthropic's position, urging their own executives to resist Pentagon pressure. Senator Elizabeth Warren called the administration's tactics "extortion," while the administration characterized the action as necessary for national security. Democratic Representative Ro Khanna said "good for Anthropic" for refusing to bend.
But that solidarity fractured the moment OpenAI announced its deal. Observers split over whether OpenAI's approach was pragmatism, securing the same red lines through negotiation rather than confrontation, or a form of capitulation.
What Happens Next
Anthropic has six months to wind down its Pentagon operations. The military must find alternatives to Claude, which had been integrated into some classified systems and was used in recent operations including the capture of Nicolás Maduro. Palantir, the defense contractor that uses Claude for sensitive military work, will need a new supplier.
Elon Musk's xAI agreed to let the military use its Grok model for "all lawful purposes," but defense officials say it's not a like-for-like replacement for Claude. Google's Gemini is being discussed for classified systems, though hundreds of Google employees have signed the same letter backing Anthropic's position.
The real question is whether Anthropic's court challenge succeeds. The company argues the supply chain risk designation exceeds the Pentagon's legal authority under 10 U.S.C. § 3252, which it reads as limiting such designations to Pentagon contracts only. The Pentagon maintains it has authority to protect national security by restricting contractors' use of technology it deems a supply chain risk. If a judge agrees with Anthropic, the ruling could limit how far the government can reach into private companies' business decisions. If the Pentagon prevails, Anthropic loses not just the Pentagon contract, valued at up to $200 million, but also its business with every military contractor working on Pentagon contracts.