Anthropic vs Pentagon: What the Technical Evidence Actually Shows

The Anthropic-Pentagon lawsuit just got interesting. Two sworn declarations filed late Friday pull back the curtain on what’s really happening here, and if you’re building AI systems or thinking about working with government clients, you need to understand what’s at stake.

I’ve been following this since the Trump administration cut ties with Anthropic in February. At the time, it looked like a straightforward policy disagreement about military AI use. Now we’re seeing something more complicated: a dispute where technical reality and political narrative have diverged so far that one side is arguing about capabilities that literally don’t exist in the system architecture.

The Kill Switch That Isn’t There

Thiyagu Ramasamy’s declaration is where things get technical. He spent six years at AWS deploying AI in classified environments before joining Anthropic, so he knows exactly how these systems work in practice.

His key point: once Claude is deployed in a government air-gapped environment, Anthropic has zero access to it. No remote connection. No kill switch. No way to push updates without the Pentagon explicitly approving and installing them. The government’s argument that Anthropic could interfere with military operations mid-mission is based on a technical architecture that simply doesn’t match reality.

This matters because the Pentagon’s national security risk designation hinges in part on this theoretical vulnerability. But if the deployment model makes that vulnerability impossible, what are we actually talking about?

I think about this from a developer perspective. When you ship software to a client who runs it in their own infrastructure, you don’t have magic backdoor access. That’s not how air-gapped systems work. The fact that this even needs to be explained in a legal declaration suggests someone on the government side either doesn’t understand the technical setup or chose not to care.
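To make the analogy concrete, here’s a minimal sketch of what a manual update path looks like in a generic air-gapped deployment. To be clear, this is a hypothetical illustration, not Anthropic’s actual tooling or the Pentagon’s actual process: the vendor ships an artifact, the operator verifies its checksum against a value received out of band (say, a printed delivery manifest), and nothing installs without that explicit step.

```python
import hashlib
import shutil
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash the update artifact so it can be checked against an
    out-of-band value supplied by the operator."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def install_update(artifact: Path, expected_digest: str, install_dir: Path) -> None:
    """Install only if the operator-supplied digest matches.
    There is no network call anywhere in this path: the artifact
    arrives on approved media, and the operator runs this by hand."""
    if not artifact.exists():
        sys.exit(f"artifact not found: {artifact}")
    actual = sha256_of(artifact)
    if actual != expected_digest:
        sys.exit(f"digest mismatch: refusing to install ({actual})")
    install_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(artifact, install_dir / artifact.name)
    print(f"installed {artifact.name} into {install_dir}")


if __name__ == "__main__":
    # Hypothetical paths and digest; the point is that the approval
    # step lives with the operator, not the vendor.
    install_update(
        artifact=Path("model-update.tar.gz"),
        expected_digest="0" * 64,  # replace with the vendor-published value
        install_dir=Path("/opt/models"),
    )
```

The design point is that the trust decision happens on the operator’s side of the air gap. The vendor can publish updates, but it cannot make anything execute, which is exactly the property Ramasamy describes.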

The Negotiation That Never Happened

Sarah Heck’s declaration is even more striking. She was in the room for the February meeting between Dario Amodei and Defense Secretary Hegseth. Her statement directly contradicts the Pentagon’s claim that Anthropic wanted veto power over military operations.

She says that concern was never raised during months of negotiations. It appeared for the first time in court filings, which means Anthropic had no opportunity to address it before being designated a national security threat.

Then there’s the email. On March 4, the day after the Pentagon finalized its supply-chain risk designation, Under Secretary Emil Michael emailed Amodei saying they were “very close” on autonomous weapons and mass surveillance, the two issues the government now cites as evidence Anthropic is dangerous.

Read that timeline again. The designation was already done. Then Michael says they’re nearly aligned on the exact issues supposedly justifying the designation. Then he publicly says there’s no negotiation happening and tells CNBC there’s “no chance” of renewed talks.

If you’ve ever dealt with enterprise sales or government contracts, you recognize this pattern. This is what negotiation-as-pressure-tactic looks like. I’m not saying that’s definitely what happened here, but Heck’s declaration lays out the facts in a way that makes the question unavoidable.

What This Means for AI Companies

The broader implication here is about precedent. This is the first time the supply-chain risk designation has been used against an American company. If it sticks, it creates a template for how the government can pressure AI companies: declare them a national security threat based on technical claims that may not match the actual system architecture, then use that designation as leverage.

Ramasamy also addresses the foreign nationals angle. He notes that Anthropic employees have undergone the same security clearance vetting required of anyone granted classified access, and claims Anthropic may be the only AI company whose cleared personnel actually built models for classified environments. That’s a detail worth sitting with if you’re thinking about the broader “national security risk” narrative.

The government’s position is that Anthropic’s refusal to allow all lawful military uses is a business decision, not protected speech, and therefore can’t be First Amendment retaliation. Anthropic argues it’s being punished for publicly stated views on AI safety. The technical evidence matters here because it shapes whether the government’s stated rationale holds up or looks like post-hoc justification.

The Tuesday Hearing

Judge Rita Lin will hear arguments on Tuesday. What I’ll be watching for is whether the court engages with the technical details or treats this as purely a First Amendment case. The air-gapped deployment architecture isn’t some minor technical footnote. It’s central to whether the government’s security concerns are even possible.

If Anthropic can’t access the systems, can’t see what users are typing, can’t push updates without explicit Pentagon approval, then the “operational veto” theory falls apart. At that point, you’re left asking what the designation was really about.

I don’t know how this ends, but the gap between the Pentagon’s public statements and what these declarations describe is wide enough that someone’s version of events is going to look very bad when this is over.
