The Pentagon used Anthropic's Claude in the operation to capture former Venezuelan president Nicolás Maduro. It later emerged that Anthropic's leadership asked Palantir, which had provided the AI support for the operation, whether their model had been used in the raid. According to a Pentagon official, "the question was framed as if Anthropic disapproves."
Now the military is threatening to terminate its $200M contract with Anthropic. The reason: the company refuses to lift restrictions on using Claude for autonomous weapons systems and domestic mass surveillance.
The Pentagon is demanding that four major AI labs provide access to their technologies "for all lawful purposes" — including weapons development, intelligence, and combat operations. Anthropic has proven the most resistant of the four. CEO Dario Amodei publicly stated that he would not let the US become "an autocracy through mass surveillance."
Things are uneasy inside the company as well: last week Anthropic's head of safety resigned with a warning that "the world is in danger." Employees are unhappy with the military collaboration, and some are reportedly ready to go on strike.
The paradox: the company that cares most about AI safety built the very model the Pentagon trusts with its most sensitive operations. And now it is being punished for asking how exactly that model is being used.