The Pentagon used Anthropic's artificial intelligence model Claude during the military operation that led to the capture of Venezuelan President Nicolas Maduro in January 2026, according to a Wall Street Journal report. The revelation has sparked a public dispute between Anthropic and the Department of Defense that could end with the cancellation of a $200 million contract.
What exactly happened
According to the report, Claude was deployed through Anthropic's partnership with Palantir Technologies, whose tools are widely used by the Department of Defense and federal security agencies. The AI model was used within classified military systems during the planning and execution of the Venezuela operation.
What makes this case especially significant is that Claude is currently the only AI model available in certain classified Pentagon systems. Anthropic was the first AI company to have its model used in classified operations by the Department of Defense.
Why Anthropic objects
Anthropic's usage policies explicitly prohibit using its AI to:
- Facilitate violence
- Develop weapons
- Conduct surveillance
Upon learning how Claude was used in the Maduro operation, Anthropic questioned the Pentagon about the specific use of its software. This inquiry was not well received by government officials.
The $200 million contract at risk
The dispute has escalated to the point where the Trump administration is considering canceling Anthropic's contract, valued at up to $200 million and awarded last summer. A Pentagon official stated: "Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward."
Why this matters for the future of AI
This case sets a crucial precedent for the entire artificial intelligence industry:
- For AI companies: When you sell your technology to the government, do you lose control over how it's used? Where's the line?
- For governments: Can tech companies veto how the military uses tools they already purchased?
- For users: If AI is used in military operations, what implications does it have for the AI we use in our daily lives?
What's next
The situation is still developing. If the contract is canceled, the Pentagon would need to turn to alternatives such as OpenAI's GPT models or Google's Gemini for its classified systems. Anthropic, for its part, would lose one of its largest contracts and a significant source of revenue.
The debate about the ethical limits of military AI use is just beginning. This case will be studied for years as the first major confrontation between an AI company and a government over control of how technology is used.