In an unprecedented move in American tech history, President Donald Trump ordered all federal government agencies to immediately stop using Anthropic's artificial intelligence. The decision cancels over $200 million in federal contracts and marks the first time an American company has received a "supply chain risk" designation — a label historically reserved for foreign adversaries like Huawei.
## What happened between Anthropic and the Pentagon
The conflict started when Anthropic asked the Department of Defense for assurances that its chatbot Claude would not be used for mass surveillance of American citizens or in fully autonomous weapons systems. In my experience covering the AI industry, I've never seen a tech company stand up to the government like this.
The Pentagon responded that it would only use the technology lawfully, but insisted on unrestricted access. When Anthropic refused to budge, Defense Secretary Pete Hegseth escalated the situation by declaring the company a national security risk.
Anthropic CEO Dario Amodei responded publicly: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons." The company announced it will sue the Pentagon in court.
## Claude hits #1 on the App Store
What nobody expected was the public's reaction. Within 24 hours of the ban, Claude surged to the #1 free app on Apple's App Store in the United States, surpassing ChatGPT, TikTok, and Instagram. Whatever else it accomplished, the controversy clearly generated massive curiosity about Claude.
However, this unprecedented demand had consequences: on Monday, March 2, Claude experienced a global outage, with "elevated errors" reported by Anthropic. The issue affected user authentication, not the AI models themselves. The enterprise API remained operational.
## Conflict timeline
| Date | Event |
|---|---|
| February 2026 | Anthropic asks Pentagon for ethical use guarantees for Claude |
| Feb 27, 2026 | Deadline expires. Anthropic refuses to comply |
| Feb 28, 2026 | Trump orders all government agencies to stop using Anthropic |
| Feb 28, 2026 | Claude rises to #1 on the App Store |
| Mar 1, 2026 | Anthropic announces lawsuit against the Pentagon |
| Mar 2, 2026 | Claude suffers global outage due to massive demand |
## How this affects you as a user
- Personal and business users: No change. The ban only affects the US federal government
- Developers using the API: The API remains operational. Anthropic confirmed enterprise services are unaffected
- Current outage: It's temporary, caused by the spike in new users. Anthropic is scaling infrastructure
## Common issues right now
### Claude won't let me log in
This is an authentication issue due to massive demand. Solution: wait 15-30 minutes and try again. The API works normally.
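If your own integration is hitting the same transient "elevated errors," the standard remedy is the same as the advice above, automated: retry with exponential backoff. Here is a minimal sketch of that pattern; `call_with_backoff` and the `flaky` stub are hypothetical helpers for illustration, not anything Anthropic ships.

```python
import random
import time


def call_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Retry a flaky call with exponential backoff plus jitter.

    Generic pattern for riding out transient service errors
    (e.g. overload responses during a traffic spike). Hypothetical
    helper -- not part of any official SDK.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Sleep 1s, 2s, 4s, ... plus jitter to avoid a thundering herd.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))


# Demo: a stub that fails twice with an overload-style error, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("529 overloaded")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
print(result)  # "ok" after two retries
```

The jitter term matters: if thousands of clients retry on a fixed schedule after an outage, they hammer the service in synchronized waves; randomizing the delay spreads the load.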
### Will Anthropic shut down?
No. The ban only applies to the federal government. Anthropic has investors like Google and Amazon, plus millions of private users. The company plans to sue the government.
### Is my data on Claude at risk?
No. The "supply chain risk" designation is administrative — it doesn't mean the company was hacked or that there are security issues with its products.
## The bigger picture
Having used both ChatGPT and Claude for a while, I think this situation raises fundamental questions about who controls AI and which ethical boundaries are negotiable. Anthropic was founded specifically to develop safe AI — it's literally the company's reason for existing.
This case could set legal precedent on whether the government can force AI companies to remove their safety guardrails. Other giants like OpenAI have remained silent, though sources indicate they're watching with concern.
## Additional resources
- Anthropic official site — Official statements
- Claude Status — Real-time service status
- Full conflict timeline — TechPolicy.Press
- Dario Amodei CBS Interview — Anthropic's red lines