Pentagon labels Anthropic a national security supply chain risk
The Pentagon officially labeled Anthropic (makers of Claude) a defense supply chain risk. The irony: Claude is already actively used in US military operations in Iran.
OpenAI signed a $200M Pentagon contract hours after Trump banned Anthropic. Sam Altman admits it looked "opportunistic and sloppy" and announces contract amendments. 70+ OpenAI employees signed a letter supporting Anthropic.
Trump canceled $200M in Anthropic contracts and declared the company a "supply chain risk." In response, millions downloaded Claude, pushing it to #1 on the App Store before a global outage hit.
President Trump banned Anthropic products across the federal government after the company refused unrestricted military access to its AI models. OpenAI signed a Pentagon deal hours later.
Anthropic reveals that DeepSeek, Moonshot AI, and MiniMax ran industrial-scale campaigns to copy Claude using 24,000 fraudulent accounts and 16 million exchanges.
Anthropic launches Claude Sonnet 4.6 with Opus-level performance at a fraction of the cost. Users prefer it 70% of the time over the previous version.
Learn how to use Claude Code, Anthropic's AI terminal assistant, to code, debug, and automate like an expert developer.
The Pentagon used Anthropic's Claude in the Maduro operation in Venezuela. Anthropic objects to military use and the $200M contract could be canceled.
Anthropic spent millions on Super Bowl ads mocking OpenAI with the tagline "Ads are coming to AI. But not to Claude." The results were massive: 11% user growth and a top-10 App Store ranking.
Anthropic releases Claude Opus with advanced autonomous agent capabilities that can execute complex tasks with minimal human supervision.