OpenAI signs Pentagon deal: Altman admits it looked "opportunistic"

In one of the most explosive controversies in AI history, OpenAI signed a $200 million contract with the Pentagon just hours after the Trump administration banned its rival Anthropic for refusing to yield on ethical boundaries. Sam Altman now admits the move "looked opportunistic and sloppy" and is announcing changes to the deal.

What exactly happened

On February 27, Defense Secretary Pete Hegseth gave Anthropic an ultimatum: allow unrestricted use of its AI models "for all lawful purposes" within the Pentagon's classified network. Anthropic refused, insisting on two hard limits: no mass surveillance of American citizens and no fully autonomous weapons systems.

The government's response was swift: Trump ordered all federal agencies to cease using Anthropic technology, and Hegseth designated the company a "supply chain risk to national security" — a label historically reserved for foreign adversaries like Huawei.

Hours later that same day, OpenAI announced it had closed a deal with the Pentagon to provide its models in classified networks. In my experience following the AI industry, I've never seen such a controversial move.

Conflict timeline

Date      Event
Feb 27    Hegseth gives Anthropic ultimatum. Anthropic refuses. Trump bans it.
Feb 27    OpenAI announces Pentagon contract hours later
Feb 28    OpenAI publishes blog "Our agreement with the Department of War"
Feb 28    Anthropic announces lawsuit, calls designation "legally unsound"
Mar 1     70+ OpenAI employees sign letter supporting Anthropic
Mar 1     Claude surges to #1 on App Store, overtaking ChatGPT
Mar 2     Workers from Google, IBM, Slack also sign the letter
Mar 3     Altman admits it was "opportunistic" and announces amendments

What the OpenAI contract includes

The deal expands a previous $200M contract (originally held by Anthropic) to include deployment in classified environments. OpenAI established three "red lines":

  • No mass domestic surveillance
  • No directing autonomous weapons systems
  • No high-stakes automated decisions (e.g., "social credit" systems)

However, legal experts noted that the contract merely restated existing law without adding independent contractual protections — exactly what Anthropic feared.

The backlash: employees rebel

The response was immediate and unprecedented:

  • 70+ OpenAI employees signed an open letter titled "We Will Not Be Divided" supporting Anthropic
  • Hundreds of workers from Google, IBM, Slack, Cursor, and Salesforce also signed
  • OpenAI research scientist Aidan McLaughlin posted that he doesn't think "this deal was worth it" (~500,000 views)
  • Chalk graffiti criticizing OpenAI appeared outside the company's San Francisco offices
  • Claude overtook ChatGPT as the #1 app on the US App Store
  • Anthropic's paid subscriptions have more than doubled since January

Altman responds and announces amendments

On Monday, March 3, Altman published an internal memo (later shared on social media): "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

The contract amendments include:

  • Explicit prohibition on surveillance of "U.S. persons and nationals" with reference to the Fourth Amendment
  • Intelligence agencies (NSA, DIA) excluded from access without separate contracts
  • Language locked to current laws: even if laws change in the future, usage must comply with standards at time of signing

How this affects you as a user

After testing both platforms extensively, here's what matters:

  • ChatGPT users: No functional change to the product. The military contract doesn't affect the civilian version
  • Claude users: Anthropic continues operating normally. Services are not affected by the government ban
  • Developers: Both APIs remain operational. The decision only affects federal government contracts

Frequently asked questions

Will OpenAI build AI weapons?

Per the contract, OpenAI's models cannot direct autonomous weapons. But critics point out that the language contains legal loopholes. The AI would be used for intelligence, logistics, and analysis — not directly in weapons.

Should I switch from ChatGPT to Claude?

That's a personal decision. I've been using both for a long time and each has distinct strengths. The important thing is that both civilian products work the same as before.

Will Anthropic shut down?

No. It has investors like Google and Amazon, and its user base doubled. The lawsuit against the government protects it legally.


Written by
Jesús García

Passionate about technology and personal finance. I write about innovation, artificial intelligence, investing, and strategies to improve your finances. My goal is to make complex topics accessible to everyone.
