Sam Altman, CEO of OpenAI, stated Friday that his company shares the same 'red lines' as rival Anthropic on military AI use. This support comes during a heated dispute between Anthropic and the Pentagon over limits on AI models like Claude. The Defense Department gave Anthropic until 5:01 p.m. ET today to drop its restrictions or risk losing a contract worth up to $200 million. Tensions have boiled over with threats to use old laws and blacklist the company.

Key Takeaways

  • OpenAI agrees with Anthropic's bans on using AI for domestic mass surveillance and fully autonomous weapons.
  • Pentagon demands AI firms allow use for 'all lawful purposes,' despite saying it won't cross those lines.
  • Altman criticized Pentagon threats to invoke the Defense Production Act against AI companies.
  • Over 200 Google and OpenAI workers signed a letter backing Anthropic's position.

Background

Anthropic built its AI model Claude with safeguards that block uses the company considers too risky. The main issues are domestic mass surveillance in the U.S. and weapons that kill without human sign-off. The Pentagon wants full access, arguing that federal law already prevents improper uses. But Anthropic won't budge.


This fight started months ago and grew under the Trump administration. AI firms including Anthropic, OpenAI, Google, and xAI all have Defense Department deals. Anthropic was cleared first for classified systems, which made it a top pick, and now the government worries one holdout could slow things down.

Workers inside these companies feel strongly too. More than 200 employees from Google and OpenAI put their names on an open letter Thursday urging leaders to stand with Anthropic. 'The Pentagon is trying to divide companies,' the letter says. Signers include more than 160 from Google and more than 40 from OpenAI; some stayed anonymous. The signers say they have no ties to other AI groups or political organizations.

Congress members have weighed in before. In February 2025, some pushed the government to ease up, and Google dropped its own long-standing ban on military AI work around then. But this clash feels different: it's public, it's loud, and it's about control.

Key Details

The Pentagon set a hard deadline of 5:01 p.m. ET Friday: Anthropic must allow Claude to be used for all lawful military purposes or lose the contract. Officials say they don't plan surveillance or autonomous weapons anyway, but they want no company-imposed limits.

The threats go further. The Defense Department could invoke the Defense Production Act, a Korean War-era law that forces companies to hand over technology for national security. Officials have also warned of labeling Anthropic a 'supply chain risk,' which could bar it from all government work.

Altman spoke out on CNBC Friday morning. He backs Anthropic's red lines but wants companies to help the military, as long as the law holds and those lines stay intact.

"I don't personally think the Pentagon should be threatening DPA against these companies," Altman said. He added that OpenAI trusts Anthropic on safety and support for troops.

In a note to staff Thursday night, Altman said OpenAI is talking to the Pentagon about getting its models into classified systems, but with bans on U.S. surveillance and unapproved autonomous weapons, according to someone who saw the note.

Anthropic CEO Dario Amodei fired back Thursday, referring to the Pentagon by its new name, the 'Department of War.' 'We cannot in good conscience accede,' he wrote. Anthropic accepts that the military calls the shots and says it has never blocked specific operations before. But domestic surveillance and fully autonomous weapons go too far, the company argues, because the technology can't yet handle them safely.

Pentagon undersecretary for research Emil Michael hit back on X, calling Amodei a liar with a God complex. 'He wants to control the US Military,' Michael wrote, adding that the department follows the law and won't bow to one tech firm.

Michael told CBS News that the public should trust the military to do the right thing, since its policies already ban those uses.

Worker Pushback

The open letter from staff adds heat. It warns that the Pentagon is pressuring Google and OpenAI separately to get one of them to give in. 'Set aside differences and unite,' it says. Signatures topped 200 by late Thursday, a small fraction of Google's workforce of nearly 200,000 and OpenAI's of under 10,000, but voices from inside still matter.

Experts call the standoff rare. Jerry McGinn, who runs a center on the defense industry at a Washington, D.C. think tank, notes that contractors don't usually dictate terms; negotiating every use case would be impractical. But AI is new and untested, so a public brawl like this makes sense.


What This Means

OpenAI's stance complicates things for the Pentagon. Altman says he trusts Anthropic and is glad it has helped troops. If Anthropic walks, OpenAI might fill the gap, but with its own red lines, which undercuts the government's push for unrestricted access.

Military AI use now tests company power against government needs. Firms want ethical limits; the Pentagon sees that as overreach. A win for Anthropic could set a precedent that other AI players follow, and xAI and Google are watching closely.

The workers' letter shows a split inside tech. It isn't just executives; rank-and-file staff are pushing on ethics. That pressure might sway deals, or harden lines.

The contracts at stake are huge: up to $200 million for Anthropic, and OpenAI has its own deal. Losing one hurts. But bending on principles could cost trust with users worldwide.

There's a broader ripple. AI is racing ahead, and the military worries about lagging rivals like China. The Pentagon needs top tools fast, but if firms dig in, delays follow, and national security hangs in the balance.

The law lags too. The Defense Production Act was written with factories in mind, not AI models, and debate is growing over updating the rules for this technology.

Some in Congress back the firms; others want full access. Watch the negotiations: deadlines have passed before, but talks continue.


Frequently Asked Questions

What are Anthropic's red lines on AI?
Anthropic bars its Claude model from U.S. domestic mass surveillance and from fully autonomous weapons without human control. It says current technology can't handle these uses safely.

Why is the Pentagon pushing back?
The Defense Department wants AI available for all lawful military uses. It says existing laws already block bad applications, so company-imposed limits are unnecessary.

Will OpenAI replace Anthropic if the contract ends?
OpenAI has its own deal, and Altman wants to deploy its models in classified systems, but with the same red lines intact.