The Day Anthropic Made History
Anthropic Said No
No to mass surveillance. No to autonomous weapons. Even when it cost them everything.
Watch
Dario Amodei speaks to CBS News.
In Their Own Words
What Anthropic’s CEO actually said.
What Happened
The hardest ethical stand in AI history.
The Contract
Anthropic held a $200 million Pentagon contract to deploy Claude for national security, proactively serving the Department of War and the intelligence community.
The Demand
The Pentagon demanded unrestricted access — including for mass surveillance of American citizens and fully autonomous weapons with no human in the loop.
The Stand
Anthropic drew two red lines and said no. Not to serving the military, but to two specific uses that threaten democratic values. In doing so, they forfeited hundreds of millions in revenue.
The Retaliation
Trump banned Anthropic from all government use. Hegseth designated the company a "supply chain risk to national security," a label normally reserved for adversarial nations like China.
The Hypocrisy
Hours later, OpenAI struck a deal with the Pentagon. With the exact same red lines. The Pentagon accepted from OpenAI what they punished Anthropic for demanding. This isn’t a policy dispute. It’s punishment for saying no first.
The Official Response
Straight from Anthropic.
Take Action
Add Your Name
Show policymakers that the world stands with responsible AI.