Pentagon Issues Ultimatum to Anthropic Over AI Model Access and Safeguards

The Pentagon has issued an ultimatum to AI firm Anthropic, demanding the removal of safeguards from its large language models to permit "any lawful use" by the military, including potential applications in mass domestic surveillance and fully autonomous weapons. If Anthropic does not comply by a specified deadline, the Pentagon would designate the company a "supply chain risk" and invoke Title I of the Defense Production Act (DPA) to compel compliance. The demand, outlined in a January memorandum framing sustained U.S. global AI dominance as a national security imperative, marks an unprecedented use of the DPA against a domestic company to dictate product features. The Pentagon asserts that Anthropic's refusal jeopardizes critical military operations, even though Anthropic's Claude models are reportedly already deployed extensively in classified military systems, including recent U.S. operations.

Anthropic, through CEO Dario Amodei, has rejected the demands. The company affirms a deep commitment to U.S. national security, pointing to proactive model deployments and economic measures against adversaries, but maintains red lines against mass domestic surveillance and fully autonomous weapons, citing ethical concerns as well as current technological limits on safety and reliability. Anthropic has proposed joint research and development to improve system reliability for these use cases, an offer the Department of Defense has reportedly declined. The firm has also highlighted the contradiction of being labeled a security risk while simultaneously being deemed essential to national security. The standoff has catalyzed a broader industry reaction: employees at Google and OpenAI have issued an open letter of solidarity, stating their refusal to allow their models to be used for mass surveillance or autonomous killing without human oversight, and urging their leadership to take a unified stand.