Dario Amodei's Statement on Anthropic's Discussions with the Department of War

Anthropic's CEO Dario Amodei has outlined the company's extensive AI deployment across national security agencies while affirming that Anthropic will not agree to the Department of War's demands to remove safeguards against mass domestic surveillance and fully autonomous weapons, despite threats of removal from government systems.

Anthropic | Feb 26, 2026

Dario Amodei has expressed a deep conviction in the critical importance of leveraging AI to protect the United States and fellow democracies, and to counter autocratic adversaries.

Anthropic has proactively worked to deploy its models to the Department of War and the intelligence community. The company was the first frontier AI company to deploy models on the US government's classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, including intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

Anthropic has also taken steps to defend America's AI leadership, even when doing so conflicts with the company's short-term financial interests. The company chose to forgo several hundred million dollars in revenue by cutting off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and advocated for strong export controls on chips to maintain a democratic advantage.

Anthropic acknowledges that the Department of War, not private companies, makes military decisions. The company has never raised objections to particular military operations nor attempted to restrict use of its technology in an ad hoc manner.

However, in a narrow set of cases, Anthropic believes AI can undermine, rather than defend, democratic values. Some uses also fall outside the bounds of what today's technology can safely and reliably accomplish. Two such use cases have never been part of Anthropic's contracts with the Department of War, and the company believes they should not be included now:

  • Mass domestic surveillance. Anthropic supports the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life, automatically and at massive scale.

  • Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate the selection and engagement of targets) may prove critical for national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. Anthropic will not knowingly provide a product that puts America's warfighters and civilians at risk. The company has offered to work directly with the Department of War on R&D to improve the reliability of these systems, but the Department has not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that highly trained, professional troops exhibit every day. Such weapons need to be deployed with proper guardrails, which do not exist today.

To Anthropic's knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of Anthropic's models within the armed forces to date.

The Department of War has stated it will only contract with AI companies that accede to "any lawful use" and remove safeguards in the cases mentioned above. The Department has threatened to remove Anthropic from its systems if the company maintains these safeguards; it has also threatened to designate Anthropic a "supply chain risk" (a label reserved for US adversaries, never before applied to an American company) and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory: one labels the company a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change Anthropic's position: the company cannot in good conscience accede to the Department's request.

It is the Department's prerogative to select the contractors most aligned with its vision. But given the substantial value that Anthropic's technology provides to the armed forces, the company hopes the Department will reconsider. Anthropic's strong preference is to continue serving the Department and its warfighters, with the two requested safeguards in place. Should the Department choose to offboard Anthropic, the company will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Anthropic's models will remain available on the expansive terms proposed for as long as required.

Anthropic remains ready to continue its work supporting the national security of the United States.