WASHINGTON, United States — U.S. President Donald Trump has ordered all U.S. federal agencies to stop using artificial intelligence technology developed by Anthropic after the firm refused to grant the Pentagon unrestricted access to its systems.
In a statement posted on the social media platform Truth Social on February 27, Trump accused the AI developer of attempting to influence U.S. military decision-making and said the government would no longer rely on its technology.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars,” Trump wrote. “That decision belongs to your commander-in-chief, and the tremendous leaders I appoint to run our military.”
The directive requires federal departments to immediately halt new use of Anthropic systems, with a six-month phase-out period for agencies currently relying on the company’s products, including the United States Department of Defense.
Trump warned the company to cooperate during the transition or face possible legal action. “Anthropic better get their act together, and be helpful during this phase-out period, or I will use the full power of the presidency to make them comply, with major civil and criminal consequences to follow,” he said.
The order follows a public dispute between the Pentagon and the AI developer over restrictions built into the company’s flagship model, Claude.
Anthropic confirmed on February 26 that it had refused a request from the Pentagon to remove safeguards from the system that limit certain uses of artificial intelligence in military operations.
Chief executive Dario Amodei said the company could not accept the request despite pressure from defence officials, who had warned that failure to comply could result in cancelled contracts or the company being designated a supply-chain risk.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said in a statement.
The Pentagon has maintained that contractors providing artificial intelligence systems to the government must permit “any lawful use” of their technology and remove technical safeguards that restrict operational deployment.

Anthropic said it supports U.S. national security efforts but believes some applications of AI pose unacceptable risks. The company said it would continue to block two uses of its systems: mass domestic surveillance and fully autonomous weapons capable of selecting and attacking targets without human oversight.
According to the company, its AI tools are already widely used across U.S. defence and intelligence operations for intelligence analysis, modelling and simulation, operational planning, cyber operations, and mission support.
Anthropic also said it has taken steps to protect U.S. technological advantages over geopolitical rivals, including cutting off access to firms linked to the Chinese Communist Party, shutting down Chinese-sponsored cyberattacks targeting its systems, and supporting export controls on advanced computing technologies.
Amodei said those measures cost the company hundreds of millions of dollars in lost revenue but were necessary to safeguard democratic societies.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he said.