
The Department of Defense’s clash with Anthropic over the integration of artificial intelligence into military operations, and over who sets the limits on usage, reached a peak this week when Defense Secretary Pete Hegseth gave the AI company until 5:01 p.m. ET Friday to accede to the government’s demands. Anthropic has not budged, to date at least, but the battle between military and industry over AI is just getting started. The Pentagon is colliding with the private companies that control AI in a way that has not been tested in the post-World War II era.
On Thursday, Anthropic refused Defense Secretary Pete Hegseth’s demand to loosen certain safeguards on its models for military use, including mass domestic surveillance or fully autonomous weapons, because doing so would violate company policies. CEO Dario Amodei’s decision comes after the Pentagon warned it could terminate the partnership if the company refuses to support “all lawful uses.”
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei wrote in a statement on Thursday. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
The standoff highlights the emerging reality that private firms developing frontier AI may seek to set their own limits on how the technology is deployed, even in national security contexts.
In July, the Defense Department awarded contracts worth up to $200 million each to four companies (Anthropic, OpenAI, Google DeepMind, and Elon Musk’s xAI) to prototype frontier AI capabilities tied to U.S. national security priorities. The awards signal how aggressively the Pentagon is moving to bring cutting-edge commercial AI into defense work.
The urgency is reflected in internal Pentagon planning as well. A January 9 memorandum outlining the military’s artificial intelligence strategy calls for the U.S. to become an “AI-first” fighting force and to accelerate integration of leading commercial AI models across warfighting, intelligence, and enterprise operations.
“There are no winners in this,” Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, told CNBC in a recent interview about the standoff between the Pentagon and Anthropic. “It leaves a sour taste in everyone’s mouth.”
What it does do, though, is mark a shift: a departure from decades of defense innovation during which governments themselves controlled the technology as it was created.
“For most of the post–World War II era, the U.S. government defined the frontier of advanced technology,” said Rear Admiral Lorin Selby, former chief of naval research and current general partner at Mare Liberum, an investment firm that specializes in maritime technology and infrastructure. “It set the requirements, funded the foundational research, and industry executed against government-driven specifications. From nuclear propulsion to stealth to GPS, the state was the primary engine of discovery, and industry was the integrator and manufacturer.”
AI, Selby said, has inverted that model.
“Today the commercial sector is the primary driver of frontier capability. Private capital, global competition, and commercial data scale are advancing AI at a pace that traditional government R&D structures cannot easily replicate. The Department of War is no longer defining the edge of what is technically possible in artificial intelligence — it is adapting to it,” he said.
United States Secretary of War Pete Hegseth speaks during a visit to Sierra Space in Louisville, Colorado, on Monday, Feb. 23, 2026. (Aaron Ontiveroz | Denver Post | Getty Images)
This reversal in the balance of power over technology carries both opportunity and risk.
“We shouldn’t be in a place where private companies feel that they have leverage over the U.S. government or Western allies because of the technological capability they are providing,” said Joe Scheidler, a former associate director and special advisor at the White House and co-founder and CEO of AI start-up Helios. “Technologists should build and do that responsibly, but governments should be the entities making the decisions.”
Anthropic and the DoD did not respond to requests for comment.
Why the military needs private AI
Public-private partnerships have long supported U.S. defense innovation, from World War II industrial mobilization to modern aerospace and cybersecurity programs. But artificial intelligence is different because the most advanced capabilities are increasingly concentrated in commercial firms rather than government labs.
“Strong public-private partnerships are what gives America its edge,” Scheidler said. “You will not find a more dynamic and innovative talent pool than that of the American entrepreneurial community. The idea of trying to replicate that level of innovation within government itself … is difficult.”
That concentration is precisely why governments seek partnerships, but according to Selby, the dependency is driven primarily by speed. “The innovation cycle in venture-backed firms moves in months. Traditional acquisition cycles move in years. Without commercial AI providers, the government would be slower, less adaptive, and far more expensive,” he said.

When critical national security tools are developed by private companies, “the main change is that the government no longer fully controls the development of its most advanced technological tools,” said Betsy Cooper, director of the Aspen Policy Academy and former advising attorney for the U.S. Department of Homeland Security.
Commercial AI systems are typically built first for broad markets rather than military missions, which can create gaps between how companies design their technology and how governments want to deploy it, Cooper said.
That misalignment can become more pronounced when corporate policies, reputational concerns, or global customer pressures conflict with government objectives, a dynamic now visible in the Anthropic dispute.
“Companies may not want to risk negative reaction from their customer base if their product is used for highly controversial reasons — for instance, to create autonomous lethal weapons or commit preemptive killings before crimes are committed,” Cooper said.
Government has longer-term leverage
Despite the shift toward commercial technology, defense leaders are unlikely to relinquish control over mission-critical systems.
“The first thing to understand is that, from what I have seen to date, the DoD is not going to give up final control,” said Brad Harrison, founder of Scout Ventures, an early-stage venture capital firm investing at the intersection of national security and critical technology innovation. “The government still wants to understand everything that goes into it and all the dependencies and risks.”
Harrison, a former U.S. Army Airborne Ranger and West Point graduate, said AI could eventually influence decisions such as how to intercept incoming threats, so “the government is going to be extremely cautious with how they let AI interact with those data layers.” “Nobody wants to be the person responsible for Skynet,” he added, referring to the fictional AI from the “Terminator” universe that caused a nuclear war.
Governments also retain powerful tools to influence companies, including procurement decisions, export controls, and regulatory authority. “The government has a lot of leverage,” Harrison said. “If you don’t want to work with them, they have a lot of ways to make that a very difficult decision.”
But leverage flows in both directions, at least for now, according to Selby. “In the short term, companies with scarce AI talent and proprietary models may hold significant influence. In the long term, sovereign governments retain regulatory authority, contracting power, funding scale, and, if necessary, legal compulsion,” he said.
The most important question, in Selby’s view, is “whether we build a durable public-private compact that treats AI as foundational national security infrastructure rather than just another vendor relationship.”
Risks in the new military-Silicon Valley industrial complex
Experts say the issue is ultimately less about whether companies or governments hold permanent leverage and more about how the relationship evolves as AI becomes central to national power.
“If we build alignment and resilience into the public-private relationship, AI can strengthen national security while preserving innovation,” Selby said. “If we fail to do so, we risk a future in which capability is abundant but alignment is brittle.”
There are many new forms of risk in the emerging military-Silicon Valley industrial complex. For example, reliance on externally developed AI could introduce vulnerabilities if systems fail unexpectedly or become unavailable, particularly if military units grow accustomed to them during operations.
“Over-reliance could prove deadly,” said Shanka Jayasinha, founder of OntoAI, a company that develops AI tools for military, healthcare, financial, and enterprise organizations, describing scenarios in which special operations units depend on AI-enhanced mission-coordination tools during deployments. If those systems fail after prolonged use, “many lives would be in danger,” he said.
Vendor lock-in is another concern. As AI platforms become embedded in workflows, replacing them may become difficult. “With the current speed of progress in AI, it is tough to unseat any incumbent,” Jayasinha said.
Harrison, however, says one risk the Pentagon won’t expose itself to is being captive to a single company. “The U.S. government is not going to be dependent on any one Silicon Valley company,” he said. “They will very methodically test systems, control the data layer, and move step by step.”
In fact, the Pentagon made its view of Anthropic, or any single company, very clear in a post on X from Under Secretary of War for Research and Engineering Emil Michael on Thursday night: “It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
Anthropic said in its statement that should the government “offboard” Anthropic, “we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”
One approach is building what some technologists call “sovereign AI architectures”: systems designed to allow governments to maintain independence from vendors while still benefiting from commercial innovation.
“We talk a lot internally about this notion of sovereign intelligence and vendor independence,” Scheidler said, contending that the U.S. ecosystem remains broad enough to prevent over-reliance on any single provider. “There are new ideas emerging on a daily basis, and we don’t have to rely on one vendor to do that,” he said.
