Anthropic standoff and America’s new power struggle

Uriel Irigaray Araujo
Update Time: Saturday, February 28, 2026

A simmering feud between Anthropic and the US Department of Defense has burst into the open, and it is, as Foreign Policy put it, “a very bad sign”. The dispute centers on a $200 million Pentagon contract granting the military access to Anthropic’s Claude model for classified tasks. What looked like another milestone in civil-military AI cooperation has instead exposed deep tensions between private-sector guidelines and state imperatives of control, coercion, and warfighting.

The clash was almost inevitable: Anthropic’s internal policies explicitly ban the use of its models for violence, weapons development, lethal autonomous systems, and mass surveillance. The Pentagon, in turn, operating under a radically different logic, expects frontier AI to be available for “all lawful purposes” (whatever “lawful” means), including precisely those domains.

Tensions escalated after reports that Claude assisted in analytical tasks tied to the politically explosive capture of Venezuela’s Nicolás Maduro last month. Defense Secretary Pete Hegseth reportedly issued an ultimatum: remove the guardrails or face contract cancellation, supply-chain penalties, or even the invocation of the Defense Production Act.

So much for the narrative that AI governance is a shared, consensual project. By now this is not merely a contractual dispute; it is a power struggle over who ultimately decides how artificial intelligence is deployed in the real world.

At the center of this storm stands Dario Amodei, Anthropic’s co-founder and CEO. Amodei arguably occupies a peculiar and increasingly isolated position among Big Tech moguls. Unlike Sam Altman, Demis Hassabis, or a more flamboyant figure such as Elon Musk, Amodei has built his public persona around restraint rather than speed, and risk mitigation rather than “alpha” domination. He has even warned that AI could wipe out up to 50 percent of entry-level white-collar jobs, describing it as an “unusually painful” disruption already visible in law, media, and software.

Amodei has also been arguing that deceptive or authoritarian AI misuse, and the concentration of power it enables, could become a public threat. It is hard to see how such rhetoric can be reconciled with Pentagon contracts.

Amodei’s discomfort with the overnight concentration of power in a handful of AI firms, while a fair point, is notable, to say the least, given that Anthropic itself is valued at roughly $350 billion, with Amodei’s personal net worth estimated near $7 billion. Be that as it may, he has consistently called for international regulation akin to nuclear nonproliferation treaties, precisely to prevent mass surveillance and uncontrolled weaponization. In the current context, Amodei has essentially framed Anthropic’s red lines as independent safeguards against what he views as an unreliable steward of unprecedented power.

This places Amodei at odds not only with the Pentagon but with much of Big Tech, which thus far has aligned itself, sometimes quite enthusiastically, with Donald Trump. One may recall that some Silicon Valley CEOs and top executives have even been formally commissioned as US Army officers, receiving military rank to advise on cyberwar, AI, and data-driven warfare.

As I pointed out in previous pieces, Trump’s policies, including tariffs, despite the populist rhetoric, have been deeply shaped by AI, crypto, and tech interests. From Ukraine and Venezuela to Greenland and Central Asia, Washington’s strategic priorities increasingly reflect the material needs of advanced computation: energy, data, and above all minerals.

The quest for rare earths and critical minerals should not be underestimated. AI hardware, data centers, and semiconductors require vast quantities of gallium, germanium, lithium, and rare earth magnets, many of which remain dominated by Chinese processing capacity (of up to 90 percent in some categories). To counter this, the Trump administration has mobilized over $30 billion in investments, equity stakes, and bilateral deals (with countries ranging from Ukraine to Saudi Arabia and the DRC), explicitly tying AI leadership to mineral security.

Programs like Project Vault and the Pentagon’s AI-driven OPEN pricing mechanism are examples of how deeply tech firms are embedded in national security planning.

This is not new. As I have argued elsewhere, Big Tech’s relationship with the US intelligence and defense apparatus is long-standing and structural, and it has deepened under Trump’s administration. Companies such as Palantir, Meta, OpenAI, and SpaceX have provided surveillance tools, metadata access, and battlefield connectivity, thereby reinforcing what is often called the “Deep State”. Musk himself, despite his bitter feud with Trump, remains one of the Pentagon’s largest contractors.

What is new is the emergence of visible fractures within this tech-security complex. The Anthropic feud highlights a growing divide between those willing to provide unrestricted AI “for all lawful purposes” and those insisting on ethical vetoes (and corporate control). OpenAI, Google, and xAI have expanded their cooperation with the Pentagon into classified environments with relatively few constraints. Anthropic has not, and the state response has been decidedly heavy-handed.

Is the Big Tech “deep state,” so to speak, becoming divided? Thus far, the answer appears to be yes. Ideological differences, antitrust pressures, job-loss fears, and global reputational risks are pulling executives in different directions. It remains to be seen what other interests, beyond purely ethical considerations, could be behind such developments.

Trump’s presidency, which has been “at war” with sectors of the Deep State, is once again confronted with a paradox. Whether driven by Big Tech or targeted by its criticism, his foreign policy cannot escape the gravitational pull of AI capital and expertise. The Anthropic-Pentagon standoff, in any case, is a warning sign that the marriage between Silicon Valley and the national security state is no longer as monolithic as once thought. In other words, the struggle over who controls artificial intelligence, and for what ends, has entered a new and dangerous phase.


Uriel Araujo, researcher with a focus on international and ethnic conflicts.

