
Microsoft Breaks Ranks to Back Anthropic Against Pentagon

March 11, 2026 · 7 min read · 1,327 words
AI · Pentagon AI policy · Microsoft Anthropic · AI weapons ethics · government AI contracts
Image: CNBC Squawk Box panel discussing Microsoft's amicus brief supporting Anthropic in its Pentagon dispute. Screenshot from YouTube.

Key insights

  • Microsoft, one of the U.S. government's largest tech contractors, filed a legal brief backing Anthropic. It has more to lose from retaliation than almost any other company.
  • Amazon, an early Anthropic investor, reportedly did not support Anthropic when its CEO, Andy Jassy, met with Secretary of Defense Pete Hegseth.
  • OpenAI signed virtually the same Pentagon deal one day after Anthropic's dispute began, timing Sorkin calls 'demonstrative of how political the decision was.'
  • Anthropic's stance may be attracting engineers and customers who value companies willing to hold a line on AI ethics.
Source: YouTube
Published March 11, 2026
CNBC Television
Hosts: Andrew Ross Sorkin, Joe Kernen, Becky Quick

This is an AI-generated summary. The source video includes demos, visuals and context not covered here. Watch the video → · How our articles are made →

In Brief

Microsoft has filed an amicus brief in support of Anthropic, urging a judge to block the Pentagon's designation of Anthropic as a supply chain risk. An amicus brief is a legal document filed by a party not directly involved in a case; the term literally means "friend of the court." What makes this unusual: Microsoft is one of the largest government technology contractors in the United States, and it has more to lose from White House retaliation than almost any other company in the industry. Meanwhile, Amazon, one of Anthropic's earliest investors, reportedly declined to defend the company: its chief executive officer (CEO), Andy Jassy, reportedly did not back Anthropic when he met with Secretary of Defense Pete Hegseth. The CNBC Squawk Box panel, led by Andrew Ross Sorkin, argues that this contrast reveals how much corporate fear is shaping the AI industry's response to the current administration.


The central claim

The Squawk Box panel's core argument is that Microsoft's legal brief is a meaningful act of corporate courage, made more significant by how few others have been willing to do the same.

Sorkin frames the situation bluntly. Microsoft is one of the biggest government contractors in the country, which means it has the most to lose from political retaliation. And yet it stepped forward. The calculus, Sorkin suggests, may be that Microsoft is so deeply embedded in government systems that removing it would be "supremely painful" for the Pentagon itself, giving the company unusual insulation from blowback.

The contrast the panel draws is stark. About 130 developers in Silicon Valley filed their own amicus briefs in support of Anthropic. But no other major tech company did. Sorkin says he talks regularly about "CEO silence": the pattern of industry leaders staying quiet when political pressure mounts. This case, he argues, is a vivid illustration.

The Amazon angle

The panel reserves particular focus for Amazon. Amazon was among the earliest investors in Anthropic, giving it both a financial stake and a natural reason to defend the company publicly. Yet reportedly, when Amazon's CEO Andy Jassy met with Secretary Hegseth, he did not support Anthropic's position. Sorkin's read: companies like Amazon have been "scared about standing up to this administration."

The panel doesn't fully resolve whether that fear is rational or excessive. But the implicit argument is that silence has costs too, and that watching Microsoft step up while Amazon stayed quiet is instructive.

OpenAI's ambiguous position

Sam Altman, CEO of OpenAI, was publicly supportive of Anthropic when he appeared on CNBC. But OpenAI signed a Pentagon deal with virtually the same terms as Anthropic's disputed contract, and did so just one day after the dispute began. The panel describes this as "opportunistic," and Sorkin calls it "demonstrative of how political the decision was." The implication: if the Pentagon accepted the same terms from OpenAI without controversy, then Anthropic's supply chain risk designation was about politics, not contract substance.

Sorkin does offer a more charitable reading: Altman's public support may well have been genuine, and having multiple government AI contracts is probably good for national security. Still, the optics of signing the same deal the next day are hard to ignore.

The Palantir complication

One detail the panel highlights: Maven, a Pentagon AI program used for military intelligence analysis, is operated by Palantir and runs on Anthropic's AI technology. This matters because it draws Palantir directly into the dispute: if Anthropic is effectively cut off from Pentagon work, Palantir's own defense contracts are affected. The panel also raises a deeply uncomfortable possibility: a tragic incident in which children were reportedly targeted may have involved an AI-generated targeting list connected to the Anthropic-powered system. The panel flags this as speculation; the facts are not confirmed.


Opposing perspectives

The political target argument

Not everyone on the panel reads Microsoft's move as purely principled. One panelist raises the point that Anthropic's internal memo, which referred to a foreign leader in terms the Defense Department found offensive, contributed to making the company a political target. In that reading, Anthropic's difficulties are partly self-inflicted, and Microsoft's backing does not erase that.

Sorkin's counter is direct. The Pentagon accepted virtually identical contract terms from OpenAI. If the terms were acceptable from one company, the rejection of another company's similar terms looks like a political decision, not a security one.

Musk's counternarrative

Elon Musk, who competes with Anthropic through his own AI company xAI, has been publicly critical of Anthropic on X, calling the company hypocritical. The panel acknowledges this but dismisses it as competitive trolling rather than substantive critique. Musk has obvious reasons to want Anthropic weakened.

The autonomous weapons question

The deepest tension in the discussion is not about the legal dispute. It is about the underlying question Anthropic's position raises: should AI systems be used in weapons targeting without human oversight? Dario Amodei, Anthropic's CEO, has argued that AI is not ready for autonomous weapons. These are systems that select and engage targets without a human making the final call. The panel stops short of endorsing or rejecting this view, but the implication is that Anthropic's ethical red lines may reflect a genuine technical judgment worth taking seriously.


How to interpret these claims

The panel makes a compelling case, but several questions deserve careful consideration before accepting the conclusions at face value.

How much of this is about principles, and how much about protection?

Sorkin's argument that Microsoft filed the brief partly because it is so embedded in government systems that retaliation would be impractical is interesting, but it cuts both ways. If Microsoft's protection comes from its indispensability, then its stand is safer than it looks: the risk it takes is real but bounded, which complicates the "corporate courage" framing. A stronger test of courage would be a company with more to lose and less structural protection stepping forward. That has not happened.

The OpenAI deal comparison depends on details not yet public

The claim that OpenAI signed "the same deal" the next day is doing a lot of work in the panel's argument. If the terms were genuinely identical, Sorkin's conclusion is well-supported. But if there were meaningful differences in scope, usage restrictions, or language, the comparison weakens. The panel treats this as settled; it may not be.

Anthropic's values as a business asset

Sorkin argues that standing up for values is actually good for Anthropic's business. Engineers want to work there, and companies want to partner with an organization that holds a line. This is plausible, and there is some evidence for it. But losing Pentagon contracts and becoming politically radioactive could damage Anthropic's commercial prospects in ways that take longer to show up. The "values as brand" thesis is attractive but unproven at this scale.

The broader industry silence

Perhaps the panel's most credible observation is the simplest one: no other major tech company filed a brief. Whatever one thinks of Anthropic, Microsoft, or the Pentagon's rationale, that silence is a data point. The panel frames it as fear; others might call it prudence or strategic neutrality. Either way, it tells you something about how the AI industry relates to government power right now.


Practical implications

For AI companies

The case illustrates that having clear, public values is no longer just an internal culture question. It has legal, regulatory, and reputational consequences. Companies that have not thought through where their ethical lines are may find themselves unprepared when government pressure arrives.

For policymakers and observers

The dispute raises a legitimate question about how the U.S. government should engage with AI companies that have safety restrictions built into their products. If the Pentagon can designate a company as a supply chain risk for asking questions about how its AI is used, the incentive for AI developers to build in safety checks is weakened. That tradeoff deserves more public debate than it has received.


Glossary

Amicus brief: A legal document filed by someone not directly involved in a case, offering arguments to help the court decide. Literally "friend of the court" in Latin.
Temporary restraining order (TRO): A court order that temporarily blocks an action until the case can be properly heard. Intended to prevent harm while legal proceedings unfold.
Supply chain risk designation: A government label that flags a company as a security risk in military supply chains, restricting its products from government contracts.
Maven: A Pentagon AI program that uses AI for military intelligence analysis. Palantir operates it using Anthropic's AI technology.
Human-in-the-loop: A system design where a human must approve AI decisions before they are carried out. The opposite is autonomous operation, where AI acts without human sign-off.
Autonomous weapons: Weapons that can select and engage targets without human intervention. Anthropic's CEO argues AI is not yet ready for this role.

