
Anthropic Helps US Strike Iran. The Pentagon Wants Them Out.

April 17, 2026 · 6 min read · 1,119 words
Anthropic · Claude · AI Security · AI and Military · AI Regulation
Gregory Allen interviewed on Bloomberg Television about Anthropic's Mythos model
Image: Screenshot from YouTube.

Key insights

  • Mythos was not deliberately trained for cyber capabilities. They emerge as a side effect of getting extremely good at code. That means other frontier models are likely to develop the same skills as their coding ability improves
  • Allen calls the Pentagon's supply chain risk label on Anthropic completely unreasonable. That category is normally reserved for suspected Chinese intelligence fronts, not for critical national security suppliers
  • The shift in the Iran war is not just the number of targets struck. Behind 1,000 strikes sit 10,000 to 20,000 micro-decisions no human analyst team could have produced
  • The 6 to 8 month lead is the most valuable thing Anthropic holds right now. Allen's question is whether the US uses it to patch critical infrastructure or burns it in a contracting fight
Source: YouTube
Published: April 17, 2026
Host: Bloomberg Television
Guest: Gregory Allen, Wadhwani AI Center, CSIS

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

Gregory Allen is senior adviser at the Wadhwani AI Center at CSIS and one of the most cited US voices on AI and national security. On Bloomberg Television on April 17, he made a striking claim: Anthropic is becoming comparable to the US National Security Agency (NSA) in raw cyber capability.

Mythos can run the entire cyber kill chain on its own. It finds the vulnerability, exploits it, escalates access, and finishes the intrusion without a human hacker in the loop. Anthropic has chosen to keep the model off the open market, giving only 40 selected companies, banks, and government agencies access so they can patch systems before the capability leaks.

Meanwhile, the company is getting hit by the same public sector it's trying to help. The Pentagon has labeled Anthropic a supply chain risk, the same category the US reserves for suspected Chinese intelligence fronts. That's happening while Anthropic's technology is being used every day in the ongoing Iran war.

An NSA in private hands

Allen's key observation is that the cyber capability isn't something Anthropic deliberately trained into the model. The company trained it to be very good at writing code. As Allen puts it:

"If you're really, really good at writing computer code, it's not that far away to be really, really good at hacking in computer code."

That's the central claim. Hacking skill is a byproduct of coding skill, not a separate thing. Mythos can find a bug, exploit it, walk the entire kill chain, and execute a successful attack autonomously.

Why compare it to the NSA? Because cybersecurity has been gripped by a massive labor shortage for 20 years. Hackers are expensive, hard to train, and rare. When a single model can do most of the job without a human operator, Anthropic starts to look comparable to the NSA in terms of what the company is actually capable of doing.

The patch race

Mythos is a dual-use capability: it can be used for offense, but also for defense. The new bar Allen sketches out is that all new software has to pass what he calls the Mythos test before it ships. Can it survive this gauntlet?

The problem is the transition. The internet runs on enormous volumes of open source code maintained by volunteers, and on critical infrastructure run by banks and energy companies. The amount of code that needs to be reviewed and patched over the next 12 months is massive.

At the same time, Allen says Anthropic has a 6 to 8 month lead on every other frontier lab in the private sector, and that the US government isn't even on the field. That's the window we have.

Actor | Status
Anthropic | Has Mythos today; sharing with 40 selected companies
Other frontier labs (OpenAI and others) | 6 to 8 months behind
Hacker communities | Not there yet, but coming
China | Allen's biggest worry
US government | "Not even trying," in Allen's words

The estimate is Allen's, not an official figure. But the framing is clear: the window is short, it's valuable, and it can be wasted.

The Pentagon paradox

Here is where the story gets strange. The US government needs Anthropic in order to have Mythos-class capability at all. Allen says directly that "the only way that they can get access to these capabilities is to work with private industry." Anthropic is one of the most important suppliers to intelligence and warfighting right now.

And yet the Pentagon has labeled the company a supply chain risk, the kind of designation the department uses when a vendor is exposed as a front for Chinese intelligence. Allen's verdict is blunt:

"It's completely unreasonable. And the government is in this moment of utter confusion because on the one hand they're saying, Anthropic, you've got to help us secure the financial arteries of our economy on cyber. On the other hand, we don't trust you at all. And actually we'd prefer if you go out of business."

The dispute is rooted in contracting terms, not in any suggestion that Anthropic is actually a foreign front. But the practical effect is the same: the 6 to 8 month window, which Allen calls critical, can be burned through before the public sector even gets to use it.

The Iran war and Project Maven

The concrete example Allen brings up is the ongoing Iran war, where the US military is using Anthropic models every day. The entry point is Project Maven, the Pentagon's program for bringing AI into intelligence and targeting.

Project Maven started modestly back in 2018 and 2019 by analyzing surveillance video from drones. The AI could look at a satellite image and say: this part is empty water, but here, there's a warship.

Today the capability is something else. With large language models like Anthropic's underneath, the AI can say: yesterday this area was empty, today there are ten warships here, they're moving in this direction, and here is a first draft of the intelligence report on what they're likely doing.

Era | What the AI could do | What analysts had to do
Project Maven, 2018–2019 | Identify objects in images | Write the entire intelligence report
Mythos era, today | Identify, compare across time, draft reports | Verify and decide

The numbers Allen cites are staggering. The US struck 1,000 targets in 24 hours during the first phase of the Iran war. The previous benchmark was about 50 targets a day. Many of the targets were mobile, many were disguised.

Behind those 1,000 strikes, Allen says, sit 10,000 to 20,000 high-quality intelligence decisions made in the same window. His conclusion is simple: the US government could not have done this without AI.

The real shift

Most of the attention on Mythos has been about benchmarks and coding ability. Allen's point is different. The real shift is that one company now holds a capability on par with state intelligence, and that capability is being delivered into an active war from day one.

The questions Allen leaves hanging:

  1. Can the US convert a 6 to 8 month lead into patched critical infrastructure, or does the window go to waste?
  2. When open-weight models get good enough at coding, they get the same cyber capability for free. Who controls them then?
  3. If Anthropic is critical to national security today, why is the Pentagon labeling them a supply chain risk?

Allen's own read is that the long-term outlook is good because Anthropic has chosen a strategy of engagement and support. What the company and the world face in the short term, he says, is "an incredibly messy middle."

Glossary

Term | Definition
Cyber kill chain | The steps an attacker has to walk through to succeed: discover a system, find a vulnerability, exploit it, escalate access, and exfiltrate data
Computer vision | AI that "sees" and interprets images or video; what Project Maven used in 2018 to tell empty water from a warship
Frontier model | The most advanced AI models available, today built by a handful of companies including Anthropic, OpenAI, and Google DeepMind
Supply chain risk | The Pentagon's official label for vendors deemed a security threat, normally used against companies suspected of being Chinese intelligence fronts
Project Maven | The Pentagon's 2017 program for bringing AI into intelligence and targeting; started by analyzing drone video, now uses large language models
Vulnerability | A flaw in software that lets an attacker in; patches close vulnerabilities if hackers don't get there first
