
OpenAI's Pentagon Deal: 'Opportunistic and Sloppy'

March 3, 2026 · 7 min read · 1,477 words
Tags: AI · OpenAI · Anthropic · Defense · Opinion · Video Summary
CBS News segment on OpenAI's Pentagon deal featuring Bloomberg reporter Katrina Manson
Image: Screenshot from YouTube.

Key insights

  • Sam Altman admitted OpenAI's Pentagon deal 'looked opportunistic and sloppy' and is now adding safeguards to exclude intelligence agencies
  • Anthropic submitted a bid for the Pentagon's $100 million autonomous drone swarming challenge, despite its stated objections to unrestricted military AI
  • Every major AI company has military ties, from OpenAI and xAI on classified networks to Meta partnering with Anduril on battlefield technology
Source: YouTube
Published March 3, 2026
Host: CBS News correspondent
Guest: Katrina Manson, Bloomberg News

This article is a summary of "OpenAI changing deal with U.S. military after backlash." Watch the video →



In Brief

Hours after the Trump administration ditched Anthropic over its refusal to grant the military unrestricted access to its AI, OpenAI struck its own deal with the Pentagon. The backlash was immediate. CEO Sam Altman has since admitted the move "looked opportunistic and sloppy" and is now working to add safeguards. Meanwhile, Bloomberg has revealed that Anthropic itself submitted a bid for the Pentagon's $100 million autonomous drone swarming challenge, complicating the narrative of a company that drew firm ethical lines. Bloomberg reporter Katrina Manson, author of the forthcoming book Project Maven, argues the lines between AI companies' stated principles and their military engagement are getting murkier by the day.

  • $100M: Pentagon drone swarming prize pool
  • 5 companies: major AI firms with military ties
  • 3 limits: new safeguards OpenAI is adding

Background

This article continues the ongoing coverage of the AI-Pentagon conflict. For the full backstory, see our articles on the initial standoff, Dario Amodei's response, and the federal use ban.

New developments since those reports: sources tell CBS News the US military used Claude, Anthropic's AI model, for its attack on Iran over the weekend (0:00). This happened despite the ongoing fallout with the company. OpenAI then moved quickly to fill the vacuum.


OpenAI swoops in, then retreats

The Friday night deal

Hours after the Trump administration publicly distanced itself from Anthropic, OpenAI announced its own deal with the Pentagon (0:24). The company reportedly said it would work on the classified cloud and claimed the deal would also be "comfortable for Anthropic" (3:25). Anthropic has given no public indication that it agrees.

Walking it back

The backlash hit fast. Altman posted on X: "We shouldn't have rushed to get this out on Friday. The issues are super complex and demand clear communication." He added that OpenAI was "generally trying to de-escalate things and avoid a much worse outcome" but acknowledged the move "just looked opportunistic and sloppy" (2:41).

According to the Wall Street Journal, Altman described the situation as "really painful" in an all-hands meeting (3:05). OpenAI is now adding three new safeguards to the agreement:

  1. Classified cloud only. Work will happen on the classified cloud, not at the edge, meaning not on devices in the field (3:47).
  2. No intelligence agencies. OpenAI will not strike deals with the National Security Agency (NSA) or the Defense Intelligence Agency (DIA) (3:51).
  3. No domestic surveillance. The safeguards aim to prevent any risk of mass domestic surveillance, which Manson notes the Pentagon already says it is not permitted to do under US law (4:08).

Manson describes this as "a very frenetic stage" and says it is not clear if OpenAI has done enough to satisfy its own internal critics, let alone external ones (4:14).


Anthropic's own military ambitions

The biggest revelation may be about Anthropic itself. Bloomberg has reported that Anthropic submitted a bid for the Pentagon's $100 million Autonomous Vehicle Orchestrator Prize Challenge (1:10), a competition to build voice-controlled autonomous drone swarming technology. The challenge is run jointly by the Defense Autonomous Warfare Group (DAWG), a unit under US Special Operations Command, and the Defense Innovation Unit (DIU), the Pentagon's Silicon Valley-based technology accelerator (1:21).

Anthropic's bid was not selected (2:00). But according to a person familiar with the matter, the company did not believe the bid crossed its own red lines (2:07).

This aligns with a nuance Manson highlights about Anthropic CEO Dario Amodei's public statements. Amodei has said he opposes fully autonomous weapons, but only because the AI is not yet reliable enough. He is "not against the idea in theory" (2:19).


The full landscape: nobody is sitting out

Manson's reporting paints a picture where every major AI company has some form of military engagement (4:37):

  • OpenAI: Named on two bids for the drone swarming challenge as a secondary contractor. New Pentagon cloud deal under revision.
  • Anthropic: Claude used by the military for the Iran operation. Submitted a bid for the drone swarming challenge.
  • xAI: Recently struck a deal to access classified Pentagon networks. Part of the drone swarming bids.
  • Google: Long-term defense partner. Dropped its Project Maven AI contract in 2018 but later re-engaged with cloud and other services.
  • Meta: Partners with Anduril Industries on military XR visors that bring data to the battlefield. Open-source models used by defense subcontractors.

SpaceX and xAI were also named on bids for the drone swarming challenge (1:47). The question, as Manson puts it, is not whether AI companies will work with the military, but where exactly each one draws the line.


Opposing perspectives

The national security argument

Proponents argue that the US military needs access to the best available AI technology to maintain its strategic advantage. If American companies refuse to provide it, adversaries will not show the same restraint. The Pentagon's interest in AI for command and control, drone coordination, and intelligence analysis reflects a genuine security need that predates the current controversy.

The ethical constraints argument

Critics counter that rushing AI into military applications without solid guardrails risks enabling mass surveillance, autonomous targeting decisions, or other uses that cross ethical lines. The speed at which these deals are being struck, and then revised, suggests the industry is moving faster than its own ethical frameworks can keep up.

The "murky lines" reality

Manson describes the situation as one where "the lines really are so murky" (1:10). A company can oppose autonomous weapons in principle while bidding on autonomous drone swarming in practice. A CEO can call his own deal "sloppy" while negotiating to keep it. The gap between stated principles and actual behavior appears to be widening, not narrowing.


How to interpret these claims

This segment raises important questions, but several factors deserve careful consideration.

The timeline matters

Events are moving extremely fast. The Anthropic fallout, OpenAI's deal, the backlash, and the revisions all happened within days. Positions stated today may look very different by next week. Manson herself notes that we are in "a very frenetic stage" with a lot of "back and forward on the language."

Stated safeguards vs. enforcement

OpenAI's three new safeguards sound meaningful, but their practical enforcement remains unclear. Working only on the classified cloud, for example, does not address what the AI is actually being used for. Excluding the NSA and DIA still leaves dozens of other defense and intelligence entities. The safeguards describe where the technology lives, not what it does.

Everyone's red lines are moving

Anthropic drew firm lines against unrestricted military access, yet submitted a bid for autonomous drone swarming. OpenAI initially said its deal would be fine for everyone, then admitted it was rushed. Google famously dropped Project Maven, then quietly rebuilt its defense relationships. The ethical landscape for AI in military use is still being negotiated in real time, and no company has found a stable position.

What to watch for

The clearest signal will come from the actual terms of these agreements, most of which are classified. Public statements from CEOs are part marketing, part damage control. The real question is whether any AI company will refuse a lucrative military contract on principle when the technology is ready and the government is willing to pay.


Glossary

Classified cloud: A secure, government-controlled network for handling classified information. AI systems running here are isolated from the public internet.

Edge computing: Processing data on devices in the field (like battlefield equipment) rather than in centralized servers. OpenAI is now restricting its Pentagon work to the cloud, not the edge.

DAWG: Defense Autonomous Warfare Group. A Pentagon unit under US Special Operations Command that runs autonomous weapons research programs, including the drone swarming challenge.

DIU: Defense Innovation Unit. The Pentagon's technology accelerator based in Silicon Valley, focused on bringing commercial technology into military use.

Drone swarming: Coordinating multiple drones to act together as a group using AI, responding to voice or text commands from a human operator.

NSA: National Security Agency. A US intelligence agency responsible for signals intelligence and cybersecurity. OpenAI has committed to not working with the NSA under its revised Pentagon deal.

DIA: Defense Intelligence Agency. A US military intelligence agency under the Department of Defense. Excluded from OpenAI's revised Pentagon deal alongside the NSA.

Project Maven: A Pentagon initiative launched in 2017 to use AI for analyzing drone surveillance footage. Google initially supplied the AI but withdrew after employee backlash. Also the title of Katrina Manson's forthcoming book.

Anduril Industries: A defense technology company founded by Palmer Luckey (creator of Oculus VR). Partners with Meta on military XR technology and participates in the drone swarming challenge.

Sources and resources