
Anthropic's Project Glasswing Aims to Secure World's Code

April 8, 2026 · 5 min read · 1,050 words
Anthropic · AI Security · Claude · Open Source

[Image: Project Glasswing launch video showing partner organization logos, including AWS, Apple, Google, Microsoft, and NVIDIA, united under Anthropic's initiative to secure critical software. Screenshot from YouTube.]

Key insights

  • Anthropic gave security experts access to its new AI model to find flaws in software before releasing the model to everyone: the opposite of Facebook's old motto, "move fast and break things."
  • The model found a bug that had sat hidden for 27 years in a widely used operating system, showing that AI can catch flaws even experienced security experts miss.
  • $100M in free model access means Anthropic is paying to have Mythos used defensively. It is not making money from this directly; it is building trust.
  • The Mythos leak happened on March 30, and the official launch on April 7 reads differently with that in mind: Anthropic appears to have accelerated the announcement after the leak revealed the project before it was ready.
Source: Anthropic
Published: April 7, 2026
Hosts: Anthropic

This is an AI-generated summary. The source video may include demos, visuals and additional context.

Watch the video · How the articles are generated

In Brief

Anthropic has officially launched Project Glasswing, a partnership of 12 founding organizations: Anthropic, AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The goal is to find and patch critical security flaws in software before attackers can exploit them. At the center of the project is Claude Mythos Preview, an unreleased top-tier model that Anthropic says has become so good at code that it can match professional security researchers at finding and exploiting flaws. Anthropic is contributing $100 million in free model access to Glasswing participants, plus $2.5 million to the Linux Foundation and $1.5 million to the Apache Software Foundation.

What is Claude Mythos Preview?

Mythos is a general-purpose top-tier model that Anthropic has not specifically trained for cybersecurity. It became exceptionally good at security as a side effect of being extremely good at code. As one speaker in the launch video explains, the model was trained to be good at code, and being good at code turned out to mean being good at finding and exploiting vulnerabilities in code.

The benchmark numbers back up the claim. On Mythos CyberGym, Anthropic's own security test, Mythos Preview scores 83.1% against Claude Opus 4.6's 66.6%: Mythos solves roughly five out of six security tasks where the previous top model managed two out of three. On SWE-bench Verified, a widely used test of how well AI models fix bugs in real software, it scores 93.9% compared to 80.8%. To put that in perspective, the failure rate drops from roughly one task in five to about one in sixteen.
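The percentage gaps are easier to read as failure rates. A minimal Python sketch of that comparison; the scores are the published figures, while the failure-rate framing is this article's, not Anthropic's:

```python
# Compare the published benchmark scores as failure rates rather than pass rates.
# Scores are the figures quoted in the article; nothing else is from Anthropic.

def failure_rate(score_pct: float) -> float:
    """Fraction of benchmark tasks the model fails, given a percent score."""
    return 1 - score_pct / 100

benchmarks = {
    "Mythos CyberGym": {"Mythos Preview": 83.1, "Claude Opus 4.6": 66.6},
    "SWE-bench Verified": {"Mythos Preview": 93.9, "Claude Opus 4.6": 80.8},
}

for name, scores in benchmarks.items():
    old = failure_rate(scores["Claude Opus 4.6"])
    new = failure_rate(scores["Mythos Preview"])
    # e.g. SWE-bench: failures drop from ~19.2% to ~6.1%, about 3.1x fewer
    print(f"{name}: failures drop from {old:.1%} to {new:.1%} ({old / new:.1f}x fewer)")
```

The same absolute gap matters more near the top of the scale: 80.8% to 93.9% cuts the number of failed tasks by more than a factor of three.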

What makes Mythos especially significant for security work is its ability to chain together vulnerabilities: finding two, three, sometimes five separate flaws that each look harmless in isolation, then combining them into a single sophisticated exploit. This is exactly how skilled human attackers operate, and it requires thorough analysis across all the code in an entire software project, the kind of work that previously took an experienced security researcher a full day.

Anthropic is not releasing Mythos widely. As one speaker states directly: "We won't be releasing this model widely." It is available to Glasswing partners via the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure AI Foundry. Pricing for API access (the programming interface used to interact with the model) is $25 per million input tokens and $125 per million output tokens. Tokens are the unit of measurement for text and code sent to and from the model. Anthropic has committed to publishing a public report on Glasswing findings within 90 days.
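At those rates, the cost of a request is simple arithmetic. A hedged sketch: the per-token prices are the published figures, but the token counts in the example are purely illustrative assumptions, not numbers from the launch.

```python
# Estimate Mythos API cost at the published rates:
# $25 per million input tokens, $125 per million output tokens.

INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the published per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative only: feed the model a 200,000-token codebase and
# receive a 10,000-token vulnerability report.
cost = api_cost(200_000, 10_000)
print(f"${cost:.2f}")  # $6.25
```

Because output tokens cost five times as much as input tokens, scanning large codebases (mostly input) is far cheaper than generating long reports (mostly output).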

What they found

Working with Glasswing partners, Anthropic has already scanned open-source code across every major platform. A security researcher in the video summarizes the experience simply: "I found more bugs in the last couple of weeks than I found in the rest of my life combined."

The most concrete example: a 27-year-old bug in OpenBSD, an operating system used in everything from banks to servers that keep the internet running. The flaw allowed anyone to crash any OpenBSD server by sending a small, specific packet of data. It had been sitting, undetected, in code reviewed by expert humans for nearly three decades.

On Linux, the team found multiple serious flaws that let an ordinary user with no special permissions gain full administrator access simply by running a standard system utility already present on the machine.

A 16-year-old vulnerability was also found in FFmpeg, the open-source multimedia framework embedded in countless applications and services worldwide.

All of these were disclosed responsibly to the software maintainers before going public. The patches have been deployed. Anyone running updated versions of these systems is no longer exposed.

The partnership and the money

Project Glasswing currently includes 12 founding partners and more than 40 organizations in total. The founding partners, listed in the video description, are: Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.

Anthropic is putting real money behind the promise. $100 million in free model access gives participating organizations the ability to use Mythos at scale without paying for it themselves. $2.5 million goes to the Linux Foundation, specifically to the Alpha-Omega project and the Open Source Security Foundation (OpenSSF), which coordinate security improvements across open-source infrastructure. $1.5 million goes to the Apache Software Foundation, which maintains a wide range of popular open-source tools for servers.

The goal is to give defenders a head start. As one speaker puts it, "No single organization sees the whole picture and can tackle this on their own." Getting the organizations that maintain the world's most critical code equipped with Mythos before the model is available more broadly is designed to shrink the window of exposure.

The dual-use problem: a tool for both sides

Anthropic is open about the fundamental dilemma. The same capabilities that make Mythos useful for defenders make it dangerous in the wrong hands. If a model can find and chain vulnerabilities at the level of a professional security researcher, it can just as easily help an attacker do the same thing.

This is not a problem that can be solved by keeping Mythos secret indefinitely. More powerful models will come, from Anthropic and from others. The only solution, as Anthropic sees it, is to make sure that the people who maintain the world's most critical software have already scanned and patched their code before models like this become widely available.

The video closes on a long view: this is a multi-year effort, not something that gets solved overnight. The goal is a world where the systems that run banks, hospitals, and the internet are meaningfully safer than they are today.

One final thing: the timeline matters. The Mythos leak accidentally exposed thousands of internal files and sent stock prices of cybersecurity companies falling. That happened on March 30. The official launch came on April 7. It is hard to look at the polished announcement without thinking that Anthropic moved it forward because the leak had already made Mythos public.

Glossary

Zero-day vulnerability: A software flaw unknown to the people who maintain the software, meaning defenders have had zero days to prepare a fix.
Vulnerability chaining: Combining multiple small security flaws that look harmless on their own into a single powerful attack sequence.
Privilege escalation: A bug that lets a regular user gain administrator access to a system they should not be able to control.
Responsible disclosure: Telling the people who maintain the software about a security flaw privately so they can fix it before attackers find out.
Open-source software: Software whose code is publicly available for anyone to read, use, and improve. Linux, FFmpeg, and OpenBSD are all open-source.
Frontier model (top-tier model): The most advanced AI model a company has built, called "frontier" because it sits at the edge of what is technically possible.
API (Application Programming Interface): A programming interface that lets software talk to other software. Here it gives partners access to Mythos from their own systems.
Token: The unit of measurement for text and code sent to and from an AI model. Roughly 750 English words equals 1,000 tokens.
