
OpenClaw at Five Months: Growth, AI Slop, and Saying No

April 18, 2026 · 8 min read · 1,516 words
OpenClaw · AI Security · Open Source · AI Agents · Peter Steinberger
Peter Steinberger on stage at AI Engineer delivering State of the Claw
Image: Screenshot from YouTube.

Key insights

  • OpenClaw receives 16.6 security advisories per day, twice the rate of the Linux kernel, because AI has made it nearly free to generate plausible CVE reports
  • Steinberger's response to the slop storm is institutional rather than technical: a foundation that can hire people, not more AI filters
  • He rejects the dark factory approach, where agents ship code without human review. Software grows iteratively, and the path to the mountain is never a straight line
  • When anything can be prompted in seconds, taste and the ability to say no become the real bottlenecks, not coding
Published April 9, 2026
AI Engineer
Host: Shawn Wang
Guest: Peter Steinberger, OpenClaw

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

OpenClaw just turned five months old, and its creator Peter Steinberger walked onto the AI Engineer stage to deliver the numbers. Roughly 30,000 commits, almost 2,000 contributors, and close to 30,000 pull requests, all from a personal AI agent that didn't exist in October 2025. A commit is a single saved code change; a pull request is a proposed change waiting to be reviewed and merged. A friend calls the growth curve "stripper pole growth." It shows no sign of flattening.

But that's not the story Steinberger came to tell.

The story is what happens when the world's fastest-growing open-source AI project becomes a magnet for AI-generated security reports. In a 44-minute keynote and AMA (an open audience Q&A session) moderated by Shawn Wang (known as swyx), Steinberger walks through 1,142 security advisories, a nation-state attack, an academic paper that ignored the security docs, and his plan to build a Swiss-style foundation to keep the project independent. Along the way, he lands on the two skills he thinks matter most in the age of AI coding: taste, and saying no.

The growth curve, and the two jobs

At the start of the talk, Steinberger asks the room how many are running OpenClaw. Thirty to forty percent of hands go up. Then come the numbers. OpenClaw is, by his count, the fastest-growing project in GitHub's history. Other projects have more stars, but they are mostly educational repositories. "No other software project is that big," he says. "It's around 30,000 commits. We're closing in on 2,000 contributors, soon to be 30,000 PRs."

The contributor chart is a hockey stick with no curve. A friend called the shape "stripper pole growth," and the nickname stuck. Steinberger tells the audience that the team has been adding engineers from Nvidia, Microsoft, Red Hat, Tencent, ByteDance, Telegram, and Salesforce. Chinese companies are his largest source of users by continent.

None of this is his full-time job. He also joined OpenAI in February to work on bringing agents to everyone. Running the foundation, he says, is "running a company on hard mode" because the volunteers can't be directed. He has two jobs, and the project keeps accelerating.

Drowning in slop

Then the tone shifts. "OpenClaw is so insecure," Steinberger says, reading back the memes. In the last five months, the project has received 1,142 security advisories, 16.6 per day. Ninety-nine of them are rated critical. The team has published around 469 and closed roughly 60 percent.

For context: the Linux kernel receives eight or nine advisories a day. The curl project, one of the most widely used open-source tools for fetching data over the web, has received 600 total across its lifetime. OpenClaw is getting twice the kernel's rate and nearly double curl's entire history every five months.

Most of it is slop: AI-generated reports that look plausible but don't describe real risk. "The higher they're screaming how critical they are, the more likely it's slop," Steinberger says. He describes hundreds of people firing up their agents to try to break OpenClaw, not because they expect anything to come of it, but because CVE credits count as career capital. A CVE (Common Vulnerabilities and Exposures) is an officially registered vulnerability in a shared public catalog. The more CVEs your name is attached to, the more visible you get.

The dynamic is a new one. AI tools can now identify "even the most weird multi-chained exploits" (attack methods that string together several vulnerabilities), and they can write them up in convincing English. Filing a report used to take hours of skilled human work. Now it takes minutes of compute.

One example Steinberger reads out has a CVSS score of 10 out of 10, the highest possible. The actual bug is an edge case in the iPhone app, which has not yet shipped, where a read-only permission could be escalated. In practice, he says, nobody is even using that feature. But the CVSS formula doesn't care about practical impact, and the rules give high numbers even to bugs that affect no one.
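The scoring arithmetic behind numbers like that is public. Here is a minimal Python sketch of the CVSS v3.1 base-score formula (metric weights from the FIRST.org specification; the example vectors are illustrative, not taken from the actual advisory). A bug that is network-reachable on paper maxes out the scale even if nobody uses the feature:

```python
# CVSS v3.1 metric weights, per the FIRST.org specification
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # Confidentiality/Integrity/Availability impact
# Privileges Required is weighted higher when the bug escapes its scope
PR = {False: {"N": 0.85, "L": 0.62, "H": 0.27},
      True:  {"N": 0.85, "L": 0.68, "H": 0.5}}

def roundup(x):
    # Spec-defined "round up to one decimal" that avoids float artifacts
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    exploitability = 8.22 * AV[av] * AC[ac] * PR[scope_changed][pr] * UI[ui]
    raw = impact + exploitability
    if scope_changed:
        raw *= 1.08
    return roundup(min(raw, 10))

# Network-reachable, no privileges, no user interaction, full impact,
# scope changed: a perfect 10.0, regardless of whether anyone runs the code path.
print(base_score("N", "L", "N", "N", True, "H", "H", "H"))   # 10.0
# Same bug without the scope change: the classic 9.8 critical.
print(base_score("N", "L", "N", "N", False, "H", "H", "H"))  # 9.8
```

Every input is a property of the bug in isolation. The formula has no term for how many people actually use the affected feature, which is exactly the gap Steinberger is pointing at.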

Real threats exist too. A North Korean package called GhostClaw impersonates the project and drops a rootkit (hidden malware that gives the attacker full control of the machine) on anyone who downloads from the wrong site. A supply-chain flaw in Axios, a widely used JavaScript library for fetching data from the web, swept up OpenClaw because one of the project's own code libraries pulled Axios in without pinning the version. Those are genuine incidents. They just get buried in the slop.
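Pinning is the mitigation Steinberger alludes to. As a sketch (the version number here is illustrative, not the version involved in the incident), the difference in a package.json is small: a caret range like "^1.6.0" lets the install tool silently pull in any newer compatible release, including a compromised one, while an exact version only ever resolves to the release you audited:

```json
{
  "dependencies": {
    "axios": "1.6.0"
  }
}
```

A committed lockfile plus an install command that honors it (npm ci rather than npm install) closes the same gap one level deeper, for transitive dependencies like the one that swept up OpenClaw.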

The fearmongering industry

Steinberger is blunter about another pattern. There is, he says, a whole industry trying to put the project in a negative light: companies selling security products, and universities chasing headlines.

He points to an academic paper called "Agents of Chaos." It devotes four pages to the OpenClaw architecture and zero to the security page that explains how to install it safely. The researchers ran the agent in sudo mode (which requires actively changing code to bypass the default limits) because they wanted to maximize what the agent could break. They skipped sandboxing. They put the agent in a group chat where any user could talk to it. Then they published the results as if this were the default behavior.

A similar pattern played out with a Centre for Cybersecurity Belgium advisory: a remote code execution alert for a flaw that only triggers if the user ignores the recommended setup. "If you actively fight the setup," Steinberger says, "you might be able to get this working."

He is careful to name the real risk as well. Any agentic system with access to your data, untrusted content, and a way to communicate has an unsolved class of vulnerabilities. That is not specific to OpenClaw. It applies to every serious agent. The more power you give an agent, the more it can do for you, and the more you have to understand what it does.

Building Switzerland

The response to all of this is not another AI filter. It is a foundation.

Steinberger is setting up the OpenClaw Foundation as a neutral non-profit, modeled after what Ghostty did. The goal is to hire full-time maintainers, improve code quality, and make sure no single company can capture the project. OpenAI has offered resources, but he is deliberately keeping the headcount balanced across Nvidia, Microsoft, Telegram, Salesforce, Slack, Tencent, and ByteDance. "I'm kind of building Switzerland with the foundation," he says. The last thing holding it up is the American banking system, which he describes as slow and confused by non-American applicants.

The message behind the structure is simple. When AI makes generating plausible attacks trivially cheap, the only sustainable defense is paid humans whose full-time job is to read, judge, and close. More code or more automation can't outrun a problem that is itself code and automation.

The skills that still matter

Swyx closes the AMA with a question about Ryan Carson's "dark factory" workflow, where engineers don't review the code their agents produce. Steinberger disagrees. He has tried it, and he doesn't think good software gets built that way:

"The way to the mountain is usually never a straight line. Sometimes you go a little bit off track and you see something new that inspires you. The first idea you have about your project is very unlikely to be the final project."

Instead, he talks about taste, which he defines from the floor up. The lowest bar of taste, he says, is that the output doesn't stink like AI. Writing with too many dots. Agentic UI with a purple gradient and a colored border on the left. A chatbot that sounds like a customer service agent because it was trained to.

Above taste comes system design. When the agent can write any code, knowing which code is worth writing is the hard part. And above system design, Steinberger names the skill he's had to learn himself: saying no.

"The wildest idea is just a prompt away. And usually one idea is never the problem. It's this idea and this idea and this idea, and how all of that fits together. That's the problem."

What this update really says

Five months in, OpenClaw looks like an early dispatch from a new era of open source. Software gets built faster than ever. It also gets attacked faster than ever, often by the same tools that built it. Volunteer maintainers can't outrun that on their own, and the industry around them isn't always helping.

What Steinberger is doing in response is old-fashioned: hire people, build institutions, make hard choices about what to say no to. The surprise is that this is what an AI-native maintainer ends up recommending. The bottleneck of open source in 2026 is no longer writing code. It is a person deciding what is worth keeping, and what is just noise dressed up as a CVE.

Glossary

Slop: AI-generated content that looks plausible but has low actual value. In security, slop means AI-written reports describing flaws that are either not real or not exploitable in practice.

Security advisory: An official report of a vulnerability in a piece of software, usually with a suggested fix and a severity score.

CVSS: Common Vulnerability Scoring System. A 0-to-10 scale used to rate how severe a vulnerability is. The formula focuses on theoretical impact, not whether anyone is affected in practice.

CVE: Common Vulnerabilities and Exposures. A public catalog of known software vulnerabilities, with a unique ID per issue.

Prompt injection: An attack where hidden instructions in content the AI reads trick it into doing something the user did not ask for.

Sandbox: An isolated environment that limits what software can access. Reduces security risk but also reduces the agent's capabilities.

Remote code execution (RCE): A vulnerability that lets an attacker run their own code on another person's machine. Generally treated as one of the most serious classes of bug.

Supply chain attack: An attack that hits you through a dependency you rely on, not through your own code. If a library you use gets compromised, you get compromised.

Dark factory: A factory that runs fully automated, with the lights off. As a workflow metaphor: a pipeline where AI does all the work without human review.

Foundation: A non-profit organization that owns and governs a project on behalf of its community, so no single company can capture it.
