Skip to content
Back to articles

OpenClaw and the Age of Personal AI Agents

March 6, 2026ยท10 min readยท2,012 words
AIAI AgentsOpenClawOpen SourceSecurity
Peter Steinberger speaking with Lex Fridman about OpenClaw and AI agents
Image: Screenshot from YouTube.

Key insights

  • OpenClaw became the fastest-growing GitHub project in history with over 180,000 stars, built from a 1-hour prototype that connected WhatsApp to Claude Code
  • Prompt injection remains an unsolved industry-wide problem, and giving an AI agent system-level access to your computer multiplies the potential damage
  • Steinberger claims AI agents will replace 80% of apps by acting as a universal interface between users and services
  • Both Meta and OpenAI have made acquisition offers, but Steinberger insists OpenClaw must remain open source
SourceYouTube
Published February 12, 2026
Lex Fridman
Lex Fridman
Hosts:Lex Fridman
OpenClaw
Guest:Peter Steinberger โ€” OpenClaw

This article is a summary of OpenClaw: The Viral AI Agent that Broke the Internet - Peter Steinberger | Lex Fridman Podcast #491. Watch the video โ†’

Read this article in norsk


In Brief

Peter Steinberger, creator of OpenClaw, joins Lex Fridman for a three-hour conversation about what happens when you give an AI agent full access to your computer. OpenClaw is an open-source AI agent framework that has become the fastest-growing project in GitHub history, reaching over 180,000 stars in roughly three months. Steinberger describes the shift from chatbots that talk to agents that act as one of the most important transitions in AI. But that power comes with real risks: prompt injection (where hidden instructions trick an AI into doing something the user did not ask for) remains unsolved, and the security implications of system-level access are far from settled. Both Meta and OpenAI have reportedly made acquisition offers, while Steinberger insists the project must stay open source.

180,000+
GitHub stars in ~3 months
6,600
commits in January alone
80%
of apps agents could replace

From prototype to phenomenon

The origin story of OpenClaw reads like a startup parable compressed into fast-forward. Steinberger describes wanting a personal AI assistant since April 2025, but assuming the major AI labs would build one (6:23). When November came and nothing had materialized, he built the first prototype in a single hour by hooking WhatsApp to Claude Code (11:23).

That prototype quickly revealed something unexpected. The agent taught itself to handle audio messages by finding ffmpeg (a media-processing tool) and the OpenAI Whisper API (a speech-to-text service) on Steinberger's system, without being explicitly programmed to do so (15:36). This was, according to Steinberger, the moment he realized the project was something genuinely new.

The growth that followed was explosive. OpenClaw reached over 180,000 GitHub stars, making it reportedly the fastest-growing repository in the platform's history (2:06). Steinberger claims to have made 6,600 commits in January alone, running four to ten AI agents in parallel to keep up with development (21:28).

This is not Steinberger's first major project. He previously built PSPDFKit, a PDF framework used on reportedly one billion devices (4:37). After selling that company, he stepped away from programming for three years before rediscovering it through AI-assisted development.


Self-modifying software and the agentic loop

The technical claim at the heart of OpenClaw is that it is self-modifying software. The agent understands its own source code, knows which AI model it runs, and can rewrite parts of itself when a user requests changes (22:53). In Steinberger's words: "People talk about self-modifying software. I just built it."

This works through what is called an agentic loop: the cycle where an AI agent reads a task, takes action, observes the result, and decides what to do next. Unlike a traditional chatbot that only generates text, an agent in this loop can take real actions on a computer, such as reading files, installing software, running commands, or browsing the web.

Steinberger prefers the term "agentic engineering" over the more casual "vibe coding," which he describes as a slur (0:33). The distinction matters to him: agentic engineering involves deliberately designing systems for AI agents to navigate, while vibe coding is loose prompting without clear specifications. He claims to design his entire codebase for agents to read and modify, not for himself (1:12:38).

He also describes using voice input extensively to direct multiple agents simultaneously, to the point of temporarily losing his voice (1:15:44). The picture that emerges is closer to an orchestra conductor than a traditional programmer.


The security problem nobody has solved

The most significant tension in the conversation is the gap between OpenClaw's ambition and its security posture. An AI agent with system-level access to your computer can read your files, send messages on your behalf, execute code, and interact with online services. Fridman frames this clearly in his introduction: a powerful AI agent with system-level access is a security minefield (4:10).

Prompt injection remains unsolved

Steinberger acknowledges directly that prompt injection is "still an open problem industry-wide" (53:44). Prompt injection is an attack where someone embeds hidden instructions in content the AI reads, tricking it into following those instructions instead of the user's. For a chatbot, the worst case is a misleading response. For an agent with system access, the worst case is far more serious.

OpenClaw's skill system, where community-created abilities are defined in markdown files, presents an especially large attack surface. Steinberger describes partnering with VirusTotal (a Google security platform) to scan skills with AI, but concedes this "is not going to be perfect" (54:14).

Smarter models, higher stakes

Steinberger argues that smarter AI models are more resilient to prompt injection attacks (56:47). He warns against using cheap or weak models, pointing out that smaller models are "very gullible" and far easier to trick. His security documentation explicitly advises against using lightweight models like Haiku or local models for OpenClaw.

But Fridman raises an important counterpoint: as models get smarter and harder to trick, they also become more capable of causing damage when they are compromised. The attack surface shrinks, but the potential impact grows. Steinberger agrees this is "pretty much exactly what's going to happen" (57:32).

OpenClaw now offers sandboxing (running the agent in an isolated environment that limits its access) and allow lists (restricting which tools the agent can use). These are meaningful safeguards, but they also reduce the agent's usefulness. The core tension remains: the more access you give the agent, the more useful it becomes, and the more dangerous it is if compromised.


Agents will replace 80% of apps

Steinberger makes a sweeping prediction: AI agents will replace 80% of apps (2:54:04). His reasoning is that most apps are just interfaces between users and services. If an agent can talk to the service directly, through an API (a programming interface that lets software communicate with other software) or by literally clicking buttons on a screen, the app itself becomes unnecessary.

He gives a concrete example: instead of opening a food delivery app, you tell your agent to order food. The agent figures out whether to use an API or navigate the app's interface itself (2:55:57). He applies the same logic to his Sonos speakers, his home cameras, and even Gmail.

Fridman compares this to what the internet did to traditional businesses: a forced rethinking of how companies sell and make money (2:57:02). Steinberger agrees, suggesting that companies which fail to offer agent-facing APIs will simply be bypassed, with agents clicking through their websites instead.


The acquisition question

Despite being the fastest-growing open-source project in GitHub history, OpenClaw is currently losing money. Steinberger reports spending between $10,000 and $20,000 per month out of pocket, mostly on supporting the project's dependencies (2:21:54).

Both Meta and OpenAI have made acquisition offers, and Steinberger describes these two as the most interesting (2:22:30). His non-negotiable condition: OpenClaw must remain open source, potentially following a model similar to Chrome and Chromium, where an open-source core powers a commercial product (2:23:00).

The project had roughly 3,000 open pull requests at the time of recording (2:29:18), and Steinberger acknowledges he is "still mostly me who's pulling the project." The sustainability question is real: the world's fastest-growing open-source AI project is primarily maintained by one person who is losing money on it.


How to interpret these claims

Steinberger presents a compelling vision, but several aspects of the conversation deserve careful scrutiny before drawing conclusions.

The prototype narrative may be misleading

The "built in one hour" framing is powerful but potentially incomplete. Steinberger is a veteran software engineer who spent 13 years building PSPDFKit. The hour it took to wire WhatsApp to Claude Code was preceded by decades of engineering experience and months of experimentation with AI tools. The implication that anyone could replicate this trajectory deserves skepticism.

Security claims are aspirational

Steinberger acknowledges the security problems honestly, which is commendable. But the proposed mitigations, such as AI-powered skill scanning through VirusTotal, sandboxing, and allow lists, are described in terms of progress rather than solutions. Prompt injection remains fundamentally unsolved, and no timeline is offered for when it might be. For a tool that encourages users to grant system-level access, "we're making progress" is a significantly different claim than "this is safe to use."

The 80% prediction lacks evidence

The claim that agents will replace 80% of apps is presented as a personal estimate without supporting data, methodology, or timeframe. It may ultimately prove directionally correct, but the specific number appears to be intuition rather than analysis. The economic disruption this would cause is enormous, and the conversation does not seriously engage with the transition costs, job displacement, or regulatory responses such a shift would require.

Open source as a negotiating position

Steinberger's insistence that OpenClaw remain open source is presented as a principled stance. It is also worth noting that this position serves as a powerful negotiating tool with potential acquirers, since it limits what they can do with the project while maximizing community goodwill. The Chrome/Chromium model he references has drawn criticism over the years for Google using open-source goodwill to entrench a commercial product.


Practical implications

For developers and technical users

OpenClaw represents a real shift in how software gets built. Steinberger's approach of designing codebases for agents, running multiple AI agents in parallel, and using voice to direct them points toward a future where the developer's role changes from writing code to orchestrating agents. Anyone working in software development should pay attention to this shift, even if they are skeptical of the specific tool.

For everyday users

The promise of a personal AI agent that can handle tasks across your digital life is genuinely exciting. But the current version of OpenClaw is designed for technically skilled users who understand the risks. Steinberger himself noted that people asking "what's a CLI?" probably should not be using it yet (58:03). Until sandboxing and security mature significantly, caution is warranted.

For the AI industry

The acquisition interest from Meta and OpenAI signals that major companies see personal AI agents as a critical category. Whether OpenClaw itself becomes the dominant platform or simply proves the concept, the direction is clear: the industry is moving from models that generate text to agents that take action.


Glossary

TermDefinition
AI agentA program that can take actions on your behalf, not just generate text. It can read files, run code, send messages, and interact with services.
Agentic loopThe cycle where an AI agent reads a task, takes action, observes the result, and decides what to do next. This is what separates an agent from a chatbot.
Agentic engineeringA deliberate approach to AI-assisted development where systems are designed for agents to navigate and modify, as opposed to casual "vibe coding."
Prompt injectionAn attack where hidden instructions trick an AI into following them instead of the user's commands. A major unsolved security problem.
Self-modifying softwareSoftware that can read, understand, and rewrite its own source code. OpenClaw does this through the agentic loop.
SandboxAn isolated environment that limits what software can access. Reduces security risk but also reduces the agent's capabilities.
Soul.mdA configuration file that defines an OpenClaw agent's personality, behavior rules, and boundaries.
CLICommand-line interface. A text-based way to interact with software by typing commands, as opposed to clicking buttons in a graphical interface.
APIApplication Programming Interface. A way for software to communicate with other software, like a universal plug that lets different systems talk to each other.
Open sourceSoftware whose source code is publicly available for anyone to use, study, modify, and distribute.
MCPModel Context Protocol. A standard for connecting AI models to external tools and data sources.
ForkCreating an independent copy of a project's source code to develop separately.

Sources and resources