OpenClaw at TED: Peter Steinberger Lets the Lobster Loose

This is an AI-generated summary. The source video may include demos, visuals and additional context.
In Brief
A burnt-out founder spent three years trying therapy, moving countries, and waking up every morning with no reason to get out of bed. Then Peter Steinberger tried an AI coding agent, and 18 months later he walked onto the TED stage with what Jensen Huang (NVIDIA) calls the operating system for personal AI.
The "holy shit" moment
In early 2025, Steinberger sat down to test the new AI coding agents. What he calls a "holy shit moment" came quickly. Boilerplate, plumbing, all the boring parts of software development: AI could do the whole thing. The bottleneck was no longer typing. It was thinking. And thinking was what he had done for 25 years.
In a few months he built 44 projects. The last one was a WhatsApp bot running on his own computer, so he could talk to his code through the apps he already used. That bot came with him to Marrakech.
Marrakech and the voice message
On a trip to Marrakech, Steinberger was testing the WhatsApp bot he had built for himself. It already handled text and images, but he had never added voice support. When he sent a voice message anyway, the typing indicator appeared and a reply came back nine seconds later.
When he asked the agent how it had done it, the walk-through was a chain of things he had not coded: inspect the file, detect it was audio, convert to a standard format, find an OpenAI key lying around, send the clip off for transcription, return the answer. Every step was improvised on the spot.
This is the distinction the talk keeps circling back to. A chatbot takes a prompt, produces a response, and stops if it hits an obstacle. An agent hits the obstacle and tries to go around it.
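The improvised chain can be sketched as a short routine. Everything below (the function names, the `tools` dict, the stub implementations) is a hypothetical illustration of the steps the agent described, not OpenClaw's actual code:

```python
# Hypothetical sketch of the improvised voice-message pipeline: inspect
# the file, detect that it is audio, hand it off for transcription,
# answer the text. Names here are illustrative assumptions.
import mimetypes

def handle_message(path, tools):
    kind, _ = mimetypes.guess_type(path)    # inspect the file
    if kind and kind.startswith("audio/"):  # detect that it is audio
        text = tools["transcribe"](path)    # send the clip off for transcription
        return tools["answer"](text)        # reply to the transcript
    return tools["answer"](path)            # text and images: already supported

# Stub tools so the sketch runs without any API key.
tools = {
    "transcribe": lambda p: f"transcript of {p}",
    "answer": lambda t: f"reply to: {t}",
}
print(handle_message("note.mp3", tools))   # takes the audio branch
print(handle_message("photo.png", tools))  # takes the existing path
```

The point of the sketch is the branch the author never wrote: the agent assembled the audio path on its own instead of stopping at an unsupported file type.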
A public Discord, 800 messages before breakfast
The second big moment was a mistake. Steinberger put the agent in a public Discord server and invited random people, stayed up watching strangers play with it, then went to bed. What he had forgotten was that the agent was set to auto-restart.
By morning there were over 800 messages waiting. He read every one before doing anything else, just to check whether the agent had leaked anything private. Nothing had, but something could have.
That was the moment it went viral. The project now called OpenClaw has a lobster mascot, its own conferences, and growth Steinberger's friend described as "not hockey stick, this is stripper pole." A graph pointing straight up.
Success came packaged with a trademark complaint from the AI company Anthropic, which forced a rename, an attempt to push him away from the lobster mascot, and a cutoff from the model most of his users preferred. The project was originally called Clawdbot (from Claude + claw), became Moltbot after the January 2026 complaint, and landed on OpenClaw three days later because Moltbot never rolled off the tongue.
Gerhard and the beer machine
At ClawCon Vienna, Steinberger met Stefan and his 60-year-old father Gerhard, a beer sommelier (a certified beer expert) who has never written a line of code. They had connected OpenClaw to their brewing rig over Bluetooth, sent it one prompt, and let the agent run the 90-minute brew: temperature ramps, hop additions, the whole process. Then they asked the agent what to do with all that beer, and it suggested a website. A few prompts later they had payments and a real product.
Almost all of it was done from a phone. This is the hinge of the talk. The point is not that Gerhard now has a small brewery, but that the gap between "I have an idea" and "I have a product" has collapsed for someone with no software background.
China's lobster paradox
In China, installing OpenClaw is called "raising lobsters." Thousands lined up at the Tencent office in Shenzhen to get their lobster installed, and the city hands out subsidies to people running businesses on the agent.
Steinberger met a Chinese entrepreneur who tracked each employee's daily OpenClaw task. The rule is simple: one automated task per day. Miss too many days and you are fired. At other companies, installing OpenClaw on a work machine is itself grounds for dismissal. Fired for using it, fired for not using it.
The heartbeat and the future
By default the agent waits for a prompt. Steinberger added a feature he calls "heartbeat" that wakes the agent on a timer to check email, check the calendar, follow up on loose ends. His first test prompt was "Surprise me," meaning the agent now had permission to act without being asked.
"And yes, that's kind of as scary as it sounds," he says. "No large company would ship something like that. But I'm just a random builder from Austria, with no legal department."
The point isn't that Austria lacks lawyers, but that a big company's legal team would block an agent that acts on its own long before it reached users. He has no such brake.
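A heartbeat as described reduces to a timer loop around the agent. This is a minimal sketch under stated assumptions: `agent_tick` stands in for the agent's check-email, check-calendar, follow-up pass, and none of these names come from OpenClaw itself:

```python
# Minimal heartbeat sketch: wake the agent on a schedule instead of
# waiting for a prompt. `agent_tick` is a hypothetical stand-in for
# whatever the agent does on each wake-up.
import time

def heartbeat(agent_tick, interval_s=3600, max_beats=None):
    """Call agent_tick() every interval_s seconds.

    max_beats=None runs forever; a finite value stops after that
    many wake-ups (useful for testing).
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        agent_tick()
        beats += 1
        if max_beats is None or beats < max_beats:
            time.sleep(interval_s)  # wait until the next scheduled beat
    return beats
```

In practice a real deployment would use the system scheduler (cron, launchd) rather than a sleeping process, but the shape is the same: the timer, not the user, initiates the agent's turn.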
His reasoning for sharing it is simple: he built the sandbox for himself but released it as open source so others could play. From the stage he sketches a future of multiple specialized agents per person (one for work, a personal claw, maybe one for health), all collaborating securely. In meetings he imagines bidirectional models where one sub-agent fact-checks a statistic as the discussion continues and another sends a follow-up email before the meeting ends. To make that openness permanent, he announces OpenClaw Foundation from the stage: a nonprofit, open source, forever.
The real shift
For Steinberger it is not about the technology, but about the access. OpenClaw moved AI from something scary and nebulous into something fun, useful, and maybe a bit weird: lobsters, headbands, beer businesses. What we need in the future is more people spending more time with AI, he says, to understand how powerful and transformative the technology really is.
At ClawCon New York, thousands gathered to discuss what their lobster did that week. Gerhard and his beer machine. The vet in Shenzhen who automated their groceries. The teenager in São Paulo who built a tutoring business on OpenClaw. None of them are programmers. All of them are builders.
That is the real transformation. It is not the technology, it is the access. Agents change who can build things, and that door is not closing again. When you can prompt a prototype into existence in an hour, anything is possible. The next breakthrough can come from anyone, any country, any cafe.
He closes the circle on his own opening: even a burnt-out founder staring at the screen, wondering if his spark is gone, can do something like that. The spark is not gone. It is just waiting.
"The lobster is loose, and it's not going back into the tank."
Anderson's pushback
When Steinberger finished, TED curator Chris Anderson came on stage for a direct exchange. He told Steinberger, with love, that he found the talk genuinely frightening, and that if a Pandora's Box movie needed a star, Steinberger could be cast. The concern was that AI researchers usually frame their work around safety precautions, while Steinberger seems to take glee in pushing things out and watching what happens.
Steinberger's reply had two parts. On risk, the answer is practical: people are not running agents in public Discords anymore, they give them their own machines (often a Mac mini), and sandboxing keeps the blast radius small. On the bigger question, his bet is that more people using agents leads to faster collective learning about what goes wrong.
Glossary
| Term | Definition |
|---|---|
| AI agent | A program that takes a goal, decides which tools to use, and keeps going until it either succeeds or concludes it cannot. Different from a chatbot, which replies once and stops. |
| Heartbeat | An internal timer that wakes an agent on a schedule instead of waiting for a prompt. Lets the agent act on its own initiative. |
| Sub-agent | A secondary agent spun up by a main agent to handle one specific task, such as fact-checking a statistic mid-meeting. |
| Sandbox | An isolated environment, often a separate computer, where an agent can only access what is inside that environment. Limits the damage if something goes wrong. |
Sources and resources
Want to go deeper? Watch the full video on YouTube.