
Garry Tan rebuilt a startup in five days for $200

May 8, 2026 · 6 min read · 1,265 words
Tags: Claude Code, AI Agents, Vibe Coding, Open Source, OpenClaw
Garry Tan explains his AI workflow on Y Combinator's Lightcone podcast
Image: Screenshot from YouTube.
Source: YouTube
Published May 8, 2026
Y Combinator
Hosts: Jared Friedman, Dalton Caldwell
Guest: Garry Tan

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

Garry Tan didn't write a single line of code for thirteen years. Then he found Claude Code, and in five days for $200 he rebuilt a full web platform that originally took six people, a year and a half, and four million dollars. The numbers are striking enough on their own. But the philosophy behind it is worth taking away, and the defining question he asks at the end concerns everyone who uses AI: will you control your tools, or will they control you?

Rebuilt for the third time, this time for $200

Garry Tan is president and CEO of Y Combinator (the world's most prominent startup accelerator), but he started as a developer. In 2008 he built Posterous, a blogging platform that grew into one of the 200 most visited websites on the internet and was acquired by Twitter for around $20 million. Then thirteen years passed without a line of code.

In January 2026 he decided to build Posterous again, this time as the foundation for Garry's List, his new nonprofit platform for California politics. He opened Claude Code and started.

The first time Posterous was built: $4 million, six or seven people, a year and a half. The second time, with a co-founder and some funding: $100,000 and three months. The third time, with Claude Code and $200 (the price of a Claude Max subscription): five days.

The result wasn't just a blogging platform. It was a complete agentic publishing engine (a platform where AI agents do the work), with RAG search (see glossary), recursive web crawling, deep analysis of any topic, and automatically generated articles with citations. "For $5 or $10 in API calls it does the work of a real human being who would have sat with dozens of articles, read entire books on a subject, and annotated them."

Thin harness, fat skills

The central conceptual contribution of the episode is a phrase coined by YC partner Pete Koomen: thin harness, fat skills.

The harness is the thin code loop that keeps the system running: take user input, send it to an AI model, execute the response. It's relatively simple to build, and good tools already exist for it. Claude Code is one. Garry's view is that you should use what already exists and not spend time rebuilding this from scratch.

The skills are the opposite: the rich, detailed markdown prompts (text instructions) that tell the AI who it is, what to do, and what standards to hold. This is where human knowledge and taste make a difference. Garry compares it to writing a wedding planner's checklist: everything a human expert would do step by step, written down so the next person can pick it up.
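The split can be sketched in a few lines. This is an illustration of the idea, not Claude Code's actual loop: `call_model` is a hypothetical stand-in for whatever LLM API the harness wraps, and the skill file path is invented.

```python
from pathlib import Path

def load_skill(path):
    # The "fat skill": a long markdown prompt carrying role, steps,
    # and standards. All the domain knowledge and taste lives here.
    return Path(path).read_text()

def harness_step(skill_text, user_input, call_model):
    # The "thin harness": combine the skill with the user's input, send
    # it to a model, and return the response for execution. Nothing
    # domain-specific is hardcoded; swapping skills changes the behavior.
    return call_model(system=skill_text, prompt=user_input)
```

Because the harness knows nothing about the task, improving the system means editing markdown, not rewriting code.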

The problem many people run into is trying to hardcode things in software that should be in a prompt. "Code is brittle. It doesn't understand edge cases. It doesn't know who you are or what you want." A machine executes instructions without understanding them. Language models have something code lacks: they understand context. The engineering challenge is finding the right balance between what belongs in code and what belongs in a prompt.

Boiling the ocean: never skimp on tokens

Tokens are the unit of measure for text that AI models process. The more context you give a model, the more it costs. Most people try to economize.

Garry thinks this is the wrong instinct. He calls the philosophy tokenmaxxing: give the model everything it needs, not just what's "good enough." Use twenty sources instead of one. See what thirteen of them agree on and what the other seven dispute. "You don't have to settle for what we did when we were humans writing the code ourselves."

He compares it to San Francisco rent: it sounds expensive, but not doing it is more expensive. For founders, tokenmaxxing is like living inside the network rather than on the outskirts. You pay more, but you get back far more in return.

Tokens don't just buy text either. They buy time. Garry estimates he directed up to fifteen AI agents in parallel and shipped thirteen pull requests (finished code ready for review) in 48 hours. Normalized for logical lines of code (see glossary), he was working 400 times faster than in 2013, the last time he coded full-time. "I wasn't the one writing the code. I was directing fifteen agents at a time."
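The normalization he mentions can be illustrated with a rough, stdlib-only counter. This sketch handles Python-style `#` comments only; real LLOC tools also fold continued lines and block comments.

```python
def logical_lines(source):
    # Count logical lines of code: skip blank lines and comment-only
    # lines, so formatting and commentary don't inflate the total.
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count
```

A 400x speedup claim only means something against a measure like this, since raw line counts reward verbosity.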

The workshop: GStack, GBrain, and OpenClaw

Garry's workshop rests on three open tools, two of which he built himself. All are free and open source (see glossary).

GStack is a collection of 23 Claude Code skills (slash commands) that turn Claude Code into a virtual software team: a CEO role that asks the right product questions, a designer role, an engineering manager role, a QA role, and more. GStack had 70,000 GitHub stars at the time of recording.

The tool didn't start as a project. Garry simply noticed he was writing the same instructions to Claude Code over and over, gathered them in Apple Notes, and tested them. When he posted them publicly, a single post got 200,000 views. GStack happened.

GBrain is the personal AI brain he built on top of OpenClaw. It contains over 17,000 pages of his own emails, tweets, and documents, indexed with RAG so the AI can retrieve the right context when it needs it. Built in twelve days.
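The retrieval step of RAG can be sketched with a toy keyword-overlap scorer. Real systems typically use vector embeddings rather than word matching; this only shows the shape of "look up context before answering."

```python
def retrieve(query, documents, k=2):
    # Toy RAG retrieval: score each document by word overlap with the
    # query and return the top-k matches. The retrieved text would then
    # be passed to the model as grounding context.
    q_words = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(q_words & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]
```

The point of indexing 17,000 pages is exactly this: the model never sees the whole corpus at once, only the slices relevant to the current question.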

OpenClaw is an open-source agent with 247,000 GitHub stars, created by Peter Steinberger. Garry runs it as the AI engine underneath GBrain. He describes using it like this: "Using OpenClaw is like driving a Ferrari. Exhilarating, insane. It figures things out you would never think a machine could figure out. But then it's also like a Ferrari: you better be a mechanic. It will break down on the side of the road when you need it most, and you need to get out with your wrench and fix it yourself."

For hard problems, he uses OpenAI Codex as a complement: "Sometimes you need the 200 IQ nearly nonverbal CTO."

The inspiration to start came in part from Boris Cherny at Anthropic, who sat next to Garry during a Lightcone recording and mentioned that his team doesn't write a single line of code. That was the moment it clicked.

Will your AI tools control you, or will you control them?

The philosophical core of the episode isn't the productivity number. It's the question Garry asks toward the end.

The personal computer was the greatest liberating force in the history of technology because it gave individuals control over computing power that previously only companies had. We are entering the same shift with AI. The question isn't whether everyone will have access to powerful AI. The question is who controls it.

One version of the future: you have your own AI, with your own data, your own prompts, full transparency. Another version: you get an AI someone else has configured, with unknown algorithms and unknown business models behind it.

"Either you're the one writing your own prompts, or you're below the API layer for some PM who doesn't know you, doesn't understand your needs, and never will."

Garry isn't against AI tools you didn't build yourself — he uses them constantly. The point is that you should understand them well enough to shape them around you, not the other way around. If you don't write your own instructions, someone else writes them for you.

"You can have infinite time by borrowing the time from the machines."

Glossary

Harness: The thin code loop that takes user input, sends it to an AI model, and executes the response. Garry's view: use an existing harness (e.g. Claude Code) rather than building your own.

Skills: Markdown files with detailed instructions that tell an AI agent what its role is and what to do. This is where human knowledge makes a difference.

Tokenmaxxing: The philosophy of never skimping on AI tokens. Give the model all the context it needs rather than stopping at "good enough."

RAG: Short for retrieval-augmented generation. Means the AI looks up information in a knowledge base (e.g. your own documents) before responding, so answers are grounded in actual information.

Open source: Software whose source code is publicly available and free to use, modify, and share. GStack and GBrain are both open source under the MIT license.

Logical lines of code (LLOC): Lines of code normalized for comments, blank lines, and formatting. A fairer measure of actual productivity than raw line count.

Pull request (PR): A request to merge finished code into a project, ready for review. "13 PRs in 48 hours" means 13 separate changes were complete and ready to be approved.

Vibe coding: Coding where you describe what you want in plain language and the AI writes the actual code. You direct; the machine builds.

Plan mode: A Claude Code feature where the AI creates a detailed plan and waits for your approval before writing any code.
