
How Emergent Built 7 Million Apps Without Engineers

March 16, 2026 · 7 min read · 1,467 words
AI · vibe coding · Emergent · no-code platforms · SaaS disruption
Mukund and Madhav Jha, founders of Emergent, on the Y Combinator Lightcone podcast
Image: Screenshot from YouTube.

Key insights

  • Emergent started by solving verification, not code generation. That foundation lets their AI agents detect and fix their own errors, running longer without human oversight, which competitors who began with front-end prototyping struggle to replicate.
  • Every new AI model generation resets the playing field. Starting second let Emergent learn from Lovable and Bolt's weaknesses and build a broader architecture from day one.
  • The cost drop from $500,000 to $5,000 is only half the story. When the people who understand the problem build their own software, the 'lost in translation' problem between business owners and developers disappears entirely.
  • SaaS faces pressure from two directions at once: AI agents that consume existing workflows directly, and custom-built software that can replace off-the-shelf tools like Asana and Jira.
Source: YouTube
Published March 16, 2026
Y Combinator
Hosts: Jared Friedman, Diana Hu, Harj Taggar
Guests: Mukund Jha, Madhav Jha (Emergent)

This is an AI-generated summary. The source video includes demos, visuals and context not covered here. Watch the video → · How our articles are made →

In Brief

Mukund Jha and Madhav Jha, twin brothers and co-founders of Emergent, joined Y Combinator's Lightcone podcast to explain how their platform went from a coding-agent research project to hosting 7 million apps in eight months. The striking detail: 80% of their users have zero programming knowledge. Emergent went through YC in summer 2024, and the founders argue that their verification-first approach produces production-ready software, not just prototypes. The conversation raises big questions about what happens to SaaS (Software as a Service, cloud software you pay a subscription for) when anyone can build custom tools for a fraction of the cost.


What Emergent promises

Emergent's founders argue that building software is no longer the hard part. The real bottleneck has always been verification: making sure the code actually works before shipping it. Mukund Jha co-founded Dunzo, an Indian delivery service for food and groceries (similar to DoorDash or Deliveroo), where he ran a team of 300 engineers. Software testing was the biggest obstacle to shipping fast, and solving that problem turned out to be the foundation for everything Emergent does today.

Their claim goes further: the coding itself is only 20% of the job. The other 80% is everything that happens around it. Code reviews, automated testing, debugging, deployment, security, hosting. Emergent built a platform that handles the entire pipeline, which is why non-technical users can describe what they want and get a working, deployed application.
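The generate-test-fix cycle the founders describe can be sketched as a simple loop: write code, verify it automatically, and feed failures back into the next attempt. This is a minimal illustration, not Emergent's actual pipeline; every function name here is a hypothetical stand-in.

```python
def verification_loop(spec, generate, run_tests, max_rounds=5):
    """Keep regenerating until the test suite passes or we give up."""
    feedback = None
    for _ in range(max_rounds):
        code = generate(spec, feedback)     # agent writes or repairs code
        passed, failures = run_tests(code)  # automated verification step
        if passed:
            return code                     # ship only verified code
        feedback = failures                 # errors feed the next attempt
    raise RuntimeError("could not produce passing code")

# Toy stand-ins so the loop is runnable end to end.
def toy_generate(spec, feedback):
    return "fixed" if feedback else "buggy"

def toy_run_tests(code):
    return (code == "fixed", [] if code == "fixed" else ["assertion failed"])

print(verification_loop("build a todo app", toy_generate, toy_run_tests))
# prints "fixed" after one repair round
```

The key property is that failure output is structured input to the next generation step, which is what lets an agent run longer without a human in the loop.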

Supporting evidence

The numbers are significant, though they come from the company itself:

  • Apps built: 7 million in 8 months
  • Non-technical users: 80% have zero coding experience
  • Global reach: 190+ countries, 70-80% in the US and Europe
  • Engineering team: 12 people
  • Performance boost: 20-30% above base models using their testing and verification system
  • Apps with built-in AI agents: 20% of all apps built on the platform

The verification-first approach

Most vibe coding platforms (tools where you describe what you want in plain language and AI builds it) started from the user-facing side. Lovable, Bolt, and similar tools focused on front-end prototyping, making it easy to get a good-looking interface quickly. Emergent went the other direction. They started by building coding agents that topped SWE-bench, a benchmark measuring how well AI can solve real software engineering problems. Only then did they package that power for non-technical users.

This matters because the approach is hard to reverse. As Mukund explains, competitors who started with front-end prototyping made architectural choices that are difficult to undo. You can simplify a powerful system for beginners, but bolting production-grade infrastructure onto a prototyping tool is a much harder problem.

How it works in practice

Emergent built its own infrastructure from the ground up rather than relying on outside providers. The reasoning: if the agent builds code in the same environment where it gets deployed, fewer things break when the app goes live.

Their multi-agent architecture splits work across specialized agents. A main "driving" agent handles the overall task, while sub-agents take care of testing, design research, and API integrations. This keeps each agent's context window (the amount of text an AI model can process at once) manageable, even for complex applications.
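A rough sketch of that driver/sub-agent split, assuming each specialist keeps a private context window so no single agent's history grows unbounded. The roles and message shapes below are illustrative, not Emergent's internal design.

```python
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    role: str                                    # e.g. "testing", "design"
    context: list = field(default_factory=list)  # this agent's private context

    def handle(self, task):
        self.context.append(task)  # only this specialist's history grows
        return f"{self.role} done: {task}"

class DrivingAgent:
    """Routes subtasks to specialists and collects the results."""
    def __init__(self):
        roles = ("testing", "design", "integrations")
        self.specialists = {r: SubAgent(r) for r in roles}

    def run(self, subtasks):
        return [self.specialists[role].handle(task) for role, task in subtasks]

driver = DrivingAgent()
results = driver.run([
    ("design", "research UI patterns"),
    ("integrations", "wire up calendar API"),
    ("testing", "run end-to-end suite"),
])
for line in results:
    print(line)
```

Because each sub-agent only ever sees its own tasks, the driver can coordinate a large build without any single context window holding the whole application.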

The platform also has a form of long-term memory. Agents learn not just from a single session but across sessions. If the agent struggled with a calendar integration three weeks ago, that solved problem feeds back into its knowledge base through an automated verification process. Emergent calls this continual learning, a system where the AI improves over time from its own past experiences.
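The cross-session memory described above can be pictured as a store that only accepts entries after verification, then serves them in later sessions. The storage scheme here is an assumption for illustration, not a description of Emergent's system.

```python
class ContinualMemory:
    """Cross-session store: a solution is kept only once it is verified."""

    def __init__(self):
        self.solutions = {}  # problem signature -> verified fix

    def record(self, problem, solution, verified):
        if verified:  # gate writes on the automated verification step
            self.solutions[problem] = solution

    def recall(self, problem):
        return self.solutions.get(problem)  # None if never solved before

memory = ContinualMemory()

# Session 1, weeks ago: the agent struggles, then solves the integration.
memory.record("calendar integration", "use OAuth refresh tokens", verified=True)

# Session 2, today: the earlier fix is available immediately.
print(memory.recall("calendar integration"))
# prints "use OAuth refresh tokens"
```

Gating writes on verification is the important detail: unverified guesses never pollute the knowledge base, so recalled solutions can be trusted in future sessions.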


The "lost in translation" problem

The founders highlight several user stories that illustrate a point beyond cost savings. A clinical psychologist from Alaska named Christy wanted an app combining sports psychology with equestrian coaching. She looked everywhere, found nothing that fit, tried hiring a dev shop in Nova Scotia, and gave up on the price. She built the app herself on Emergent. It's now on the app store as "Equine" with hundreds of users.

A business developer from Norway built a CRM (customer relationship management system) for lawyers after selling his previous company. He describes himself as the sole builder on his team. As Madhav puts it, the real breakthrough is that nothing gets lost in translation when the person who understands the problem builds directly, without a developer misunderstanding the requirements.

Emergent itself practices what it preaches. A quality assurance engineer on the team built an Asana replacement that the entire company now uses. It started as curiosity, with the first prompt being "clone Jira." The tool is fully customized to how Emergent works (they ship three times a day) and saves $3,000 to $4,000 a month in subscriptions.


Opposing perspectives

The moat question

When models from Anthropic, OpenAI, and Google keep getting more powerful, what stops them from building a "Claude Code for non-technical users" and eating Emergent's market? Mukund's answer is that understanding customer needs and building close to them is what matters, the same fundamentals that have always applied to startups. But that argument works for any startup in any era. The question is whether a company with a vastly larger research budget and an existing user base could replicate Emergent's approach faster than Emergent can stay ahead.

Self-reported metrics

All of Emergent's numbers come from Emergent. Seven million apps sounds impressive, but there is no independent breakdown of how many are actively used, how many generate revenue, or how many were abandoned after a first prompt. The $50 million in annual recurring revenue reported by outside sources has not been independently audited. The $500,000-to-$5,000 cost comparison assumes traditional dev shop pricing, which varies enormously.

Second-mover advantage has limits

The founders argue that being a second mover in AI is an advantage because each new model generation resets the playing field. That logic holds for now, but it also means a third or fourth mover could leapfrog Emergent just as easily. If the advantage is "we learn from what came before," every new entrant has that same advantage.


How to interpret these claims

The Emergent story is compelling, but the interview happens on a YC podcast where the hosts have a direct interest in the company's success. That context does not make the claims false, but it does explain why tough follow-up questions are largely absent.

What to watch for

The quality gap between prototypes and production software is real and well-documented across the vibe coding space. Emergent's verification-first architecture is a plausible explanation for why their users report better outcomes. But "better than Lovable" is a different claim from "production-ready for mission-critical software," and the interview does not clearly distinguish between the two.

Jevons' paradox at work

One of the most interesting moments comes when the hosts observe that more powerful tools are not reducing work at YC. Instead, people are working more hours and shipping more software per week. Mukund calls this Jevons' paradox, a concept from economics where making something more efficient leads to more total use, not less. Named after economist William Stanley Jevons, the idea was originally about coal consumption, but it maps well onto software development. If building software gets 100 times cheaper, the world does not build the same amount of software for less money. It builds 100 times more software.

This framing directly challenges the "AI will destroy jobs" narrative. If demand for software is essentially unlimited, cheaper production expands the market rather than shrinking the workforce.


Practical implications

For non-technical founders

Emergent and similar platforms make it possible to build and test a business idea without hiring developers. The cost of a first version has dropped from hundreds of thousands to a few thousand dollars. But "possible to build" is not the same as "easy to maintain." Anyone using these tools should plan for what happens when the app needs updates, handles real user data, or scales beyond a prototype.

For SaaS companies

The threat is coming from two directions. First, AI agents are starting to consume SaaS workflows directly, bypassing the user interface entirely. Second, domain experts are building custom tools that fit their specific needs better than generic software ever could. SaaS companies that cannot pivot toward agent-first architectures or offer deep customization may find their subscription revenue under pressure.

For the AI industry

Emergent's model-agnostic approach (using different foundation models for different strengths, like Opus for long-horizon tasks and Gemini for front-end work) suggests that the models themselves are becoming interchangeable. If that is true, the long-term value may sit in the platform layer, not the model layer. Open-source models trailing three to six months behind frontier models would accelerate this trend.
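Model-agnostic routing of this kind reduces to a lookup from task type to model. The table below mirrors the two examples in the text (Opus for long-horizon tasks, Gemini for front-end work); the model identifiers and fallback are illustrative, and a real dispatcher would call each provider's SDK.

```python
# Hypothetical task-to-model routing table; the identifiers are
# illustrative, not Emergent's actual configuration.
ROUTING_TABLE = {
    "long_horizon": "claude-opus",  # planning, multi-step builds
    "frontend": "gemini",           # UI generation
}
DEFAULT_MODEL = "any-frontier-model"  # interchangeable fallback

def pick_model(task_type):
    """Route a task to whichever model is strongest for it."""
    return ROUTING_TABLE.get(task_type, DEFAULT_MODEL)

print(pick_model("frontend"))      # prints "gemini"
print(pick_model("long_horizon"))  # prints "claude-opus"
print(pick_model("testing"))       # prints "any-frontier-model"
```

If models really are becoming interchangeable, the routing table is the only piece that needs updating when a stronger model appears, which is exactly where the platform-layer value would sit.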


Glossary

  • Vibe coding: Building software by describing what you want in plain language, without writing code yourself.
  • Multi-agent architecture: A system where multiple AI agents work together, each handling different subtasks like testing, design, or deployment.
  • SWE-bench: A benchmark that measures how well AI coding agents solve real software engineering problems from open-source projects.
  • Verification loop: An automated system that checks whether the agent's code actually works before moving on to the next step.
  • Context window: The amount of text and code an AI model can process at one time. Larger windows allow working with bigger codebases.
  • Test-time compute: Using extra computing power when the AI model runs (rather than during training) to improve results.
  • Jevons' paradox: When making something more efficient leads to more total use, not less. Named after economist William Stanley Jevons.
  • SaaS (Software as a Service): Cloud software you pay a monthly subscription for, like Asana, Slack, or Salesforce.
  • Agent swarm: Many AI agents working together on a single task at the same time, coordinated by an overseeing agent.
  • Continual learning: An AI system that improves over time by learning from its own past sessions and experiences.
  • Fine-tuning: Training an existing AI model on specialized data to make it better at specific tasks.
  • CRM (Customer Relationship Management): Software for tracking interactions with customers, managing sales pipelines, and organizing business contacts.
