
Greg Brockman on OpenAI's Superapp Bet and the Path to AGI

April 5, 2026 · 8 min read · 1,528 words
Tags: OpenAI · AI Agents · ChatGPT · Sam Altman · Codex · AI Infrastructure
Image: Greg Brockman speaking in studio on the Big Technology Podcast. Screenshot from YouTube.

Key insights

  • OpenAI is shutting down its most viral product, Sora, because video generation sits on a different technology branch than the GPT reasoning models the company is betting its future on.
  • The superapp vision turns ChatGPT from a chatbot into a unified operating layer for coding, browsing, and personal assistance, directly competing with Anthropic's Claude ecosystem.
  • Brockman puts AGI at 70-80% achieved, close enough to plan around but uncertain enough that the definition itself is still debated.
  • An automated AI researcher by fall 2026 means AI systems improving AI systems will move from theory to practice inside OpenAI.
Published April 1, 2026
Big Technology Podcast
Host: Alex Kantrowitz
Guest: Greg Brockman, OpenAI

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

Greg Brockman, President and co-founder of OpenAI, joins Alex Kantrowitz on the Big Technology Podcast for one of the most revealing conversations yet about where AI is headed. Brockman lays out a major strategic shift: OpenAI is pulling back from Sora video generation, unifying ChatGPT, Codex, and browsing into a single superapp, and racing to build an automated AI researcher by fall 2026. Along the way, he puts a number on AGI progress (70-80%), defends a $110 billion infrastructure bet, and confronts the growing public skepticism around AI.

Why OpenAI killed its most viral product

Sora was a sensation. OpenAI's video generation tool captured public imagination like few AI products before it. So why shut it down?

Brockman's answer comes down to what he calls a "different branch of the tech tree". The Sora models are built on diffusion-based architecture, a fundamentally separate approach from the GPT reasoning models that power ChatGPT. Pursuing both branches at once requires more compute than OpenAI has, and demand for existing products is already outstripping supply.

The company is not abandoning the research entirely. Sora's underlying work continues in the context of robotics, which Brockman describes as still being in a research phase. But for the products OpenAI plans to ship this year, the GPT reasoning tree wins. Image generation inside ChatGPT stays because it runs on the GPT architecture, not the diffusion model branch.

When asked whether OpenAI might miss something by not pursuing world models (the approach Google DeepMind CEO Demis Hassabis has said feels closest to AGI), Brockman acknowledged the risk directly: "Absolutely. You do have to make choices."

The superapp: one app to replace them all

So what is OpenAI building instead? A superapp that brings coding, browsing, and ChatGPT together in one place: a single application that combines multiple services in one interface.

The vision starts with ChatGPT becoming a personal AGI: an AI that knows you, remembers your interactions, connects to your email and calendar, and acts on your behalf. Codex, currently a tool for software engineers, expands to everyone. Brockman describes using it to configure laptop settings, noting that computers "were always supposed to contort to the human rather than me contort to them."

The superapp will ship in pieces over the coming months. The first step: making the Codex app more useful for general knowledge work, since OpenAI is already seeing organic adoption from non-developers internally. Someone on their communications team uses it hooked up to Slack and email to synthesize feedback.

Asked about Anthropic's Claude ecosystem (which already combines a chatbot, Claude Code, and collaborative features), Brockman admitted OpenAI was behind on usability 12-18 months ago. He says they have caught up, and that the real competitive advantage is unifying the AI layer so that thin application-specific UIs can sit on top of one general-purpose system.

"Spud" and the next wave of models

Brockman confirmed that OpenAI has completed a new pre-training run internally known as "Spud". Pre-training is the first phase of building an AI model, where it learns patterns from enormous amounts of data before being refined through reinforcement learning (a training method where AI improves by practicing tasks and learning from successes and mistakes).

This particular run represents roughly two years of accumulated research. Brockman describes it not as any single breakthrough but as a better foundation that makes every downstream step in the model pipeline more effective. A smarter base model learns faster during reinforcement learning and needs fewer corrections.
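The two-phase pipeline described here can be sketched in miniature. The snippet below is a toy illustration of how pre-training feeds reinforcement learning, not OpenAI's actual training code; every function name and number is invented for the example.

```python
# Toy sketch of the two training phases described above.
# All names and numbers are hypothetical, purely for illustration.

def pretrain(corpus):
    """Phase 1: learn broad patterns from a large corpus."""
    model = {"knowledge": 0.0}
    for document in corpus:
        model["knowledge"] += 0.1 * len(document)  # stand-in for learning
    return model

def reinforce(model, tasks):
    """Phase 2: refine by practicing tasks and learning from outcomes."""
    for task in tasks:
        if model["knowledge"] >= task["difficulty"]:
            model["knowledge"] += 0.5  # success reinforces the behavior
        else:
            model["knowledge"] += 0.1  # failures teach less per attempt
    return model

base = pretrain(["some text", "more text"])
tuned = reinforce(base, [{"difficulty": 1.0}, {"difficulty": 5.0}])
```

The point of the caricature is the dependency Brockman describes: a stronger base model clears more tasks during reinforcement, so the same second phase yields more improvement.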

What will users notice? Models that "bend to you" more. Less of the frustrating experience where you ask something and the AI does not quite get it. Brockman talks about "big model smell," the feeling users get when a model is genuinely smarter and follows instructions with less friction. Sam Altman, OpenAI's CEO, reportedly told staff to expect a very strong model within weeks.

The broader pattern matters more than any single release. Brockman frames OpenAI as running an "engine of progress" that accelerates with each cycle. As soon as one model ships, it becomes an early version of what follows.

AGI at 70-80%: close enough to plan for

NVIDIA CEO Jensen Huang recently claimed that AGI has already been achieved. Brockman takes a more measured position. "I'm basically like 70-80% there," he says, acknowledging that current AI is "jagged." It is superhuman at many tasks, like writing code, but still struggles with basic things humans do easily.

The key claim is about timeline, not definition. Brockman says it is "extremely clear" that within the next couple of years, AI will handle almost any intellectual task you do on a computer. The floor of capability will rise so high that the debate over whether it counts as AGI (Artificial General Intelligence, AI that can perform any intellectual task a human can) becomes secondary to the practical reality.

He points to a concrete inflection point: in December 2025, new model releases jumped from handling 20% of tasks to 80%. That shift forced people to retool their entire workflow around AI. Brockman describes his personal test prompt: building a website that once took him months. By December, the AI produced it in a single attempt.

The automated AI researcher

The most consequential part of the conversation may be about what comes after current models. OpenAI is building an automated AI researcher, expected this fall, that would perform the full end-to-end work of a research scientist "in silicon."

Brockman is careful to frame this as managed, not autonomous. He compares it to a junior researcher: left on its own too long, it will go down unproductive paths. But paired with a senior researcher who provides vision, reviews outputs, and sets direction, the system could massively accelerate model development.

This is the self-improvement loop that AI safety researchers have long discussed. The AI gets better, which helps make the next AI better, which accelerates the cycle. Brockman sees it as a core driver of what he calls "takeoff," the phase where AI moves from incremental progress to exponential momentum. He describes multiple tailwinds converging at once: better chips, a growing ecosystem of developers building on top of AI, and internal model improvements feeding back into training.
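The "takeoff" dynamic, where each generation helps build the next, can be caricatured as a compounding recurrence. The growth function and rate below are illustrative assumptions only, not a forecast of real capability.

```python
# Toy model of the self-improvement loop described above: each
# generation's capability feeds into building its successor.
# The 0.2 rate and the update rule are invented for illustration.

def next_generation(capability, research_speed=0.2):
    # A more capable system accelerates its successor's development.
    return capability * (1 + research_speed * capability)

cap = 1.0
trajectory = [cap]
for _ in range(5):
    cap = next_generation(cap)
    trajectory.append(cap)

# Each step's increment exceeds the last: faster-than-linear growth.
```

Under these toy assumptions the per-cycle gains keep widening, which is the qualitative shape of the feedback loop safety researchers worry about.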

OpenAI also recently brought Peter Steinberger, founder of OpenClaw (an open-source autonomous AI agent project), in-house. Brockman frames this as part of a broader push to figure out how agents slot into people's daily lives, calling it "an emerging very valuable skill in this new economy."

The $110 billion compute bet

OpenAI raised $110 billion in infrastructure funding earlier this year. Brockman frames compute not as a cost but as a revenue center, similar to hiring salespeople. "How much compute should we buy? All of it," he recalls telling his team after the ChatGPT launch. Every year since, demand has outpaced supply.

The math behind the bet rests on two claims. First, better pre-training makes all downstream improvements cheaper and more effective. Second, knowledge work is an enormous market that is just beginning to adopt AI. Enterprise customers are shifting from experimentation to dependence, and consumer subscriptions remain strong.

When asked about Anthropic CEO Dario Amodei's criticism that some companies are "YOLOing" on infrastructure spending, Brockman pushed back: "I just disagree. We've been very thoughtful." He argues that competitors realized late last year they needed more compute and found none available. OpenAI's early commitment, he says, is foresight rather than recklessness. On the bankruptcy risk that Amodei raised about his own company, Brockman suggested there are "more degrees of offramp" than a binary success-or-failure scenario.

Public trust and the perception gap

Brockman addresses a statistic that should concern every AI company: three times as many Americans expect AI to have a negative impact on society as expect a positive one. His diagnosis is that the industry has failed to show people personal utility rather than abstract economic growth.

He points to individual stories: a family that used ChatGPT to argue for an MRI that revealed their child's brain tumor, a friend who researched cancer treatments the AI helped identify. These real outcomes, he says, are not reaching the broader public narrative. The polling data shows a clear divide: people who have actually used AI tend to be positive. Those who have only heard about it are far more skeptical.

On data centers specifically, Brockman pushes back on concerns about water usage and energy costs. He claims OpenAI's Abilene facility uses about the same water as a single household per year, and the company has committed to not driving up local energy prices. He points to North Dakota, where he says a data center arrival actually lowered electricity rates by helping upgrade aging grid infrastructure.

His message to people who are scared of AI is simple: try the tools. He argues that the most negative sentiment comes from people who have heard about AI but never used it themselves. Actual users, even casual ones, tend to be positive. The challenge for OpenAI is closing that gap before public opinion hardens further.

Glossary

Superapp: A single application that combines multiple services. Here, ChatGPT, Codex, and browsing unified in one interface.
Pre-training: The first phase of building an AI model, where it learns patterns from massive amounts of data before refinement.
Reinforcement learning: A training method where AI improves by trying tasks repeatedly and learning from its successes and mistakes.
AGI (Artificial General Intelligence): AI that can perform any intellectual task a human can. OpenAI's stated goal.
Tech tree: A branching path of possible technologies to develop. Choosing one path means fewer resources for others.
