
Hassabis: AGI Is 10x the Industrial Revolution

April 9, 2026 · 7 min read · 1,377 words
Google DeepMind · AI Research · AI Regulation · AI in Healthcare
Demis Hassabis speaking with Harry Stebbings on the 20VC podcast
Image: Screenshot from YouTube.

Key insights

  • Hassabis frames AGI as '10x the Industrial Revolution at 10x the speed,' unfolding over a decade instead of a century. Every institution from education to labor policy needs to adapt at unprecedented speed.
  • The real gap between current AI and general intelligence is not scale but missing capabilities: continual learning, consistent reasoning, and long-term planning. Bigger models alone won't close it.
  • Labs that can invent new algorithms will pull away from the rest as existing ideas get 'wrung out.' The frontier gap may widen, not narrow, challenging the popular story that AI is becoming a commodity.
  • Hassabis wants AI drugs to eventually prove reliable enough that regulators allow skipping steps like animal testing. A Nobel laureate pushing to reshape drug approval carries real weight.
Published April 7, 2026
Podcast: 20VC with Harry Stebbings
Host: Harry Stebbings
Guest: Demis Hassabis, Google DeepMind

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

Demis Hassabis, CEO of Google DeepMind and 2024 Nobel laureate in Chemistry, sat down with 20VC host Harry Stebbings for a wide-ranging 32-minute conversation about where AI stands, where it is heading, and what worries him most. Hassabis gave a concrete timeline: there is a very good chance AGI arrives within the next 5 years. He described it as "10 times the Industrial Revolution at 10 times the speed" — bigger than the internet, mobile, and every previous technological leap combined. He also talked about drug discovery, safety regulation, energy, and what he sees as the philosophical questions everyone is ignoring.

A 5-Year Window

Harry Stebbings opened by asking the question most people have: how close is AGI — artificial general intelligence, an AI that can do any mental task a human can?

Hassabis did not dodge it. "There's a very good chance of it being within the next 5 years." He added that this is not much of a revision from DeepMind's original estimate. When the lab was founded in 2010, co-founder Shane Legg published blog posts predicting AGI in roughly 20 years. That prediction looks to be roughly on track.

That framing matters. This is not a hype-cycle estimate. It is a 15-year-old prediction that has not been pushed back, and it is now closing in on its target.

What Is Actually Missing

Compute (the raw processing power needed to train and run AI) remains the biggest bottleneck, and not just for building bigger models. Hassabis pointed out that researchers also need massive compute just to run experiments. "The cloud is our workbench," he said. If you have a promising new idea, you have to test it at real scale, or the results don't transfer to production systems.

Scaling laws describe the pattern where making AI models bigger tends to make them smarter. Hassabis pushed back on the idea that we have hit a wall. The exponential gains of early scaling have slowed, but "the returns are still very substantial." The question is what comes next.
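The pattern Hassabis is describing has a standard empirical form in the scaling-law research literature; one common parameterization (from the "Chinchilla" analysis by Hoffmann et al., not a figure from the interview) writes a model's loss as a power law in its size and training data:

```latex
L(N, D) \approx E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Here $L$ is the model's loss (lower is smarter), $N$ is the number of parameters, $D$ is the amount of training data, $E$ is an irreducible floor, and $A$, $B$, $\alpha$, $\beta$ are fitted constants (the fitted exponents come out to roughly 0.3). Because $N$ and $D$ enter through small negative powers, each tenfold increase in scale buys a smaller absolute drop in loss toward the floor, which is what "slowed but still very substantial" returns means in practice.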

And there is a lot missing. Hassabis listed four capabilities that current AI still lacks:

  • Continual learning. Today's AI systems stop learning the moment training ends. They cannot absorb new information the way humans do every day. The brain does this elegantly, probably through something like sleep-based memory consolidation. AI researchers have not cracked the equivalent.
  • Better memory systems. Long context windows (the amount of text an AI can "hold in mind" at once) are a blunt workaround, not a real solution. Smarter architectures are needed.
  • Long-term planning. Current systems struggle with goals that unfold over months or years.
  • Consistency. Hassabis calls today's AI systems "jagged intelligences." They can be brilliant when a question is framed one way, and fail at the same task if framed slightly differently. A genuinely general intelligence should not work like that.

The Race at the Frontier

On whether AI capabilities are becoming a commodity (whether all models will soon be roughly equal), Hassabis disagreed. He claimed that about 90% of the breakthroughs underpinning the modern AI industry came from Google Brain, Google Research, or DeepMind. AlphaGo, reinforcement learning, and the transformer architecture that powers almost every major AI model today all came from Google's family of labs.

His argument: as the current set of ideas gets "wrung out," the labs that can invent genuinely new algorithms will pull further ahead. Algorithmic invention, not raw scale, will be the differentiator. Open source models, he said, tend to trail the frontier by about six months, enough time for the research community to reverse-engineer what the frontier labs figured out. DeepMind also maintains its own open model series called Gemma, aimed at developers, academics, and edge computing use cases where running a smaller, efficient model makes sense.

Hassabis also weighed in on whether large language models (LLMs, the family of AI behind ChatGPT and Gemini) will eventually be replaced by something different. He put the odds at roughly 50/50 that some additional breakthrough beyond LLMs may still be needed. But he does not think LLMs will be discarded. Future AGI systems will likely be built on top of them, not instead of them.

Drug Discovery and a Golden Age of Science

One of Hassabis's clearest personal motivations is applying AI to science and medicine. The AlphaFold project solved a 50-year-old problem in biology (predicting the three-dimensional structure of proteins) and earned the 2024 Nobel Prize in Chemistry. Off the back of that, DeepMind spun out a separate company called Isomorphic Labs to tackle drug discovery.

The goal is to automate drug design: figuring out what chemical compounds might work against a disease, checking whether they are toxic, and verifying the other properties drugs need to be safe. Hassabis estimated this "drug design engine" could be ready within 5 to 10 years.

But the harder problem is clinical trials, which currently take many years regardless of how good the drug is. His long-term vision: once enough AI-designed drugs make it through the full process and regulators can back-test the models' predictions, it might become possible to skip certain steps, including, eventually, animal testing. Build enough trust in the models, and the regulatory timeline compresses. That is a two-step process: solve drug design first, then earn regulatory trust on top of that.

He described this as the beginning of "a new golden age of scientific discovery."

Safety and the Case for an International Body

Hassabis quoted Stephen Hawking without hedging: "We must get it right because we might not get another chance."

He identified two distinct risks. The first is misuse: these are dual-use technologies, and the same AI that accelerates medicine can be repurposed by bad actors. The second is a technical problem: as AI systems become more autonomous and capable, can they be kept within the guardrails humans set? This concern does not apply to today's systems, he said, but it will within a year or two.

His preferred solution, if he had a magic wand: an international regulatory body modeled on the International Atomic Energy Agency. National AI safety institutes, like the one set up in the UK under Prime Minister Rishi Sunak, would feed into it. These bodies would independently audit AI systems against agreed benchmarks, checking for dangerous properties like the ability to deceive, and issue something like a quality certification. Companies and consumers could then safely build on models that had passed.

He acknowledged the timing is difficult: we are building arguably the most consequential technology in history at precisely the moment when international coordination is at its weakest.

Jobs, Energy, and the Industrial Revolution Comparison

On labor displacement, Hassabis acknowledged the concern but took the long historical view. Every major technological revolution has destroyed old jobs and created new ones, and he expects the same pattern here. But he also sees a crucial difference in scale. "I sometimes quantify the coming of AGI as 10 times the Industrial Revolution at 10 times the speed." The Industrial Revolution unfolded over a century. AGI's equivalent disruption could unfold over a decade.

The Industrial Revolution was worth having, he said. Child mortality stood at roughly 40% before it. We would not want to have avoided it. But we could have handled the downsides better. His hope is that this time around we do better, through things like pension funds and sovereign wealth funds holding stakes in the major AI companies so the productivity gains are more broadly distributed.

On energy, he made a counterintuitive argument: AI will ultimately pay for its own energy costs many times over. Optimizing national power grids alone could yield 30 to 40% efficiency gains. Add climate modeling, materials science, and the potential to accelerate fusion energy (DeepMind is partnering with Commonwealth Fusion Systems on this), and the energy picture looks very different in the 5-to-10-year window.

He also made an observation about the current moment that cuts against the hype: AI is simultaneously overhyped in the short term and still deeply underappreciated in the long term. The next year will probably disappoint relative to the noise. The next decade will likely exceed what most people are imagining.

Glossary

  • AGI (Artificial General Intelligence): An AI system that can perform any cognitive task a human mind can, not just one specific skill but all of them.
  • Scaling laws: The pattern where making AI models bigger (more data, more computing power) makes them smarter, with diminishing returns over time.
  • Continual learning: The ability to keep learning new things after training is finished, the way humans absorb new information every day.
  • Foundation model: A large AI model trained on massive amounts of data that serves as the base for many different applications.
  • Jagged intelligence: When an AI excels at a task framed one way but fails at the same task framed differently; a sign of brittle, uneven capability.
  • LLM (Large Language Model): A type of AI trained on vast amounts of text to understand and generate language; the technology behind ChatGPT, Gemini, and similar systems.

