
Altman Warns Washington: Superintelligence Is Coming

April 6, 2026 · 4 min read · 854 words
OpenAI · Sam Altman · AI Regulation · AI Security
Sam Altman interviewed by Mike Allen of Axios on superintelligence and Washington preparedness
Image: Screenshot from YouTube.

Key insights

  • OpenAI positions itself as the 'educator' for policymakers. The company building the technology is also framing the policy conversation, and the one who defines the problem often shapes the solution.
  • Even staunch free-market Republicans privately admit the balance between labor and capital will break if AI develops the way they expect. The political consensus on AI may shift faster than the public debate suggests.
  • Altman calls it 'a sad thing' that government can't lead AGI development the way it led Apollo or the Manhattan Project. A rare admission from the CEO of the world's leading AI company.
  • Altman reframes AI safety from a company responsibility to a societal resilience challenge, broadening who needs to act beyond just the AI industry.
Source: Axios
Published April 6, 2026
Host: Mike Allen (Axios)
Guest: Sam Altman (OpenAI)

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

Sam Altman sat down with Axios co-founder Mike Allen to discuss OpenAI's new policy blueprint, "Industrial Policy for the Intelligence Age," and why he believes Washington is not prepared for what's coming. Altman warns that a world-shaking cyber attack is possible within the year, that bio-risks from open-source AI models are no longer theoretical, and that the next generation of AI models will represent a step-change that society hasn't fully reckoned with yet.

The urgency argument

Altman opened by framing OpenAI's role as something like a scout reporting from the frontier. He said his team feels "a sense of urgency" and wants to see the debate on these issues "start to happen with seriousness." The blueprint isn't meant to be a finished policy agenda. The goal is to put ideas into the conversation early, the same way OpenAI's earlier predictions about AI capabilities, once dismissed as far-fetched, eventually proved right.

The timing matters because AI is no longer theoretical. It's doing real work, at scale, right now. Being a software developer at the start of 2025 and at the start of 2026 are, in Altman's words, "very different things."

Next-gen models: career-defining discoveries

When Allen pushed Altman to describe what the next generation of models would actually be capable of, Altman was careful not to overclaim, but clear about the trajectory. Today's models, like GPT 5.4, are already helping scientists make small breakthroughs. With the next class of models, Altman expects scientists to start saying "this helped me make the most important discovery of my decade, or maybe my career." Not a Nobel Prize, but a significant, career-defining result.

On the productivity side, coders currently report being two or three times more productive. The next threshold is different in kind: one person doing the work of a whole software team. "Me and X hundred GPUs (graphics processing units), we can do the work of a whole software team," Altman said. That's a qualitative shift, not just a speed boost.

Cyber and bio: the threats that can't wait

This is where Altman's tone turned serious. OpenAI tracks threats in two main areas: cybersecurity and biology.

On cyber, Altman was direct: a world-shaking cyber attack is "totally possible" this year. Avoiding it won't come from making one AI model safer in isolation. It requires a resilience-first approach, with cybersecurity companies, governments, and platforms all using AI to rapidly secure their systems. OpenAI runs what it calls a "trusted access program," giving defenders early access to advanced models for exactly this purpose.

On biology, the threat is further out but moving fast. Advanced AI models will soon be capable of helping with complex biological research at a high level. That's mostly a good thing. Altman expects "a bunch of diseases to get cured." But the same capability could be misused. The risk of terrorist groups using open-source AI models to design novel pathogens is, in Altman's words, "no longer a theoretical thing." This is why OpenAI's blueprint pushes for societal resilience: company-level safety measures won't be enough.

Government can't lead this. But it has to be a partner.

Allen asked for the case against nationalizing OpenAI. Altman's answer was candid: in a different era, AGI (Artificial General Intelligence) development would have been a government project, like Apollo, the Manhattan Project, or the Interstate Highway System. He called that a "sad thing," then acknowledged that the current structure of government makes it unlikely to succeed.

The alternative isn't independence, it's deep partnership. Altman was clear that the relationship between AI companies and government can't stay at arm's length. "There's going to have to be very, very deep partnership here." He pointed to energy infrastructure, biosecurity, and cybersecurity as the areas where action is most urgent and most politically achievable.

What surprised him: even the most ideologically committed free-market Republicans are privately acknowledging that if AI develops the way they expect, the traditional balance between labor and capital will break. One senior Republican figure told him he hates giving airtime to ideas like these, but admits that capitalism has always depended on some balance between labor and capital, and that balance may need to shift. That kind of private acknowledgment, Altman noted, signals society is more ready for this conversation than the public debate suggests.

Intelligence as a utility

To close, Allen asked about AI pricing. Altman's vision is that intelligence becomes something like electricity. You'll have a personal AI assistant running in the cloud, doing work in the background, plugging into different services and devices. You'll pay a monthly bill based on how much you use it. The price per unit of intelligence will keep falling, unless supply can't keep up with demand, which is why Altman keeps pushing for more energy and infrastructure investment.

One caveat: frontier models won't get cheaper. People always want the biggest, smartest model, and the price for that will stay premium. The commodity is yesterday's intelligence. The cutting edge will always cost more.

Glossary

Superintelligence: AI that surpasses human-level ability across all intellectual tasks. The step beyond AGI.
AGI (Artificial General Intelligence): AI that can perform any intellectual task a human can. Altman says we're close enough that the precise definition of whether we've already reached it matters a great deal.
Frontier models: The most advanced, capable AI models available at any given time.
Trusted access program: OpenAI's program giving cybersecurity defenders early access to advanced models so they can secure systems before attackers do.
Societal resilience: Altman's framing for the idea that AI safety can't be handled by AI companies alone. Governments, infrastructure owners, and healthcare systems all need to prepare.
