Altman: Society Hasn't Digested What's Coming

Key insights
- OpenAI is framing superintelligence (AI that surpasses humans across all domains) as a policy problem, not just a technical one. The debate shifts from "will it work?" to "who benefits, and how do we prepare?"
- Instead of giving people money when AI takes their jobs, OpenAI wants to give people access to AI itself. It is about agency, not dependency.
- The COVID analogy is a deliberate persuasion strategy. Altman is saying we saw this before, acted too late, and should not repeat the mistake.
- March 2028 is the official target for an AI that can do AI research itself. If that works, progress compounds, leaving roughly two years for society to prepare.
This is an AI-generated summary. The source video may include demos, visuals and additional context.
In Brief
On the same morning OpenAI published its "Industrial Policy for the Intelligence Age" blueprint, CEO Sam Altman sat down with Josh Achiam, Chief Futurist at OpenAI, and research scientist Adrien Ecoffet for a public forum conversation moderated by Chris Nicholson from OpenAI's policy and public affairs team. The 46-minute discussion covered the pace of AI progress, who gets access, how to cushion economic disruption, and why society needs to start debating these questions before the decisions are forced.
"The change has already happened"
Altman opened with a story. In late January 2020, before most of the world had heard of COVID-19, OpenAI researchers had already concluded it was coming. They made work-from-home plans, put copper on door handles, and got mocked by journalists for it. One night, wearing a mask, Altman walked alone through San Francisco, watching through the windows as people in restaurants and bars breathed in each other's faces. "I have not felt that so acutely as I do again in this moment," he told the forum. "There is this crazy change. The change has already happened — the models have already hit some level. Society has not digested them yet."
The analogy is deliberate. Altman is not predicting doom. He is saying that the people closest to the data can see what is coming before it becomes obvious to everyone else, and that by the time it is obvious, it is too late to prepare. The blueprint, and the forum, are OpenAI's attempt to start the debate early.
Ecoffet made the urgency personal. During the months the blueprint was being written, many OpenAI researchers went through a personal transition: from writing most of their own code to having AI write most of it. Watching that shift happen inside the organization sharpened the sense that what is coming is real and fast, in ways not yet visible to most people outside.
Resilience is not a single company's job
One of the sharpest arguments in the discussion was about what "safety" actually means at scale. Ecoffet drew a distinction between classical AI safety (aligning the model, running safety evaluations, implementing mitigations) and societal resilience: what happens when some actors do less safety testing, or when incidents occur anyway?
The blueprint proposes aviation-style incident reporting: every near-miss or unexpected event gets logged in a shared database, so the whole industry can learn from it. But Altman pushed the point further. Capable AI models will be able to find vulnerabilities in software better than humans ever could. Even if every major AI company agreed to prevent their models from being used for cyberattacks, open-source models (freely available for anyone to download and modify) with the same capabilities are coming, and no policy can stop that. The only viable response is to use AI to defend software systems at the same speed attackers are using it to probe them.
The same logic applies to biological risk. Someone, at some point, will use a model to help develop a pathogen. Restricting the most advanced models from providing that uplift buys time, Altman said, but the world needs actual defense shields: detection systems, rapid-response treatments, and, as Josh Achiam stressed, better protection for the food supply chain, an underappreciated vulnerability that AI could help close.
The framing matters. OpenAI is not saying "trust us, we'll handle it." It is saying that resilience is emergent, that it requires society-wide action, and that OpenAI can help build some of the defensive tools but cannot build all of them.
Who owns the compute?
The second major theme was access. Altman was direct about what he sees as the only real way to make AI accessible to everyone: build so many data centers that the price of AI keeps falling. "The demand is uncapped," he said. If supply stays limited, the richest people and companies will simply bid the price up until compute (the processing power needed to run AI) becomes another form of concentrated power. Building more data centers is, counterintuitively, an egalitarian project.
The historical analogy he reached for was electricity. Driving down the price of energy over the past 200 years was one of the best things humanity ever did for quality of life globally. AI access, Altman argued, needs the same treatment: not a scarce resource rationed by who can afford it, but a commodity available to anyone with an idea.
This framing connects to what the blueprint calls universal basic compute: rather than giving people a monthly stipend when AI displaces their jobs, give people access to AI itself. Altman's argument is that people are generally good at figuring out what they need. The problem is being boxed out of the resource in the first place.
Achiam added a note of caution: until compute is genuinely abundant, who gets to use it for what will be one of the most important societal questions to navigate. Painful trade-offs are coming unless supply scales fast.
Restructuring the economy
The discussion was unusually specific about economic policy. Altman said the current tax system, built around taxing human labor, will need to evolve in a world where AI is doing most of the intellectual work. Taxing AI labor instead of human income is one possibility he floated. He also raised new forms of transition assistance; portable benefits, so that healthcare and retirement plans follow the worker instead of being tied to a specific employer (he called the current US structure "really bad"); and the need for people to eventually hold ownership stakes in the AI economy's upside.
Ecoffet raised the idea of trigger-based measures: economic policies that activate only when AI-driven disruption actually arrives, rather than being baked into the system unconditionally. A 32-hour work week, for instance, might make sense in a world of AI-driven labor displacement, but implementing it prematurely could cause its own damage. The goal is to have the measures ready and calibrated to trigger when needed.
Achiam sketched a different kind of institutional gap: something between a corporation and a government. Private companies have minimal governance accountability; governments move slowly. New organizations occupying that middle ground might be able to test social safety-net services and democratic solutions faster than either.
The belief gap
The final part of the conversation turned to a phenomenon that anyone watching AI adoption has noticed: most people massively underestimate what current models can already do. Achiam described the time-scale mismatch precisely. AI progress happens in weeks and months. Most people who are not close to the field check in roughly every six months. They still picture the default chat model, not the reasoning models (AI systems that think step-by-step before answering) that have arrived since. They hear about hallucinations and conclude the technology is not ready. The belief gap will close, he argued, when people see other people and institutions visibly succeeding with AI at full capability.
Altman's favorite example was parents, software engineers by training, watching their kids use OpenAI Codex for the first time. A child with no idea of what should be hard just starts describing a video game, building it by voice. The parent thinks, "That's not going to work." And then it works. The parent would not have even tried, because their training told them it was impossible.
That gap between what children assume is possible and what trained adults assume is impossible is, in miniature, the gap between where AI is today and where society's mental model of AI is.
The 2028 window
The most concrete number in the discussion came from Ecoffet: March 2028 is the official goal for an automated AI researcher, an AI capable of doing the work of an AI research scientist. This is not just about AI taking over ordinary tasks. An AI that can do AI research could accelerate the development of AI itself, compounding progress in ways that are hard to forecast. Ecoffet was careful to note the uncertainty, but the implication was clear: the window for institutions to adapt is roughly two years, not decades.
The forum is framed as the beginning of a conversation, not a final policy document. OpenAI is inviting feedback via a new email address and launching a pilot program of research fellowships for independent researchers, offering up to $100,000 in funding and up to $1 million in API credits (prepaid access to OpenAI's AI services), to encourage external thinking on the questions raised. Further discussions are being convened at a new OpenAI workshop opening in Washington, DC in May.
The message underneath all of it is the COVID walk. The researchers who understood exponential growth saw it coming before most people did. They were mocked for it. They prepared anyway. This time, OpenAI is trying to bring everyone on the walk before the lockdowns begin.
Glossary
| Term | Definition |
|---|---|
| Superintelligence | AI that surpasses human cognitive ability across all domains, not just narrow tasks |
| Universal basic compute | Giving people access to AI computing power as a basic resource, instead of a cash stipend, so they can build things rather than just consume |
| Capability overhang | The gap between what AI can do today and what most people realize it can do, caused by infrequent check-ins and outdated mental models |
| Compute | The processing power needed to train and run AI models. Measured in processor capacity and access to powerful data centers |
| Resilience (in AI context) | Society's ability to withstand and recover from AI-related disruptions, through layered defenses rather than relying on any single safeguard |
Sources and resources
- Sam Altman on Building the Future of AI (YouTube) — 46-minute OpenAI Forum panel discussion with Sam Altman, Josh Achiam, and Adrien Ecoffet
- OpenAI Codex — The coding AI referenced in the kids-building-video-games anecdote
- OpenAI — Publisher of the "Industrial Policy for the Intelligence Age" blueprint discussed in the forum
Want to go deeper? Watch the full video on YouTube →