
OpenAI's Blueprint for the Superintelligence Era

April 7, 2026 · 6 min read · 1,277 words
OpenAI · Sam Altman · AI Regulation · AI and Employment
Cover image from OpenAI's Industrial Policy for the Intelligence Age document, showing the title over a pastel-colored abstract digital cityscape

In Brief

OpenAI, led by CEO Sam Altman, published a 13-page policy document in April 2026 titled "Industrial Policy for the Intelligence Age." It lays out 20 concrete proposals for how governments and companies should navigate the transition to superintelligence, a stage where AI systems can outperform the smartest humans even when those humans are assisted by AI. This is not a product announcement. It is a political manifesto from an AI company, one that explicitly acknowledges the very technology it is building could concentrate economic power in the hands of a few firms, including OpenAI itself.


Three pillars

The entire document is built on three stated goals:

  1. Share prosperity broadly: make sure AI's economic gains reach everyone, not just those at the top
  2. Mitigate risks: build the safety systems, governance structures, and oversight regimes that can keep pace with AI capabilities
  3. Democratize access and agency: treat AI as foundational infrastructure, the way we treat electricity or the internet

Everything else in the document flows from these three pillars.


Part 1: Building an open economy

The first half of the document covers eleven proposals focused on economic policy. They sketch a version of the AI transition where workers share in the gains rather than simply absorbing the disruption.

  1. Worker perspectives. Give workers a formal voice in decisions about how AI is deployed at their workplace.
  2. AI-first entrepreneurs. Microgrants, revenue-based financing, and "startup-in-a-box" tools to help domain experts in any field start businesses using AI, not just people with technical backgrounds.
  3. Right to AI. Treat AI access as foundational public infrastructure, like electricity. Expand free and low-cost access so that access to these tools is not determined by income.
  4. Modernize the tax base. As AI automates more labor, the tax base erodes. OpenAI proposes shifting taxation from payroll and income toward capital gains and corporate profits.
  5. Public Wealth Fund. A government fund where every citizen holds a share of AI-driven economic growth. The document describes this as a way to ensure broadly distributed ownership of AI's benefits.
  6. Accelerate grid expansion. Public-private partnerships to build the energy infrastructure needed to power AI data centers.
  7. Efficiency dividends. Pilot programs for a 32-hour, 4-day workweek with no pay loss. The logic: if AI makes workers more productive, some of that gain should translate into more time off rather than just higher output.
  8. Adaptive safety nets. Unemployment benefits and training vouchers that automatically scale up when AI-driven disruption in a region or sector crosses defined thresholds, without waiting for emergency legislation.
  9. Portable benefits. Decouple healthcare, retirement contributions, and training from employment. Benefits follow the individual worker from job to job, not the employer.
  10. Pathways into human-centered work. Expand care, education, and healthcare roles as structured pathways for workers displaced by automation. These are jobs AI cannot easily replicate.
  11. Accelerate scientific discovery. A distributed network of AI-enabled labs across universities, hospitals, and community colleges to democratize access to research tools.

Part 2: Building a resilient society

The second half covers nine proposals focused on safety, governance, and oversight. These are less about economics and more about making sure that as AI systems become more capable, the structures to oversee them keep pace.

  1. Safety systems for emerging risks. Threat modeling, red teaming (deliberately trying to break AI systems to find weaknesses), medical countermeasures, and competitive markets for AI safety tools.
  2. AI trust stack. Technical systems for provenance (tracking the origin of AI-generated content), verifiable signatures, and privacy-preserving audit logs. The goal: traceable and authentic AI-generated content and actions.
  3. Auditing regimes. Strengthen the Center for AI Standards and Innovation (CAISI) and build a competitive market for independent AI auditors and evaluators.
  4. Model-containment playbooks. Pre-agreed, coordinated protocols for responding after a dangerous AI system has already been released into the world.
  5. Mission-aligned corporate governance. Public Benefit Corporations, long-term philanthropic commitments, and explicit protections against regulatory capture, where established companies help write rules that lock out competitors.
  6. Guardrails for government use. High standards for AI used in government decisions, digital records of AI-assisted choices, and updated Freedom of Information Act rules to cover AI-generated government documents.
  7. Mechanisms for public input. Transparent model specifications, democratic alignment processes (ensuring AI systems act according to human values), and representative public participation in decisions about how AI systems behave.
  8. Incident reporting. A near-miss sharing system between AI companies and a designated public authority, similar to how aviation and nuclear industries share safety data.
  9. International information-sharing. A global network of AI institutes with shared evaluation protocols, expanding CAISI's role beyond the United States.

What OpenAI commits to

OpenAI also announces concrete steps it will take:

  • Research fellowships up to $100,000 for policy researchers working on AI-related economic and governance questions
  • Up to $1 million in API (Application Programming Interface) credits for research organizations studying AI's societal impact
  • An OpenAI Workshop opening in Washington, DC in May 2026, as a space for policymakers and researchers to engage with the company directly
  • A dedicated email address for feedback: newindustrialpolicy@openai.com

The uncomfortable paragraph

Buried in the document is a passage worth reading carefully. OpenAI acknowledges that the gains from AI could concentrate in a small number of firms, and it names itself as one of them. The company is essentially saying: the technology we are building might make us very powerful, and that is a problem you should help solve, partly by regulating us.

This is unusual. Most technology companies do not publish detailed estimates of the inequality their products could generate. There are two ways to read it. One is that OpenAI is genuinely concerned and wants governments to act quickly. The other is that a company that helps write the rules of its own industry gets to shape those rules to its advantage. These readings are not mutually exclusive.

The 4-day workweek proposal is a genuinely progressive idea, and the first formal policy recommendation of its kind from a major AI company. It is also a proposal that costs OpenAI nothing directly. The company is asking governments to run these pilots, not committing to one itself.

The Public Wealth Fund proposal is the most ambitious of the twenty. The concept of every citizen owning a stake in AI-driven growth mirrors the logic behind Norway's Government Pension Fund. Norway channeled oil revenue into a fund owned by all citizens rather than letting it concentrate in private hands. OpenAI is proposing something structurally similar for AI. Whether governments have the political will to build such a fund is a separate question.


Why this document exists

OpenAI is not the first AI company to engage with policy, but this document is unusually specific. Twenty distinct proposals, with named institutions, dollar amounts, and opening dates. That level of detail signals intent to participate in the actual legislative process, not just release a white paper and move on.

The timing also matters. The US AI regulatory environment is unsettled. Several states are pursuing their own frameworks. Federal AI legislation is contested. In that environment, a company that publishes a detailed, reasonable-sounding policy agenda early gets to anchor the conversation. That is a strategic advantage, regardless of whether the motives behind the document are entirely altruistic.

For anyone trying to understand where AI policy is heading, this document is worth reading in full. Not because OpenAI's proposals will become law, but because they represent one influential company's vision of what the rules of the superintelligence era should look like.


Glossary

Superintelligence: A stage of AI development where systems can outperform the smartest humans, even when those humans are assisted by AI. OpenAI considers the transition to superintelligence already underway.

Alignment: The challenge of making AI systems act according to human values and intentions. "Democratic alignment" means giving the public a say in what those values should be, rather than leaving it to engineers.

Public Wealth Fund: A government-owned investment fund where every citizen holds a share of the returns. Proposed as a way to distribute AI-driven economic gains broadly, similar to Norway's Government Pension Fund.

Efficiency dividend: When AI makes workers more productive, the productivity gain is converted into a benefit for workers, such as shorter hours, higher retirement contributions, or subsidized childcare, rather than simply higher output.

Trust stack: A technical system for verifying that AI-generated content and actions are authentic and traceable, using provenance markers and audit logs.

Regulatory capture: When regulations are designed in ways that protect established companies from competition, often because those companies helped shape the rules.

CAISI: Center for AI Standards and Innovation, a US body for developing AI safety and evaluation standards.

Portable benefits: Healthcare, retirement, and training benefits that follow the individual worker across jobs, rather than being tied to a single employer.

Public Benefit Corporation: A corporate structure that legally commits a company to serve the public interest, not just shareholder profit.

Adaptive safety nets: Unemployment and training support that automatically scales up when AI-driven disruption exceeds defined thresholds, without requiring emergency legislation.
