
Tax the Robots? Altman's AI New Deal Meets Its Critics

April 19, 2026 · 7 min read · 1,432 words
OpenAI · Sam Altman · AI Regulation · AI and Employment · Robotics · AI in Healthcare
Image: BBC AI Decoded panel discussing Sam Altman's AI blueprint, robot taxes and autonomous machines (screenshot from YouTube).
Source: BBC News
Published April 17, 2026
Host: Christian Fraser
Guest: Gary Marcus, cognitive scientist and AI author

This is an AI-generated summary. The source video may include demos, visuals and additional context.

Watch the video

In Brief

Sam Altman just published a policy blueprint arguing that capitalism as we know it will not survive what his company is building. On this week's BBC AI Decoded, presenter Christian Fraser puts that claim to cognitive scientist Gary Marcus, Oxford robotics professor Sara Bernardini, and BBC tech reporter Lara Lewington. The hour ranges from robot taxes to autonomous drones in mines to a smart ring that spotted sepsis before three doctors did.

Altman's blueprint on one page

The document in question is "Industrial Policy for the Intelligence Age", a 13-page paper OpenAI published in early April. Axios called it a "New Deal for superintelligence". Altman's pitch, in Fraser's summary: "capitalism as we know it won't be enough to handle what's coming."

The blueprint floats four proposals that the panel works through:

Robot tax: If a company replaces workers with automated systems, the state taxes that automation so it can still fund schools, hospitals and pensions.
Four-day workweek: Companies get incentives to run a shorter week. People earn the same, so income-tax revenue keeps flowing even as AI takes over the grind.
Public wealth fund: Every citizen owns a piece of the AI boom. OpenAI suggests shifting some of the tax base from payroll to capital gains.
Containment: Preset tripwires for AI systems. If they cause enough damage, they get isolated or shut down, including models that in Altman's words "cannot be switched off."

The proposals themselves are not new. The signal is that the CEO of the most aggressive AI company on Earth is now saying the free market cannot absorb his own product. That is why the BBC built an hour around the document.

The credibility problem

Marcus does not buy the premise. He sat next to Altman at the 2023 US Senate hearing on AI and came away convinced that Altman "will tell anybody exactly what they want to hear, but he won't stick to it." In Marcus's telling, Altman praised his regulatory ideas in the hearing room while OpenAI lobbied against them in private. A recent New Yorker profile by Ronan Farrow reaches a similar verdict about the gap between what Altman says and what his company does.

That history shapes how Marcus reads the blueprint. Some of it, he concedes, would be genuinely good for society. A real tax on big AI companies would help. But he expects the proposals to be "whittled down" inside Washington, with the industry-friendly parts (government support for AI infrastructure) surviving and the redistributive parts (taxing the companies more) dying on the vine.

On the wider question of whether anyone is actually ready for what is coming, Marcus is blunt: "Nobody is prepared." Regulation has gone backwards since 2023. Even the EU AI Act, he argues, is getting quietly undermined. Lewington offers a milder reading: for people in Silicon Valley who spend every week drowning in maximalist rhetoric about the age of abundance, the Altman document at least sounds like it has been written by a human being. That is not the same as saying it will happen.

Bernardini sides with the economic case for some kind of robot tax. As productivity rises, income shifts from labour to capital. If a handful of companies own the systems doing the work, someone has to rebalance the ledger, or the welfare state runs out of money to fund itself.

What is "a robot" anyway?

The second half of the hour moves from the spreadsheet to the workshop. Bernardini designs autonomous systems at Oxford, and her work makes one thing clear: "the robot" in the debate is not a humanoid waiter. It is a fleet of specialised machines operating where humans should not go.

Her projects, including the autonomous drones she sends into underground mines, share a pattern: the machines do not improvise. They depend on heavy upfront planning. Bernardini is careful to push back on the "AI suddenly makes this possible" framing: robotics has been using AI in different forms for decades, going back to NASA's Remote Agent controlling the Deep Space 1 probe in 1999, millions of miles from Earth. The real progress lately is in perception, the ability of machines to see and understand their surroundings, not in the planning brain.

World models and the missing common sense

A viewer, Linda in Switzerland, asks about world models. Marcus treats it as the most important question of the hour.

A world model is an explicit internal picture of reality. It encodes facts like "there is a table in the room", "the people around it are intelligent agents", "if one of them looks bored I should shorten my answer." Roboticists have worked with world models for decades. The modern large language models behind ChatGPT and Claude do not. They approximate the world with statistics across vast piles of text, without representing any of the underlying objects or physics.

That gap, Marcus argues, is why chatbots hallucinate. When a model invents a fake court case or garbles a simple fact, it is often something it could have looked up in Wikipedia if it had any model of what is true. It is pattern-matching words, not checking reality. Even a pocket calculator, he notes in a nice aside, has a richer world model than a chatbot: it knows how multiplication actually behaves.

The field's direction of travel, then, is obvious. Combine the neural networks that are good at language with the explicit world models that are good at cause and effect. Bernardini clarifies the hard part: in classical robotics, an engineer wrote the world model by hand. The goal now is for the robot to build its own, by itself, in environments nobody has mapped for it.
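To make the panel's distinction concrete, here is a minimal sketch in Python. The class and method names are invented for illustration, and no real robotics or LLM system is this simple: the point is only that an explicit world model stores facts a system can query and update, rather than approximating them statistically from text.

```python
class WorldModel:
    """Toy explicit world model: a queryable store of facts about objects."""

    def __init__(self):
        self.facts = {}  # e.g. {"table": {"in_room": True}}

    def observe(self, obj, prop, value):
        # Perception writes what the sensors report into the model.
        self.facts.setdefault(obj, {})[prop] = value

    def query(self, obj, prop):
        # Planning reads the model instead of guessing from word patterns.
        return self.facts.get(obj, {}).get(prop)


wm = WorldModel()
wm.observe("table", "in_room", True)
wm.observe("listener", "bored", True)

# A planner consults the model before acting: if the listener looks bored,
# shorten the answer. A statistics-only language model has no such store
# to check, which is the gap Marcus links to hallucination.
answer_style = "short" if wm.query("listener", "bored") else "full"
print(answer_style)  # -> short
```

The hard problem Bernardini names sits exactly in `observe`: classically an engineer filled the fact store by hand, whereas the goal now is for the robot to populate it on its own in unmapped environments.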

One smart ring, one sepsis case

The final segment is Lewington's reporting on Maeve O'Neil, a 19-year-old student in the United States. She had what three different doctors kept telling her was COVID, then tonsillitis, then nothing serious. Her Oura smart ring disagreed. Her resting heart rate would not come down. Her respiratory rate spiked the moment she tried to sleep. Her body temperature was several degrees above her personal baseline.

She went to the emergency room on her own judgement, backed by the ring's data. The diagnosis: her body was going septic. The ring did not diagnose anything. It did something more modest and more useful: it told her what her normal looked like, so she could see when her normal stopped.

Lewington is careful to keep the framing grounded. Activity trackers are not medical devices. They can miss things and they can cry wolf. But a growing number of people are spotting serious conditions (atrial fibrillation, cancer, autoimmune flares) because a device noticed a pattern break before they did. The AI here is not the ring. It is the software that learns your baseline over weeks of data and flags when the pattern changes.
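The core idea Lewington describes, software that learns a personal baseline and flags pattern breaks, can be sketched in a few lines. This is a hypothetical illustration with made-up numbers and thresholds, not Oura's actual algorithm:

```python
from statistics import mean, stdev


def personal_baseline(history):
    """Baseline = mean and spread of this one wearer's own past readings."""
    return mean(history), stdev(history)


def flags_anomaly(history, reading, z_threshold=3.0):
    """Flag a reading that sits far outside the wearer's normal range."""
    mu, sigma = personal_baseline(history)
    return abs(reading - mu) > z_threshold * sigma


# Weeks of nightly resting heart rates define "normal" for this wearer.
resting_hr = [58, 60, 59, 61, 57, 60, 59, 58, 61, 60]

print(flags_anomaly(resting_hr, 60))  # a typical night -> False
print(flags_anomaly(resting_hr, 85))  # sustained elevation -> True
```

Note that the same reading of 85 bpm might be unremarkable for a different wearer with a higher baseline, which is why the device's value lies in knowing the individual's normal, not a population average.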

Bernardini then offers the counterweight the segment needs. Several of her friends, she says, started tracking their sleep so obsessively that they stopped sleeping well. Health anxiety has its own cost.

What the episode actually leaves us with

The blueprint, the drones and the ring look unrelated, but they share one frame. All three force a question the panel keeps circling: who has to adapt — the machines to us, or us to them?

Altman's document says society has to redesign itself around systems he has built. Marcus's answer is that the record makes that offer hard to trust. Bernardini's robots need humans to pre-specify their world. The smart ring flips the direction entirely: the machine adapts to the individual wearer.

Taken together, AI Decoded's panel does not settle any of it. What they do is make the political document, the underground drone and the dorm-room ring sit next to each other on one table, so you can see the shape of the trade we are being asked to accept.

Glossary

Robot tax: A tax on companies that replace human workers with automation, so the state can still fund welfare and public services.
Public wealth fund: A fund that gives every citizen a stake in the returns of the AI economy, paid for by taxing the companies building it.
Risk-aware autonomy: A robot continuously reasoning about dangers and uncertainty while it acts, instead of following a fixed plan.
World model: An explicit internal representation of objects, people and how they behave, used by a system to reason about reality.
Hallucination (LLM): When a chatbot confidently states something false because it is matching patterns in text rather than checking a model of the world.
AI New Deal: Axios's framing of Altman's blueprint, echoing Roosevelt's 1930s social contract; not a formal policy name.
