
Isaacson: The Altman-Musk Feud Is Personal and Real

April 7, 2026 · 5 min read · 951 words
OpenAI · Sam Altman · Elon Musk · AI Ethics · Anthropic
Walter Isaacson being interviewed on CNBC about the Altman-Musk feud
Image: Screenshot from YouTube.

Key insights

  • The feud is both personal and structural. Dismissing it as ego misses the real policy questions, and dismissing it as policy misses how personal betrayal shapes decisions at the top of AI.
  • AI safety has a prisoner's dilemma built in. Any company that slows down for safety risks losing contracts to one that doesn't. The Anthropic-Pentagon episode is the clearest example yet.
  • Neither self-regulation nor government regulation is adequate, according to Isaacson. The realistic fallback is the legal system, which only kicks in after harm has already occurred.
  • OpenAI's nonprofit origin is now a legal liability. The for-profit conversion is not just a philosophical disagreement. It is the basis of Musk's lawsuit and the core of a major investigative report.
CNBC Television
Host: Becky Quick
Guest: Walter Isaacson, Tulane University

This is an AI-generated summary. The source video may include demos, visuals and additional context.

Watch the video

In Brief

Biographer and CNBC contributor Walter Isaacson appeared on Squawk Box to respond to the major New Yorker profile of Sam Altman, written by Ronan Farrow and Andrew Marantz and published the previous day. The New Yorker is a prominent American magazine known for in-depth investigative reporting. The profile covered Altman's temporary ousting as OpenAI CEO in 2023 and his relationships with current and former colleagues. Isaacson, who wrote the biography of Elon Musk, argued the feud between the two men is both genuinely personal and grounded in real disagreements about AI safety and corporate structure. He also offered a sobering read on why safety is structurally hard to enforce in a competitive industry, and why neither self-regulation nor government oversight is likely to fix it.

Not just ego, not just ideology

When a big public dispute breaks out in tech, the instinct is to reach for the simplest explanation. Either it is a personal grudge between two outsized personalities, or it is a principled disagreement about how technology should be developed and governed. With Altman and Musk, Isaacson says the answer is both, and that matters.

His read of the Farrow piece was that "this is a deeply personal thing, but it also has a deep substance to it" — covering questions of openness, safety guardrails, and who should ultimately control AI. Treating the dispute as a personality clash lets the real policy questions off the hook. But the personal dimension is real too: personal betrayal shapes decision-making at the top of these organizations in ways that ripple through the whole industry.

How it started: an open, nonprofit AI lab

Isaacson described watching Musk and Altman in the early meetings that led to OpenAI's founding. Their shared concern was straightforward: AI development was going to be dominated by one company, most likely Google. Their answer was an organization that would be open-source (sharing its code publicly), nonprofit (not designed to generate profit for owners), and not controlled by any single commercial interest.

Then Altman changed the structure. Watching Musk's reaction, Isaacson said: "Elon Musk, when he gets angry, he can get really angry." He described watching Musk's face change as he talked about feeling betrayed — a shift Isaacson documented in the biography as "demon mode," the point when a grievance stops being a disagreement and becomes something harder to contain. That transformation is now playing out in courtrooms.

The for-profit conversion is not just a philosophical disagreement about what OpenAI should be. It is the formal legal basis of Musk's lawsuit. The original nonprofit structure created obligations, and those obligations did not disappear when the structure changed.

The safety prisoner's dilemma

Isaacson's sharpest point isn't about the two men — it's about the industry.

A prisoner's dilemma is a situation where two parties would both benefit from cooperating, but each has a strong incentive to act in their own interest instead, making both worse off. Isaacson argues AI safety has this structure baked in: any company that really pushes safety may fall behind, and one that doesn't will fill the gap.
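The incentive structure Isaacson describes can be made concrete with a toy payoff matrix. The numbers below are invented purely for illustration, as are the labels "safe" and "fast"; the point is the shape of the game, not the values:

```python
# Illustrative payoff matrix for two AI labs each choosing "safe" or "fast".
# Payoffs are (my payoff, rival's payoff); all numbers are made up.
PAYOFFS = {
    ("safe", "safe"): (3, 3),   # both hold the line: the shared safety floor survives
    ("safe", "fast"): (0, 5),   # the cautious lab loses the contract
    ("fast", "safe"): (5, 0),   # the aggressive lab wins it
    ("fast", "fast"): (1, 1),   # race to the bottom: both worse off than (safe, safe)
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes a lab's own payoff,
    given what the other lab does."""
    return max(["safe", "fast"],
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Whichever move the rival makes, "fast" pays more for you, so it is a
# dominant strategy. Both labs therefore land on (fast, fast), even though
# (safe, safe) would leave each of them better off.
print(best_response("safe"))  # fast
print(best_response("fast"))  # fast
```

This is the structure Isaacson is pointing at: no individual defection is irrational, yet the equilibrium erodes the outcome everyone prefers.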

The clearest example he offered came directly from the Farrow article. Anthropic placed safety guardrails on its AI system before licensing it to the Pentagon. Guardrails are rules and limits built into AI systems to prevent harmful or unintended outputs. The Pentagon then asked for those guardrails to be removed. Anthropic refused.

The Pentagon walked. Immediately, Sam Altman and OpenAI stepped in with their own system, without imposing similar restrictions. Isaacson is not arguing that Altman acted wrongly, or even that Anthropic did. His point is that the market structure nearly guarantees this outcome: if company A declines a contract on safety grounds, company B takes it. The incentive to be company B is very strong.

This is what's sometimes called the tragedy of the commons: when individuals acting in their own interest deplete or degrade something shared. Here the shared resource is an industry-wide floor of safety standards, and competitive pressure systematically erodes it.

Can anyone actually regulate this?

When Becky Quick asked whether the industry could self-regulate or whether government should step in, Isaacson rejected both options.

His answer: "The industry is not going to be able to self-regulate... I don't think [government] works well either." With multiple competing companies racing to deploy, some will cut corners regardless of what the industry agrees to in principle. Congressional or regulatory action moves slowly, tends to be poorly targeted, and gets outpaced by the technology.

The fallback, Isaacson suggested, may simply be the legal system. When something goes badly wrong, lawsuits follow. He pointed to Meta (the company behind Facebook and Instagram) as the comparison: liability and litigation after harm has occurred. It is not a satisfying answer. He didn't present it as one. But it may be the most realistic mechanism available, precisely because it doesn't require companies or governments to agree on anything in advance.

Heading to court, not mediation

Isaacson was direct when Quick asked whether this dispute could be resolved outside the courtroom. It won't be. The grievances are too structural, too entangled with real legal claims, and too personal for the kind of quiet settlement that resolves most business disputes. This one ends with a judge.

What Isaacson left open is what comes after that. Even if Musk prevails in court, it doesn't fix the prisoner's dilemma. The competitive pressure to trade safety for market share doesn't disappear because one lawsuit resolves. It is built into the structure of how these companies compete, and no court ruling changes that.

Glossary

Prisoner's dilemma: A situation where two parties would both benefit from cooperating, but each has an incentive to act selfishly instead, leaving both worse off.

Tragedy of the commons: When individuals acting in their own self-interest deplete or damage a shared resource that benefits everyone.

Guardrails: Rules and limits built into AI systems to prevent harmful or unintended outputs.
