
Four Investors on Why AI Just Changed Everything

March 3, 2026 · 11 min read · 2,275 words
AI · Anthropic · OpenAI · Enterprise AI · Job Displacement · Opinion
Panel discussion on AI disruption featuring Peter Diamandis, Salim Ismail, Dave Blundin, and Alexander Wissner-Gross
Image: Screenshot from YouTube.

Key insights

  • Anthropic is reportedly growing revenue at 10x per year compared to OpenAI's 3.4x, with a crossover point expected mid-2026
  • 88 nations signed the New Delhi Declaration, the first global AI agreement including the US, China, and Russia
  • Andrew Yang predicts 20-50% of 70 million US white-collar workers could face displacement within 1-2 years
Source: YouTube
Published March 2, 2026
Peter H. Diamandis
Hosts: Peter H. Diamandis, Salim Ismail, Dave Blundin, Dr. Alexander Wissner-Gross

This article is a summary of Anthropic vs. The Pentagon, Claude Outpaces ChatGPT, and Consulting Gets Replaced | #234. Watch the video →



In Brief

Sixty-six million years ago, an asteroid wiped out the dinosaurs because they could not adapt fast enough. In episode 234 of their weekly panel, Peter Diamandis, Salim Ismail, Dave Blundin, and Alexander Wissner-Gross argue that AI is this generation's asteroid. They cover India's massive AI summit, Anthropic's explosive revenue growth, the arrival of recursive self-improvement in coding agents, a looming collapse in the consulting industry, and Andrew Yang's prediction that tens of millions of white-collar workers face displacement. The throughline is the same: institutions that cannot evolve will go extinct.

  • 10x/year: Anthropic's reported annual revenue growth
  • 88 nations: signed the New Delhi Declaration
  • 20-50%: white-collar workers Yang says face displacement

India positions itself as the AI neutral zone

The panel opens with the India AI Impact Summit, where India positioned itself as a neutral ground between the US and China in the global AI race. $250 billion in combined AI investment was announced (5:19), with Indian conglomerates Reliance and Adani committing $210 billion together (5:25). Google announced a $15 billion infrastructure investment (5:31), and Microsoft committed as part of a broader $50 billion package (5:34).

The bigger story, according to Ismail, is the shift in tone. AI leaders like Dario Amodei, Sam Altman, and Demis Hassabis used to attend summits looking for money. Now, fully funded, they are positioning themselves as stewards of a civilizational shift (4:00). Hassabis described AI as having "10 times the impact of the industrial revolution at 10 times the speed" (14:20).

Perhaps the most significant outcome was the New Delhi Declaration, signed by 88 nations including the US, China, and Russia (5:44). The agreement focuses on three pillars: democratic diffusion of AI tools so developing countries are not locked out, transparency from frontier AI companies, and measuring AI by health and welfare outcomes rather than corporate profits (5:58).

Wissner-Gross raises a geopolitical wrinkle: the declaration emphasizes open-weight models (models where the trained parameters are publicly available), yet the world's leading open-weight AI models all come from China (11:43). He frames this as a digital version of China's Belt and Road initiative. Meanwhile, Blundin points out that India's working-age population is roughly eight to nine times larger than that of the US, making it a talent and user base that every AI company wants to capture (6:33). India is already the second-largest ChatGPT user base after the United States (19:16).


Anthropic vs the Pentagon: a brief update

The Anthropic-Pentagon conflict has been covered extensively here (see our articles on the initial standoff, Dario Amodei's response, and the federal use ban).

In brief: The panel adds a few new dimensions worth noting.

Wissner-Gross frames this as "a very western problem to have" (26:17). In China, civilian-government fusion means the question of whether AI companies can refuse military requests simply never gets asked. He also reveals that Anthropic's models were the only frontier models cleared to operate on SIPRNet (the US military's secret-level classified network) (26:32), making the standoff all the more consequential.

Blundin draws a parallel to Elon Musk and Starlink during the Russia-Ukraine conflict, where a single CEO's decision could affect the outcome of a war (27:56). AI companies, the panel agrees, have become moral actors in geopolitics whether they wanted to or not.


The revenue crossover: Anthropic outpaces OpenAI

The panel shifts to financial data, and the numbers are striking. Anthropic is reportedly growing revenue at 10x per year, compared to OpenAI's 3.4x (35:45). At those rates, the revenue crossover point is expected around mid-2026 (35:56).

Blundin extrapolates the growth curves: if Anthropic's pace holds, it could reach $1 trillion in annual revenue by roughly 2029 (37:42), with an implied valuation of $20-30 trillion. Diamandis goes further, predicting "$100 trillion companies in the next five-year period" (38:05). Both Anthropic and OpenAI are expected to go public within six months (38:11).
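The figures above are straightforward compound-growth arithmetic, and it is worth seeing how fast the numbers run away. The sketch below checks them, using hypothetical 2026 baseline revenues (the episode cites only the growth multiples, 10x and 3.4x, not current revenue), so the absolute dollar figures are placeholders for illustration.

```python
import math

def revenue_after(start, annual_multiple, years):
    """Compound a starting annual revenue by a fixed yearly multiple."""
    return start * annual_multiple ** years

# Hypothetical 2026 baselines in $B, chosen only to illustrate the math;
# the article does not state either company's current revenue.
anthropic_2026 = 10.0
openai_2026 = 20.0

# At 10x/year, a $10B business passes $1T ($1000B) in just two years,
# which is how an extrapolation lands in the late-2020s.
print(revenue_after(anthropic_2026, 10, 2))  # 1000.0

# Crossover point: solve anthropic * 10**t = openai * 3.4**t for t.
# Even with OpenAI starting at twice the revenue, the faster multiple
# closes the gap in well under a year.
t = math.log(openai_2026 / anthropic_2026) / math.log(10 / 3.4)
print(round(t, 2))  # ≈ 0.64 years
```

The takeaway is that crossover timing is far more sensitive to the growth multiples than to the starting gap, which is why the panel treats the 10x-vs-3.4x comparison as the headline number.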

Wissner-Gross offers the most useful framing. He argues the real story is not chatbots versus agents, but consumer versus enterprise (38:50). OpenAI historically pursued the consumer subscription market. Anthropic, partly out of compute scarcity, focused on code generation and enterprise use cases. It turns out enterprises have far deeper pockets. As Wissner-Gross puts it: why do you sell AI to enterprises? Because that is where the money is (39:17).

The evidence backs him up. OpenAI reportedly tried to push reasoning capabilities to hundreds of millions of consumer users through the ChatGPT 5 launch, but consumers preferred quick, conversational responses (40:25). Enterprises, by contrast, consume as many reasoning tokens as you can feed them. This is why OpenAI is now pivoting hard to its Codex product to compete directly with Claude Code (39:35).

  • Annual revenue growth: Anthropic 10x vs OpenAI 3.4x
  • Primary market: Anthropic enterprise / code generation; OpenAI consumer / subscriptions
  • Revenue crossover: expected mid-2026, in Anthropic's favor
  • Strategic pivot: OpenAI pivoting to Codex / enterprise

The recursive self-improvement era

OpenAI's Codex lead predicted that the current state of coding agents will be "remembered as being so primitive it'll be funny" in just 10 weeks (40:49). The panel takes this seriously.

Wissner-Gross declares that we have entered the "recursive self-improvement era" (41:31). He lays out three distinct phases of AI progress. The pre-training era, which produced major improvements roughly once per year through training on internet data. The reasoning era, which used synthetic data and distillation techniques to achieve quarterly breakthroughs. And now, the recursive self-improvement era, where models literally emit the weights for their own successor models (41:49). This, he claims, is producing capability jumps on a weekly or even daily basis.

Blundin adds context from a visit to OpenAI, where researcher Noam Brown reportedly said "Q1 2026 will be the quarter of scaffolding" (43:09). Scaffolding refers to the external tools and code that help AI agents perform multi-step tasks. The transition happening now is from scaffolding to pure reasoning: prompt an AI to build an entire reporting system, and it works continuously for days to deliver a result (43:28).


Consulting's reckoning: from scared to restructured

Accenture CEO Julie Sweet announced that employee promotions would be linked to AI tool usage (59:28). Diamandis, who has spoken at multiple consulting firm events, says the leadership teams are "scared shitless" (59:43).

Ismail offers a counterpoint that is more nuanced than it first appears. The consulting firms' clients are so far behind that they need more help, not less (1:00:20). He has told leadership at KPMG, EY, Deloitte, and Accenture the same thing: "We need to rebuild every institution and rearchitect every institution by which we run the world. That is the biggest advisory opportunity in the history of mankind" (1:00:50).

The audit side, however, faces a different reality. When one of the Big Four firms visited Link Ventures, the tech team estimated an 80% reduction in audit headcount (1:01:13). Ismail explains why: financial systems combining AI and blockchain will be self-auditing in real time (1:01:37), eliminating the need for periodic human-speed reviews. What audit firms really sell is trust, and the question is how they package that trust in a world where the data verifies itself.

Wissner-Gross raises a Goodhart's Law warning about Sweet's approach. Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Measuring how much employees use AI tools is not the same as measuring the quality of their output (1:03:56). Still, the panel broadly agrees this is the right move for now: at the rate AI is improving, any organization that does not force adoption risks falling behind irreversibly.


Andrew Yang's warning: 14 to 35 million jobs at risk

Former presidential candidate Andrew Yang predicts that 20 to 50% of the 70 million US white-collar workers could be displaced within one to two years (1:42:25). Diamandis calls the approaching backlash a "pandemic of fear" (1:42:37).

The panel's response is more nuanced than simple agreement or dismissal. Ismail pushes back, arguing that history shows new opportunities emerge alongside displacement, pointing to how ATMs did not kill banking jobs but reshaped them (1:42:56). Blundin, however, agrees with the numbers: "These numbers are right," he says, noting that new opportunities "are laggy" (1:43:31). The gap between job loss and job creation, he argues, will cause "massive social unrest" that is "imminent... certainly before the next presidential election" (1:43:42).

Ismail adds the structural problem: government policy is "absolutely not set up" for what is coming (1:43:55). Governments are not ready, not willing, and not able to handle the transition, as Blundin put it earlier in the episode. The conversation about Universal Basic Income (UBI) or Elon Musk's proposed Universal High Income (UHI) remains theoretical while the displacement, according to the panel, is already happening on the ground.


How to interpret these claims

The panel presents sweeping claims with conviction, but several factors warrant careful consideration before accepting them at face value.

Conflict of interest is baked in

All four panelists are technology investors and entrepreneurs who benefit directly from AI enthusiasm. Diamandis runs Abundance360, a high-ticket conference ($25-50K per seat) built on techno-optimism. Blundin runs Link Ventures, which invests in AI companies. Wissner-Gross has founded AI companies including one focused on "physical super intelligence." Their framing of AI as an unstoppable force with trillion-dollar upside aligns neatly with their financial incentives. This does not mean their analysis is wrong, but listeners should weigh claims like "$100 trillion companies" accordingly.

Extrapolation is not prediction

Blundin's projection of Anthropic reaching $1 trillion in revenue by 2029 is based on extrapolating a 10x annual growth rate. Sustained exponential growth of this kind has no historical precedent in revenue terms. Amazon, the company Blundin himself cites as a comparison, took decades to approach $600 billion. Growth rates almost always compress as companies scale. The 10x figure captures a moment in time; it is not a reliable forecast.

The job displacement timeline is unusually aggressive

Yang's prediction of 20-50% displacement within one to two years is far more aggressive than most labor economists project. Even the panelists are split: Ismail pushes back with historical parallels, Blundin agrees with the scale but warns that replacement jobs arrive with a lag, and Wissner-Gross suggests job displacement might not even be in the top five concerns a decade from now. The range itself (20-50%) is so wide it reveals high uncertainty rather than precision.

What strong evidence would look like

For the revenue claims: quarterly earnings reports from Anthropic and OpenAI after they go public, showing whether enterprise AI revenue sustains its growth trajectory. For the job displacement claims: Bureau of Labor Statistics data over the next 12-18 months showing actual white-collar employment changes, not projections. For the consulting disruption claims: headcount reports from the Big Four firms showing whether audit staffing actually declines at the predicted rates.


Practical implications

For business leaders

The consumer-vs-enterprise distinction matters. Companies evaluating AI tools should focus less on which chatbot feels friendliest and more on which platform integrates into enterprise workflows. The revenue data suggests the market is voting with its wallets, and enterprise AI is winning.

For white-collar workers

Whether or not Yang's timeline is exactly right, the direction is consistent across every panelist. Workers in audit, consulting, and routine knowledge work face the highest near-term risk. The panel's implicit advice: learn to use AI tools now, not because a mandate says so, but because the gap between AI-augmented productivity and traditional productivity is already too large to ignore.

For policymakers

The panel's most consistent criticism is aimed at governments. Every panelist agrees that policy infrastructure is unprepared. The time between job displacement and new job creation is, as Blundin says, "laggy." Whether the policy response is UBI, retraining programs, or something else entirely, the panel agrees it needs to start now.


Glossary

Defense Production Act (DPA): A US law that allows the government to compel companies to produce goods or services for national defense. The Pentagon reportedly threatened to invoke this against Anthropic.

SIPRNet: The Secret Internet Protocol Router Network, a classified US military network used for sensitive but not top-secret information.

Open-weight models: AI models where the trained parameters (weights) are publicly available. Often confused with open source, but the training data and code may remain proprietary.

Recursive self-improvement: A process where AI models help create or improve their own successor models without human involvement, accelerating the pace of capability gains.

Scaffolding (AI context): External tools, code, and infrastructure that help AI agents perform multi-step tasks. The panel describes a shift from scaffolding-dependent agents to pure reasoning-based ones.

Goodhart's Law: The principle that when a measure becomes a target, it ceases to be a good measure. Applied here to Accenture measuring AI usage rather than output quality.

Capex: Capital expenditure. Large upfront investments in infrastructure such as data centers. The AI industry is currently spending an estimated $2 billion per day on compute infrastructure.

Inference: When an AI model generates a response or prediction from input data. Distinct from training, which is the process of building the model in the first place.

UHI (Universal High Income): A concept proposed by Elon Musk where everyone receives enough income for a high standard of living, as an alternative to Universal Basic Income (UBI).

Enterprise vs consumer AI: Two distinct AI market segments. Enterprise AI sells to businesses for internal workflows, while consumer AI sells subscriptions to individual users. The panel argues enterprise generates far more revenue.

Sources and resources