
Oracle Stuck With Last-Gen Chips as OpenAI Moves On

March 9, 2026 · 4 min read · 843 words
AI · Oracle Stargate data center · NVIDIA chip upgrade cycle · AI infrastructure risk · Blackwell vs Vera Rubin GPUs
CNBC segment on OpenAI pivoting away from Oracle's Stargate data center expansion
Image: Screenshot from YouTube.

Key insights

  • OpenAI chose to expand elsewhere because NVIDIA's next-gen Vera Rubin chips will be out before Oracle's Texas site is ready
  • Oracle committed debt and hardware for a customer that now says the chips are outdated, and it lacks the financial cushion to absorb the loss
  • The tension between chip upgrade speed and construction timelines threatens the mega-deals underpinning the AI buildout
Source: YouTube
Published March 9, 2026
CNBC Television
Hosts: Deirdre Bosa, Kelly Evans

This is an AI-generated summary. The source video includes demos, visuals and context not covered here. Watch the video → · How our articles are made →



In Brief

OpenAI is choosing to put additional data center capacity into other locations instead of expanding its Stargate site in Abilene, Texas, according to a CNBC report by Deirdre Bosa. The reason is straightforward: NVIDIA's chip generations advance faster than data centers can be built. By the time Oracle's facility is ready, the graphics processing units (GPUs) inside it will already be outdated. For a deeper look at how the Stargate expansion fell apart and who is stepping in, see Oracle Exits Stargate Expansion, Meta Steps In.

  • -3%: Oracle stock on the day
  • 1.2 GW: Abilene site capacity cap
  • 2 gens: Blackwell → Vera Rubin chip gap

What happened

According to sources cited in the report, the calculus for OpenAI executives was straightforward: the company can pursue larger sites elsewhere and build around NVIDIA's next-generation chips and bigger clusters (0:38). A cluster is a group of interconnected GPUs working together on AI tasks. The Texas site could not accommodate that pivot, so OpenAI chose to go somewhere else.

The problem comes down to timing. Oracle is building data centers equipped with NVIDIA's current Blackwell GPUs, the specialized chips that power AI workloads. By the time Oracle connects power and finishes construction, NVIDIA's next generation, called Vera Rubin, will already be available (1:44). For a company like OpenAI, whose entire business depends on staying at the frontier of AI capability, using last-generation hardware is not an option (1:55).

As Bosa put it: "The chip cycle moves faster than data center concrete dries" (0:27). Oracle's stock fell 3% on the news (0:14).


Why Oracle is uniquely exposed

Oracle committed debt, secured the site, and ordered the hardware (0:51). Now its biggest customer is walking away, saying the chips are already dated. The pitch to find a replacement tenant is, as Bosa described it: "Come lease the chips that our last customer didn't want" (1:11).

What makes Oracle especially vulnerable is its balance sheet. The company does not have the financial cushion that hyperscalers have (1:01). Hyperscalers are the largest cloud providers, companies like Meta, Microsoft, and Google, that operate infrastructure at massive global scale. They can absorb a failed data center bet. Oracle has been funding its AI buildout with heavy borrowing, which leaves far less room for its largest customer to change plans (2:19).

Anchor Kelly Evans asked whether this issue could affect others in the data center business. Bosa's response: "I'm not sure that it won't." The difference is that it will hit Oracle harder because of how much debt it has taken on (2:19). Other hyperscalers like Meta "can absorb some of this" (2:26). For more on how NVIDIA's growing influence shapes these deals, see Nvidia Says $30B OpenAI Stake May Be the Ceiling.


The bigger question

Bosa framed the situation as a warning for the entire AI infrastructure sector. "Oracle may be the canary in the AI infrastructure coal mine," she said (1:29).

The core tension is simple: data centers do not move at chip speed. Construction takes years. Chip generations turn over every 12 to 18 months. Any company that locks in hardware for a facility that takes three years to build is gambling that the chips will still be competitive when the doors open. As NVIDIA's Vera Rubin architecture approaches, that gamble gets harder to win.
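The mismatch can be made concrete with back-of-the-envelope arithmetic. This is an illustrative sketch using only the figures cited above (a roughly three-year build against a 12-to-18-month chip cycle); the function name is ours, not from the report:

```python
# Back-of-the-envelope: how many chip generations are released
# while a data center is still under construction?
# Figures are illustrative, taken from the ranges cited in the article.

def generations_elapsed(build_years: float, cycle_months: float) -> float:
    """Chip generations released during a build of the given length."""
    return build_years * 12 / cycle_months

# A 3-year build against NVIDIA's 12-18 month cycle:
print(generations_elapsed(3, 12))  # fast cycle: 3.0 generations
print(generations_elapsed(3, 18))  # slow cycle: 2.0 generations
```

Even at the slow end of the cycle, a facility designed around today's chips opens two generations behind — consistent with the Blackwell-to-Vera-Rubin gap at the center of this story.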

This raises a question that extends well beyond Oracle: can the mega-deals underpinning the entire AI buildout survive when the technology they depend on moves faster than the buildings that house it? (1:19) For more on how Wall Street is reassessing the AI infrastructure boom, see AI Euphoria Fades as Wall Street Gets Real.


Glossary

GPU (Graphics Processing Unit): A specialized chip originally designed for graphics, now widely used to power AI workloads because it can run many calculations simultaneously.
Blackwell: NVIDIA's current generation of AI chips, used in data centers today. Named after statistician David Blackwell.
Vera Rubin: NVIDIA's next-generation AI chip architecture, expected to succeed Blackwell. Named after astronomer Vera Rubin.
Hyperscaler: The largest cloud providers (Amazon, Microsoft, Google, Meta) that operate infrastructure at massive global scale and can absorb large financial losses.
Debt financing: Using borrowed money to fund investments. More debt means higher potential gains but also higher risk if things go wrong.
Cluster: A group of interconnected GPUs or servers working together on AI tasks. Larger clusters enable training bigger AI models.
Chip cycle: The pace at which new generations of processors are released and adopted. In AI, this cycle is currently 12-18 months.
Frontier model: The most capable, cutting-edge AI model available at any given time. Frontier labs like OpenAI need the latest hardware to stay competitive.
Project Stargate: A large-scale AI infrastructure initiative involving OpenAI, Oracle, and SoftBank, originally announced at the White House in early 2025.
