
OpenAI Lands on AWS as Microsoft Exclusivity Ends

April 28, 2026 · 5 min read · 935 words
Tags: AWS, OpenAI, Microsoft, AI Agents, Codex

AWS CEO Matt Garman interviewed by Ed Ludlow on Bloomberg Technology
Image: Screenshot from YouTube.

Published April 28, 2026 on Bloomberg Technology
Host: Ed Ludlow
Guest: Matt Garman, AWS

This is an AI-generated summary. The source video may include demos, visuals and additional context.


In Brief

Matt Garman became CEO of AWS in June 2024, after nearly twenty years inside Amazon's cloud business. On April 28, 2026 he sat down with Bloomberg Technology host Ed Ludlow to announce something nobody could have announced a week earlier: OpenAI's frontier models are now running on Amazon Bedrock, the AWS marketplace for AI models.

The timing was not subtle. Less than 24 hours before, OpenAI had ended its exclusive cloud arrangement with Microsoft. AWS moved into the gap immediately, with three products in preview: GPT-5.5, the OpenAI coding agent Codex, and a new joint product called Managed Agents.

Ludlow asked the right hard questions. If OpenAI is now available everywhere, what is the actual pitch for getting it through Bedrock? And how does Garman feel about Microsoft still getting paid?

What's actually new

Three things landed on Bedrock at once. They are easiest to read as a table:

| Product | What it is | Status |
| --- | --- | --- |
| GPT-5.5 | OpenAI's frontier model, the most capable one it has | Preview today; full availability in a few weeks |
| Codex | OpenAI's coding agent, which grew from 2 million to 4 million users in a matter of weeks | Preview on Bedrock |
| Managed Agents | A joint AWS-OpenAI service that handles the plumbing of running AI agents | Preview, exclusive to AWS |

The first two are the obvious news. The third one is the lever Garman keeps coming back to, and it's worth understanding why.

A "managed agent" is AWS hosting the boring, hard parts of running an AI agent for you: keeping its memory between conversations, plugging it into tools, handling failures, scaling it up. In Garman's words, it's "a complete managed agent capability," and you can only get the OpenAI version of it on AWS.
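That plumbing can be pictured as a thin harness wrapped around a model call. Here is a minimal toy sketch of the three jobs the paragraph lists, keeping memory between turns, routing tool calls, and retrying failures. The class and its interface are invented for illustration; they are not the Managed Agents API, which the interview does not describe:

```python
class ToyManagedAgent:
    """Toy sketch of an 'agent harness': memory, tool routing, retries.

    Hypothetical illustration only; not the AWS Managed Agents API.
    """

    def __init__(self, model, tools, max_retries=3):
        self.model = model        # callable: conversation history -> reply dict
        self.tools = tools        # tool name -> callable
        self.memory = []          # persists across turns: "memory between conversations"
        self.max_retries = max_retries

    def send(self, user_text):
        self.memory.append({"role": "user", "text": user_text})
        for _ in range(self.max_retries):
            try:
                reply = self.model(self.memory)   # "handling failures": retry on error
                break
            except RuntimeError:
                continue
        else:
            raise RuntimeError("model failed after retries")
        if reply.get("tool"):                      # "plugging it into tools"
            result = self.tools[reply["tool"]](reply["args"])
            self.memory.append({"role": "tool", "text": str(result)})
            reply = self.model(self.memory)
        self.memory.append({"role": "assistant", "text": reply["text"]})
        return reply["text"]
```

The point of the sketch is that none of this logic is model intelligence; it is operational scaffolding, and it is exactly the layer AWS is proposing to run for you.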

Ludlow's question: why through Bedrock at all?

If you can get OpenAI directly from OpenAI, or through Microsoft Azure, or now through AWS, why bother with Bedrock? That is essentially Ludlow's question.

Garman's answer has two parts.

Part one: customers don't actually want to choose. Most enterprise data already lives on AWS. Most enterprise applications already run on AWS. Until today, if a company wanted to use GPT-5, somebody had to set up an Azure account, move data, manage two security boundaries. Now they don't.

Part two: Managed Agents only exists here. That's the differentiator Garman keeps pointing at. The frontier model on its own is becoming a commodity — every cloud will eventually offer every major model. But the wrapper around it, the "agent harness" that turns a model into something a business can deploy, is the layer where AWS thinks it can win.

The Anthropic comparison is the one Garman uses himself. AWS made Claude easy to use through Bedrock and got a share of Anthropic's growth as a result. The bet is that the same pattern repeats with OpenAI.

The Microsoft awkwardness

Here is where the interview gets interesting. Even though OpenAI's exclusivity with Microsoft has ended, OpenAI continues to pay Microsoft under the revised terms of their relationship. Ludlow asks Garman, more or less, how he feels about being the new partner while the old one still gets a cut.

Garman is unbothered. His framing: AWS has paid Microsoft for Windows licenses since 2007 or 2008. Microsoft has benefited from cloud's growth from day one, and that's fine. SQL Server, Microsoft's database, runs on AWS too. Customers want what they want, and the clouds settle up around them.

That's a smooth answer, but it understates the strangeness of the moment. A year ago, OpenAI was Microsoft's exclusive cloud bet, and AWS was building Anthropic. Today both labs run everywhere, and the clouds are racing to be the most convenient place to use them.

The subplot Garman wants to tell

Most of Ludlow's questions are about frontier models. Garman, when given the chance, keeps changing the subject — and the change is worth noting.

He spends real time talking about three AWS products that aren't models at all:

  • Amazon Q, a productivity assistant whose desktop app launches today. It works across email, Slack, and the rest of your workday and learns over time who you write to and how.
  • Amazon Connect, a set of agentic applications for supply chain, contact centers, and healthcare.
  • Kiro, AWS's coding tool.

The pattern is that AWS is climbing what cloud people call the stack: from selling raw infrastructure (servers, storage), to databases, to AI models, and now to finished applications. That puts AWS in direct competition with Salesforce, ServiceNow, and SAP, companies AWS used to call partners.

Asked whether these will ever be a meaningful share of AWS revenue, Garman says yes. Today most of the AI revenue is frontier models on Bedrock. He thinks the application layer is the next leg.

What this means

The headline is the partnership. The deeper story is that the model layer is becoming a layer you rent, not a layer you bet on. OpenAI runs on Microsoft and AWS. Anthropic runs on AWS and Google. Open models run everywhere.

That makes the layers around the model (agent infrastructure, applications, security, data residency) the place where the cloud providers actually compete. Garman's interview, read carefully, is about that competition. The OpenAI partnership is the headline, but Managed Agents and Amazon Q are the bet.

There's also a quieter line worth catching. Asked whether AI capacity has caught up with demand, Garman says no. The world is still supply-constrained on power, chips, and memory. Every cloud, including AWS, is still selling less than customers would buy.

Glossary

| Term | Definition |
| --- | --- |
| Bedrock | AWS's marketplace for AI models: one API, dozens of models from different labs |
| Codex | OpenAI's coding agent, used for software development tasks |
| frontier model | The most capable AI model a lab has built; GPT-5.5, Claude Opus, and Gemini Ultra all qualify |
| Managed Agents | An AWS service that handles the plumbing of running an AI agent: memory, tools, failure handling |
| stack (the layers) | How cloud people describe what they sell: infrastructure at the bottom, then databases, then AI models, then applications at the top |
