Who Really Controls OpenAI and Anthropic? The AI Mega Deals Reshaping Power

illustration of a chessboard made from glowing server racks and cloud icons, two sides labeled “OpenAI” and “Anthropic”, subtle data streams flowing into three large cloud logos



For three years, the story in AI was simple: who had the best model. OpenAI and Anthropic, Google Gemini and DeepSeek, GPT versus Claude. It felt like a leaderboard of breakthroughs and benchmark charts.

That phase is over. The real contest now is about who owns the compute, who controls the clouds, and who writes the contracts that decide which models can actually run at scale.

This post walks through the two biggest AI deals on the table today, Microsoft–OpenAI and Anthropic’s web of partnerships, and unpacks what they really mean for builders, investors, and anyone trying to understand where AI power is drifting.

You will see why the models are now just pieces. The real board is the infrastructure layer that sits underneath them.


The AI Race Has Entered Its Middle Game

The early AI race looked a lot like a model tournament. Each month brought a new release, a new paper, or a new benchmark.

Now the pattern is different. The breakthroughs keep coming, but what really moves the market are multi‑billion dollar contracts for GPUs, power, and cloud time.

You can think of it in two phases:

  • Opening phase:
    GPT vs Claude, Gemini vs DeepSeek, model size, training runs, flashy demos.
  • Middle game:
    Long‑term compute commitments, revenue sharing, exclusive API rights, multi‑cloud distribution.

Two deals sit at the center of this middle game:

  • Microsoft’s quasi‑merger with OpenAI, where equity, revenue share, and a massive Azure spend tie the two companies together.
  • Anthropic’s multi‑cloud strategy with Google Cloud, AWS, and Microsoft, where commitments to buy chips and power help keep the company independent while still accessing giant pools of compute.

If you want a broader view on how much capital is flooding into the stack, it is worth looking at why AI infrastructure spending may be a bubble, because the numbers behind these deals are already in trillion‑dollar territory.

The rest of this article breaks down how those structures work, who depends on whom, and why the real moat in AI now lives in infrastructure and distribution, not in any single model release.


Inside OpenAI’s Restructuring: When a Partner Becomes a Dependency

On October 28, 2025, OpenAI announced a deep restructuring that made official what the market already sensed: Microsoft is no longer just a customer or strategic investor. It is a core financial and distribution partner with very real control.

Different reports put the stake in the high twenties, around 26 to 30 percent, valued at roughly 135 billion dollars, which makes this one of the largest technology partnerships ever. Outlets like Fortune’s coverage of Microsoft’s equity stake in OpenAI and OpenAI’s own description of the Microsoft partnership outline the same basic picture: OpenAI is now a for‑profit structure with a nonprofit wrapper, and Microsoft sits on a large, preferred slice of that entity.

To see why this deal had to happen, it helps to start with OpenAI’s original structure and its unit economics.

Why OpenAI’s Original Structure Broke

In 2019, OpenAI created OpenAI LP, a capped‑profit subsidiary designed to balance two goals that did not sit comfortably together:

  • Raise enough money to train frontier models.
  • Keep a nonprofit mission focused on broad benefit to humanity.

The idea was elegant on paper. Investors could earn returns up to a cap, while the residual value would accrue to the nonprofit.

The economics of running ChatGPT at global scale did not match that elegance.

A few simple facts frame the problem:

  • OpenAI is losing about 2 dollars for every 1 dollar of revenue.
  • Inference costs for serving responses already exceed what they collect from users.
  • The company is burning through funding faster than it can build durable, high‑margin products.

Most of OpenAI’s revenue now flows from ChatGPT subscriptions, not from API usage. The API is the natural heart of business adoption, yet the company is still mostly consumer‑heavy, which leaves it exposed to high serving costs and lower predictability.

Even with about 800 million weekly users, only around 5 percent are paid. At this scale, small differences in per‑request cost compound into large structural losses.
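A quick back‑of‑envelope sketch makes the squeeze concrete. The 800 million weekly users and 5 percent paid share come from the figures above; the subscription price and per‑user serving cost are hypothetical placeholders, not reported numbers.

```python
# Back-of-envelope sketch of consumer-scale AI economics.
# User counts come from the article; prices and costs are hypothetical.

weekly_users = 800_000_000
paid_share = 0.05
paid_users = int(weekly_users * paid_share)   # 40,000,000 paying users

monthly_price = 20.0                          # assumed Plus-style price
monthly_revenue = paid_users * monthly_price

# Assume every weekly user costs something to serve, paid or not.
serving_cost_per_user = 1.50                  # hypothetical monthly inference cost
monthly_cost = weekly_users * serving_cost_per_user

print(f"paid users: {paid_users:,}")
print(f"revenue:    ${monthly_revenue / 1e9:.1f}B per month")
print(f"cost:       ${monthly_cost / 1e9:.1f}B per month")
```

Even with placeholder numbers, the shape of the problem is visible: all 800 million users generate cost, while only the 5 percent slice generates revenue, so small changes in per‑request cost swing the whole P&L.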

The capped‑profit structure could not reconcile the need for ever larger compute budgets with the slow path to sustainable margins, so a deeper partnership with a hyperscaler was not optional. It was survival.

How Much Control Microsoft Actually Has

Under the current structure, Microsoft’s power over OpenAI shows up in several concrete ways.

First, on the equity side:

  • Microsoft holds around 30 percent of OpenAI’s for‑profit entity.
  • That stake is valued at roughly 135 billion dollars.
  • The holding sits on top of earlier investments and extensive product integrations across Windows, Microsoft 365, and more.

Second, on the revenue side:

  • Microsoft takes 20 percent of OpenAI’s direct revenue through 2030, mostly from ChatGPT subscriptions and enterprise deals.
  • In return, Microsoft pays OpenAI around 20 percent of revenue from Azure OpenAI services and Bing AI features.

This is not a light licensing agreement. It is a shared economic engine.
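As a toy sketch of that shared engine, the two 20 percent flows described above can be netted against each other. The rates come from the article; the revenue figures plugged in below are purely hypothetical.

```python
# Sketch of the bidirectional 20% revenue share described above.
# The rates come from the article; the dollar inputs are hypothetical.

def net_flow_to_microsoft(openai_revenue: float,
                          azure_ai_revenue: float,
                          rate: float = 0.20) -> float:
    """Net dollars flowing to Microsoft under the shared arrangement."""
    from_openai = rate * openai_revenue    # Microsoft's cut of OpenAI direct revenue
    to_openai = rate * azure_ai_revenue    # OpenAI's cut of Azure OpenAI / Bing AI
    return from_openai - to_openai

# Hypothetical quarter: OpenAI books $3B direct, Azure AI services book $1B.
print(net_flow_to_microsoft(3e9, 1e9))     # 0.2 * 3e9 - 0.2 * 1e9 = 4e8
```

The point of the sketch is directional: as long as OpenAI's direct revenue outgrows Azure's AI services revenue, the net flow runs toward Microsoft.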

Microsoft now has real pull over:

  • Product direction, because alignment with Azure matters for both sides.
  • Go‑to‑market and pricing, because every price cut or increase hits the shared revenue pool.
  • Enterprise strategy, because most large corporate deployments route through Azure contracts first.

At the same time, OpenAI depends on Microsoft for colossal amounts of compute, both today and in the future. That connection runs through Azure.

illustration of two giant corporate buildings labeled “Microsoft” and “OpenAI”, connected by thick glowing data cables



Sponsor Spotlight: Rippling and the Cost of Scaling Humans

While the AI giants fight over GPUs and cloud time, most companies are still wrestling with a more basic problem: how to scale real teams without drowning in admin.

Hiring 15 people in a quarter does not sound huge. Yet if each onboarding takes around 6 hours of HR and IT time across payroll, benefits, accounts, devices, and compliance, that is already 90 hours of manual work, spread over fragmented tools that do not talk to each other.

Rippling has a very direct name for this pattern: SAD, or software as a disservice.

Rippling’s answer is a unified platform where HR, IT, and Finance all sit on one source of truth for employee data. In practice, that looks like:

  • A single system to run HR, payroll, spend, and IT.
  • The ability to hire contractors in more than 185 countries and full‑time employees in more than 80 countries.
  • Automatic provisioning and deprovisioning of laptops, accounts, and permissions based on role.
  • Real‑time visibility into spending, so finance teams are not stuck waiting for end‑of‑month expense reports.

More than 20,000 companies, including names like Cursor, Barry, Chaz.com, and Liquid Death, already treat Rippling as part of their operating system.

If you want to see whether SAD software is slowing your own organization, Rippling has a dedicated flow for that on their unified workforce platform page, and a diagnostic tool at their site, stopsad.com.


Layer 1: Revenue Share And An 865 Million Dollar Signal

The first clear signal of how deep the Microsoft–OpenAI link runs is the revenue share.

In the first 9 months of 2025, Microsoft captured 865 million dollars tied to its OpenAI partnership. That single figure carries several important messages.

  1. OpenAI is a major revenue driver for Azure
    Every dollar OpenAI earns sends a cut to Microsoft, regardless of whether OpenAI itself is profitable. OpenAI’s rapid growth feeds Azure’s top line.
  2. The flow of money funds Microsoft’s own moat
    Hundreds of millions per quarter give Microsoft cover to invest in more data centers, more GPUs, and more power contracts. That infrastructure, not any single model, is the real defensive wall.
  3. OpenAI’s unit economics get squeezed even harder
    OpenAI already spends more on inference than it brings in from many workloads. Losing another 20 percent of revenue to Microsoft makes the path to sustainable margins even steeper.

This revenue sharing looks less like a friendly profit split and more like a deep coupling of business models. OpenAI’s growth is now structurally wired into Microsoft’s cloud story.
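That 865 million dollar figure also implies a rough scale for OpenAI's own top line. If, for illustration, the full amount were Microsoft's 20 percent share (the article only says it is "tied to" the partnership, so this is an assumption, not a reported breakdown), the implied direct revenue works out as:

```python
# Illustrative only: treat the full $865M as Microsoft's 20% revenue share
# (an assumption) and back out the implied OpenAI direct revenue.

ms_take = 865_000_000
share = 0.20
implied_openai_revenue = ms_take / share

print(f"${implied_openai_revenue / 1e9:.1f}B over 9 months")  # ~ $4.3B
```

Under that assumption, OpenAI's direct revenue over the period would sit in the low single‑digit billions, which matches the picture of a fast‑growing but still loss‑making business.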


Layer 2: The 250 Billion Dollar Azure Commitment

Revenue share is only half of the equation. The other half is about guaranteed demand.

OpenAI has committed to purchasing about 250 billion dollars in Azure services over time. That number is so large that it changes Microsoft’s business at the core.

For decades, Microsoft was mainly a software seller. It is now, in practice, a renter of compute infrastructure at insane scale. OpenAI is not just a big account. It is a driver of Azure’s growth narrative.

In fact:

  • OpenAI accounts for roughly 50 percent of Azure’s revenue growth.
  • Azure has to scale its GPU, networking, and power footprint to keep up with OpenAI’s appetite.
  • Those expansions are not easy to reassign if OpenAI’s usage ever drops.

That creates a tight set of mutual risks.

For Microsoft:

  • It is hard to replace OpenAI with another customer of equal size. Even if Anthropic or DeepSeek grow quickly, it would take years to match OpenAI’s current compute demand.
  • If OpenAI’s financial structure fails, or if usage shifts to other clouds, Microsoft loses both a revenue stream and part of the AI growth story that helped lift it toward a 4 trillion dollar valuation.

For OpenAI:

  • Being tied to a single cloud makes the company fragile. Price hikes, policy shifts, or term changes at Microsoft could materially hit OpenAI’s survival odds.
  • Diversification attempts through CoreWeave and Oracle are a clear sign that OpenAI’s leadership understands this concentration risk.

This is why the OpenAI–Microsoft deal feels like a double‑edged sword. The partnership created huge upside for both sides, but now each is locked into the other in ways that are difficult to unwind.


Why Microsoft Took The Risk In The First Place

Looking backward, Microsoft’s decision to go all in on OpenAI looks brilliant. At the time, it was a defensive move.

Chasing AWS And Google

In 2019, the cloud market looked different:

  • AWS sat at around 37 percent market share.
  • Microsoft had about 29 percent.
  • Google was behind both but had DeepMind and a long track record in AI research.
  • Microsoft did not have a native AI research group with comparable weight.

By investing early in OpenAI, Microsoft bought itself a fast‑forward button. It gained access to frontier models it could not build internally, along with a partner hungry for compute that Azure could provide.

The Payoff: A Distribution Moat Around Enterprise AI

The payoff came in several layers.

  • Microsoft embedded OpenAI models into Microsoft 365, Teams, and Copilot. That turned basic productivity tools into AI‑enhanced workflows and gave Microsoft a direct distribution path to hundreds of millions of workers.
  • Azure became the primary cloud with official OpenAI API access. Any enterprise that wanted to adopt GPT models at scale had a strong reason to route workloads through Azure’s private network and compliance stack.
  • Azure’s growth accelerated. Through 2025, Azure grew at about 40 percent annually, compared with 19 percent at AWS, helped by demand for generative AI workloads.
  • Microsoft’s market cap gained more than 2 trillion dollars since the partnership began, supported in part by the story that it owned the enterprise AI distribution layer.

One simple number captures the strength of this moat: Azure holds only about 29 percent of cloud share, but is estimated to host roughly 62 percent of generative AI use cases. That is the power of having the most in‑demand models tied to your cloud.

Architectural Lock‑In: The Hidden Switch Cost

The most powerful part of the Microsoft–OpenAI arrangement is not pricing. It is architecture.

Under current terms, Microsoft has exclusive API rights to OpenAI’s models through Azure until AGI is verified by an independent expert panel. That exclusivity shapes corporate decisions in a very direct way.

If an enterprise wants to run GPT‑5 or similar models in production, it usually must:

  • Route traffic through Azure’s private network.
  • Accept Azure’s security, identity, and data residency layers.
  • Integrate with Azure services such as Cognitive Search, Functions, and Key Vault.

Applications that start on Azure OpenAI services often end up deeply tied into the rest of the stack. Moving away would mean:

  • Rewriting application code.
  • Rebuilding data pipelines and search integrations.
  • Re‑certifying compliance and security with a new cloud.

Even smaller tooling shifts can be painful. Anyone who has migrated a company from one productivity suite to another knows the feeling. One recent story on how a near‑bankrupt startup achieved $40M ARR with AI includes a real example of how deeply teams can get locked into the tools that host their AI workflows.

There is also a more subtle incentive built into this deal. Since Microsoft’s exclusivity lasts until AGI is “verified,” Microsoft has every reason to treat that bar as very high. A formal declaration of AGI would, in effect, loosen its grip on OpenAI’s distribution.

illustration of a cloud data center shaped like a chessboard, with enterprise app icons on Azure tiles and arrows showing lock‑in paths



Anthropic’s Countermove: Independence Through Multi‑Cloud

On the other side of the board sits Anthropic, which has chosen a very different path. Rather than tying itself to one cloud in exchange for capital and exclusivity, Anthropic has spread its dependencies across all three major hyperscalers.

Three Hyperscaler Deals, One Independent Model Company

Anthropic’s deals look like this.

With Google Cloud:

  • Google is a major investor and a key cloud partner.
  • Anthropic receives access to 1 million custom TPUs (tensor processing units) that will be online by 2026.
  • The partnership includes around 1 gigawatt of power, roughly equivalent to a large nuclear power plant’s output, reserved for AI workloads.

With AWS:

  • AWS has invested about 8 billion dollars into Anthropic, making it a lead financial backer.
  • AWS committed to supply 1 million Trainium 2 processors, custom chips designed for AI training.
  • In practice, this makes AWS Anthropic’s principal provider for many workloads, especially around training.

With Microsoft and Nvidia:

  • In November 2025, Microsoft and Nvidia announced a new partnership with Anthropic.
  • Anthropic agreed to spend at least 30 billion dollars on Azure for future compute needs.
  • Across all partners, Anthropic has committed to about 50 billion dollars in future compute spending.

The result is simple but powerful: Anthropic is the only foundational model provider that is first‑class on all three major clouds at once.

Enterprises can access Claude models through:

  • AWS Bedrock.
  • Google Cloud’s Vertex AI.
  • Azure AI Foundry.
  • Direct API access for custom integrations.

This multi‑cloud position comes with complexity, but it also reduces the risk that any single provider can unilaterally squeeze margins or dictate product direction.

OpenAI and Anthropic: Consumer vs Enterprise Economics

The sharpest difference between OpenAI and Anthropic is not model quality. It is the business model that sits on top.

Here is a quick comparison.

  • Revenue mix — OpenAI: about 73 percent from consumer (ChatGPT Plus/Pro), 27 percent from API and enterprise. Anthropic: around 85 percent from enterprise API, with the rest from consumer plans.
  • User base — OpenAI: roughly 800 million weekly users, about 5 percent paid. Anthropic: a smaller consumer base, focused on business usage.
  • Cloud strategy — OpenAI: deeply locked to Azure, single‑cloud at global consumer scale. Anthropic: multi‑cloud, available on AWS, Google Cloud, and Azure.
  • Workload type — OpenAI: always‑on consumer traffic, sensitive to latency and UX consistency. Anthropic: enterprise workloads that are batch‑oriented and latency tolerant.
  • Unit economics — OpenAI: high inference costs, premium Azure pricing, costs up to 200 percent of revenue on some workloads. Anthropic: better margin potential through chip diversity and cross‑cloud optimization.
  • Go‑to‑market — OpenAI: direct to consumer with a strong brand, slower B2B penetration. Anthropic: enterprise‑first, often embedded in other platforms and tools.

For OpenAI, a multi‑cloud strategy would be very hard to pull off. Global consumer scale demands a unified, always‑on infrastructure. Routing live user sessions across different providers would fragment the experience and complicate reliability.

Anthropic’s focus on enterprise APIs makes multi‑cloud possible. Batch jobs, internal tools, and asynchronous workflows can tolerate more variance in latency and routing, as long as they meet SLAs and security requirements.

This focus also lines up cleanly with the way many AI‑native startups now think about cost. A solo founder story like the cost‑effective AI SaaS using OpenAI, Anthropic, Mistral shows how a “bring your own key” approach can keep inference spend near zero while still tapping into several model providers.

Anthropic’s multi‑cloud deals are not cheap, and they still lock the company into hyperscaler infrastructure for the next decade. They do, however, keep control of the model layer in Anthropic’s hands.

split‑screen visual, left side a single giant Azure cloud tethered to one model icon labeled “OpenAI”, right side three balanced clouds linked to one model icon labeled “Anthropic”


The Real War: Owning The Infrastructure Board

Once you zoom out, the pattern across all these moves becomes clearer.

  • Microsoft and AWS do not have their own flagship foundation models that dominate usage.
  • Google is pushing Gemini hard, but also hosts other models for customers.
  • Anthropic and OpenAI build models, yet they rely on others to run those models at scale.
  • Hyperscalers supply the GPUs, power, and data centers that no one else can match easily.

The real race in AI now is control of the infrastructure where all models must run.

For OpenAI and Anthropic, that control shows up as:

  • Long‑term compute purchase commitments, measured in tens or hundreds of billions of dollars.
  • Complex revenue share deals that feed cloud growth.
  • Exclusive rights, like Microsoft’s hold over OpenAI’s APIs until AGI is declared.
  • Multi‑cloud placements that trade simplicity for independence.

For builders and product teams watching from the sidelines, the lesson is not a simple “pick a winner”. It is closer to this:

  • Power is moving down the stack, into GPUs, power contracts, and clouds.
  • Models are becoming more interchangeable, especially for many business use cases.
  • Distribution and switching costs matter as much as raw capabilities.

If you are trying to think clearly about AI as a business, it helps to pair this structural view with practical thinking on product economics and moats, like the ideas explored in building sustainable AI products beyond professional replacement.

The middle game is here. The opening fireworks are behind us. The winner will not be the flashiest demo, but the player that controls the board everyone else has to use.


Conclusion: Models Are Pieces, Infrastructure Is The Board

The story of OpenAI and Anthropic is not just about two labs and their models. It is about two different ways of living inside a world run by hyperscale compute.

On one side sits OpenAI, deeply joined with Microsoft, trading independence for scale, reach, and a massive cloud engine. On the other sits Anthropic, threading a narrower path, spreading its bets across Google, AWS, and Microsoft to keep more control at the model layer.

In both cases, the real leverage comes from who owns the data centers, the chips, and the distribution channels into enterprises. Those choices will shape pricing, access, and innovation speed for years.

As the AI race settles into its middle game, the quiet contracts for GPUs and power may matter more than any model name in a launch blog post. The board, not the pieces, will decide how this plays out.
