For three years, Nvidia owned the AI chip game. Every major tech company—Google, Meta, Amazon, Microsoft—basically had no choice but to buy their GPUs. It was like being forced to use one airline because nobody else had planes.
But here’s the thing: the economics just changed.
Training AI models? That’s expensive, sure. But the real money pit is *inference*—all those billions of times people actually *use* these models every single day. At that scale, even tiny inefficiencies become massive, recurring expenses. So Big Tech did what Big Tech does: they decided to build their own chips instead of renting Nvidia’s.
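To see why tiny inefficiencies matter, here's a back-of-envelope sketch. Every number below is a hypothetical placeholder (the article gives no per-query figures); the point is only how a modest per-query saving compounds at hyperscaler volume.

```python
# Back-of-envelope inference economics. All constants are HYPOTHETICAL
# illustrations, not figures from any vendor or the article.

QUERIES_PER_DAY = 1_000_000_000   # assumed: 1B inference requests/day
COST_PER_QUERY = 0.0004           # assumed: $0.0004 per request on rented GPUs
EFFICIENCY_GAIN = 0.30            # assumed: 30% cheaper per query on a custom ASIC

def annual_cost(queries_per_day: float, cost_per_query: float) -> float:
    """Yearly inference bill at a constant daily query volume."""
    return queries_per_day * cost_per_query * 365

gpu_bill = annual_cost(QUERIES_PER_DAY, COST_PER_QUERY)
asic_bill = annual_cost(QUERIES_PER_DAY, COST_PER_QUERY * (1 - EFFICIENCY_GAIN))

print(f"GPU bill:  ${gpu_bill / 1e6:,.0f}M/yr")
print(f"ASIC bill: ${asic_bill / 1e6:,.0f}M/yr")
print(f"Savings:   ${(gpu_bill - asic_bill) / 1e6:,.0f}M/yr")
```

Under these made-up assumptions, a 30% per-query edge is worth roughly $44M a year at 1B queries/day — and the saving scales linearly with volume, which is exactly why it's worth designing a chip for.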
**The Problem With Doing Everything**
Nvidia’s GPUs are like Swiss Army knives. They’re powerful, flexible, and can handle basically anything you throw at them—training models, running games, rendering 3D stuff, simulating physics. That versatility made them the backbone of the AI boom.
But here’s the catch: a tool designed to do everything isn’t optimized for *anything*.
Enter custom chips—Application-Specific Integrated Circuits (ASICs). These aren’t jacks-of-all-trades. They’re built for one job: running AI inference. Less flexible? Sure. But the payoff is massive: better performance-per-watt and far lower operating costs at hyperscaler scale.
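“Performance-per-watt” sounds abstract, so here’s what it buys you in practice: the electricity cost of generating tokens. The throughput and power figures below are invented for illustration only — they are not real GPU or ASIC specs.

```python
# Electricity cost per million generated tokens at steady state.
# Throughput/power numbers are HYPOTHETICAL, chosen only to show the mechanism.

def cost_per_million_tokens(tokens_per_sec: float, watts: float,
                            usd_per_kwh: float = 0.08) -> float:
    """Power cost to generate 1M tokens on one accelerator."""
    seconds = 1_000_000 / tokens_per_sec      # time to emit 1M tokens
    kwh = watts * seconds / 3_600_000         # watt-seconds -> kWh
    return kwh * usd_per_kwh

# Assumed: a general-purpose GPU at 10k tokens/s drawing 700W,
# vs. an inference ASIC at 12k tokens/s drawing 350W.
gpu = cost_per_million_tokens(tokens_per_sec=10_000, watts=700)
asic = cost_per_million_tokens(tokens_per_sec=12_000, watts=350)

print(f"GPU:  ${gpu:.5f} per 1M tokens")
print(f"ASIC: ${asic:.5f} per 1M tokens")
```

With these placeholder numbers, the ASIC generates tokens at well under half the power cost — a gap that barely registers on one chip and dominates the P&L across a fleet of them.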
**The Builders Are Cashing In**
Two companies are leading this charge: **Broadcom** and **Marvell**. They’re the architects in the middle of the value chain. Big Tech brings the specs, these guys do the engineering, and Taiwan Semiconductor manufactures the final product.
And the deal flow in the last two weeks? Staggering.
Google locked in Broadcom through 2031 for custom AI chips. Anthropic—the Claude maker—just committed to nearly *quadrupling* its compute capacity using Google’s custom TPUs instead of Nvidia’s GPUs. Meta extended its partnership with Broadcom through 2029 and is already paying them $2.3 billion annually for chip design. OpenAI, the company that basically made Nvidia indispensable, is now building its own custom chip with Broadcom for 2027 deployment. And Google is *also* talking to Marvell about designing two new AI chips.
This isn’t a trend. This is a structural shift.
**The Real Winners**
Broadcom is on track to capture roughly 60% of the custom ASIC market by 2027. Marvell is targeting 20-25%. Together, they’re dividing a category growing nearly *three times faster* than the GPU market they’re replacing.
But it’s not just about the chip designers. The entire ecosystem benefits:
– **ARM Holdings**: Most of these custom chips license ARM’s architecture for their cores. It’s the toll road of the custom silicon revolution.
– **Synopsys & Cadence**: They make the design software. More custom chips = more licenses.
– **Taiwan Semiconductor**: They manufacture everything. No TSMC, no custom silicon revolution.
**The Bottom Line**
The original thesis was simple: Big Tech would eventually stop buying compute and start owning it. That shift is happening *now*. And the companies positioned at the center of this buildout are looking at a combined addressable market north of $700 billion annually.
The last time the semiconductor industry shifted this decisively, it minted a generation of winners. Nvidia was one of them. The next chapter is being written right now—and the pen is in different hands.