Meta Just Locked Up Millions of Nvidia Chips and AMD Is Sweating

Meta and Nvidia just made every other chipmaker in the room nervous.

On Tuesday, the two Silicon Valley giants announced a sweeping multiyear, multigenerational AI infrastructure deal. Meta will deploy millions of Nvidia chips across its U.S. data centers — including current-generation Blackwell GPUs, next-generation Vera Rubin systems arriving in 2027, and, for the first time ever at scale, Nvidia’s standalone Grace CPUs. No financial terms were disclosed, but analyst Ben Bajarin of Creative Strategies estimated the deal is “certainly in the tens of billions of dollars” and expects a good chunk of Meta’s planned $135 billion AI spend for 2026 to flow directly into this Nvidia build-out.

The standalone CPU deployment is the sleeper detail here. Meta becomes the first company to run Nvidia’s Grace processors independently — not just packaged alongside GPUs in a server rack — optimized specifically for inference and agentic AI workloads. That’s a direct shot at the CPU market where AMD and Intel have traditionally dominated. Bajarin called it “affirmation of the soup-to-nuts strategy that Nvidia’s putting across both sets of infrastructure: CPU and GPU.”

The market reaction told the story immediately. Meta and Nvidia shares climbed in extended trading, while AMD stock dropped about 4%. This is the growing headache for Nvidia’s competitors: as mega-cap tech companies like Meta lock in massive, long-term supply agreements directly with Nvidia, the addressable pie for everyone else shrinks. AMD won a notable OpenAI deal in October 2025, but Meta’s latest move signals that when it comes to betting the farm on AI infrastructure, Nvidia remains the first call.

The scale of what Meta is building is staggering. The company has committed to spending $600 billion in the U.S. by 2028 on data centers and related infrastructure. It has 30 data centers planned — 26 in the U.S. — including the 1-gigawatt Prometheus site in Ohio and the 5-gigawatt Hyperion facility in Louisiana, both currently under construction. The Nvidia deal also includes Spectrum-X Ethernet networking switches and security capabilities for WhatsApp’s AI features.

For investors, the takeaway is straightforward: the AI capex supercycle isn’t slowing down — it’s consolidating. The companies with the deepest pockets are locking in the best supply for years to come, and that’s creating a winner-take-most dynamic in AI chips. If you’re holding AMD or other chip competitors, the question isn’t whether they’ll find deals — it’s whether they’ll find enough of them to keep up with a market that’s increasingly tilting toward one dominant supplier.