Meta's $135B AI Bet: Inside the Infrastructure Arms Race
Meta nearly doubled its 2026 AI capex guidance to $115–135B, signaling an aggressive push to close the gap with OpenAI and Google
The Number
Meta raised its 2026 AI capital expenditure guidance to a range of $115–135 billion, up from the prior $70–75 billion. The new midpoint is nearly double what the company spent last year — roughly the annual GDP of a mid-sized economy poured into datacenters, chips, and power.
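As a quick sanity check on the "nearly double" framing, here is the midpoint arithmetic using only the ranges quoted above (the article treats the prior $70–75B guidance as roughly what Meta spent last year):

```python
# Figures are the guidance ranges quoted in this article, in $ billions.
prior_low, prior_high = 70, 75    # prior 2026 capex guidance
new_low, new_high = 115, 135      # updated 2026 capex guidance

prior_mid = (prior_low + prior_high) / 2   # 72.5
new_mid = (new_low + new_high) / 2         # 125.0

ratio = new_mid / prior_mid
print(f"midpoint ratio: {ratio:.2f}x")     # ~1.72x, i.e. "nearly double"
```

Even at the low end of the new range ($115B vs. $75B), the increase is more than 1.5x; at the high end ($135B vs. $70B) it is nearly 2x.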
The size of the number isn't the surprising part — everyone knew Meta was spending heavily. The surprising part is how openly the company is framing this as a structural bet rather than a cyclical one. Susan Li told analysts that the spend reflects a multi-year commitment to closing the capability gap with OpenAI and Google.
Where the Money Goes
Silicon
Meta's previously announced $100B multi-year AMD agreement covers MI540 GPUs and CPUs, and sits alongside its existing NVIDIA purchases and a growing internal MTIA program. The pattern across hyperscalers is the same: multi-vendor silicon, with custom chips as a backstop for the highest-volume inference workloads.
Power
The harder constraint is electricity. Meta is signing power purchase agreements (PPAs) with renewable operators, backing nuclear restarts, and co-locating new builds near power-rich grids. Capex is the visible number; megawatts are the bottleneck behind it.
Talent
Compensation packages for senior research staff continue to escalate. Meta's push into reasoning models and agentic systems requires the same scarce talent that OpenAI, Anthropic, and Google are competing for — and Meta is increasingly winning candidates it would have lost a year ago.
The Strategic Read
Meta's public-facing AI story is open-weight Llama plus an aggressive consumer product line (Meta AI, glasses, agents inside its messaging properties). The capex story underneath it is even more aggressive: build enough capacity that Meta is never compute-constrained on either training or inference, and never has to depend on a third-party API for a flagship feature.
The risk is the obvious one. If model progress plateaus or the unit economics of generative features don't close, Meta is left with a balance sheet full of depreciating GPUs and long-dated power contracts. Mark Zuckerberg has telegraphed that he's willing to take that risk.
What It Signals to the Market
For everyone selling compute, this is a strong year. For everyone competing on foundation models, the cost of staying at the frontier just got reset upward again. And for startups building on top of these models, the assumption of cheap, abundant inference is going to keep holding — at least until power becomes the limiting factor for someone other than the hyperscalers.
The 2026 AI cycle is no longer about who has the best model on a given Tuesday. It's about who can afford to keep training one in 2028.
Tags: Meta • Infrastructure • AI Economics