The Coming Data Center Shakeout
Sometime in the 2030s we’ll be in a post-GPU era, but today’s data centers are optimized for GPU compute.
In today’s free post, I look at what happens to AI data centers optimized for GPUs once we enter a post-GPU compute era sometime in the 2030s.
If you like what you read here and you’re not yet subscribed, consider subscribing. Most of my posts are free, though deeper dives tend to be paid.
For the past five years, we’ve been building data centers like it’s the new railroad boom. Billions in equity and debt have flowed into hyperscale campuses, all designed around one thing: GPUs that generate enormous amounts of heat.
These are not the retrofitted office parks of the early cloud days. Today’s AI centers run 40–80 kilowatts per rack, cooled by direct-to-chip liquid loops, backed by on-site substations, and priced like perpetual motion machines. Everyone from REITs to sovereigns to pension funds has piled in, underwriting twenty- to thirty-year asset lives off the back of three-year hardware cycles.
And now, with near inevitability, a new class of post-GPU compute substrates is coming—cooler, denser, and potentially incompatible with the environments we’ve just spent hundreds of billions building.
This creates a problem. Data centers last 30 years. GPUs don’t. And we are looking at a duration mismatch that’s about to punch holes in a lot of capital structures.
AI Data Centers Are Not General-Purpose Buildings
The current generation of AI data centers is built for heat. Literally. The GPU clusters that power frontier models like GPT-4, Gemini, and Claude throw off a staggering amount of thermal energy: a Hopper-class GPU dissipates up to 700 watts, and the upcoming Blackwell parts will dissipate close to 1,000 watts. Removing that heat safely and efficiently is the defining design constraint.
That’s why these facilities are engineered around dense rack power, heavy-duty liquid cooling, and high-amp electrical distribution. The mechanical-electrical-plumbing systems that support this architecture can account for 10–20% of total capex. The rest is land, shell, substation, and fiber.
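To see how those per-chip numbers become a building-level constraint, here is a quick back-of-the-envelope sketch. The eight-servers-per-rack, eight-GPUs-per-server layout is my illustrative assumption, not a statement about any specific product.

```python
# Back-of-the-envelope rack heat, using the per-GPU figures above.
# Layout is an illustrative assumption: 8 servers per rack, 8 GPUs per server.
GPUS_PER_SERVER = 8
SERVERS_PER_RACK = 8

for name, tdp_watts in [("Hopper-class", 700), ("Blackwell-class", 1000)]:
    gpu_heat_kw = SERVERS_PER_RACK * GPUS_PER_SERVER * tdp_watts / 1000
    print(f"{name}: ~{gpu_heat_kw:.0f} kW of GPU heat per rack")

# Hopper-class: ~45 kW, Blackwell-class: ~64 kW -- and that is before CPUs, NICs,
# and power-conversion losses, which is why racks land in the 40-80 kW band
# and need direct-to-chip liquid cooling.
```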
But here’s the catch: those thermal loads are temporary. The physics of transistor design is changing. Whether it’s optical interconnects, analog in-memory compute, superconducting logic, or something else entirely, post-GPU silicon will almost certainly run much cooler. The chip roadmap is converging toward lower heat flux, not higher. And this isn’t science fiction: Nvidia CEO Jensen Huang has spoken about the promise of photonic chips. They are not yet on the market, but we should assume they will arrive sometime within the next decade, well within the lifetime of data centers being built today.
The S-P-O Model: Who Gets Screwed
Here’s a simple model to think through stranding risk. Any GPU-optimized data center can be scored on three variables:
S = the capital tied up in GPU-specific systems (cooling, busbars, etc.).
P = the value of its power interconnect. This is the option to use the land as a grid-tied power node for any workload.
O = the option value of repurposing the site to a non-GPU workload (battery storage, hydrogen electrolysis, traditional HPC, etc.).
The risk score is: Risk = S / (P + O)
If the GPU-specific capex (S) exceeds the residual value of the site’s power and conversion options (P + O), the score goes above 1, and the site is likely economically stranded. The plumbing becomes worthless, and the equity gets wiped.
If P + O is larger than S, the site has strategic value even if the GPUs vanish. It can be retrofitted or flipped to another high-density use case. The score stays below 1. The asset survives.
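To make the arithmetic concrete, here is a minimal sketch of the score with made-up dollar figures for a hypothetical campus; the function name and every number below are illustrative assumptions, not data from any real deal.

```python
def stranding_risk(S, P, O):
    """Risk = S / (P + O). Above 1.0 the site is likely stranded; below 1.0 it survives."""
    return S / (P + O)

# Hypothetical campus, all figures in $M (illustrative only)
S = 150   # GPU-specific capex: liquid cooling loops, busbars, high-amp distribution
P = 220   # value of the grid interconnect and secured power rights
O = 60    # option value of flipping to batteries, hydrogen, or traditional HPC

print(f"Risk score: {stranding_risk(S, P, O):.2f}")  # 0.54 -> below 1, the asset survives
```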
How Big Is the Problem?
Run this model across North American GPU-optimized campuses and you get a rough but useful picture. Roughly 150 qualifying data centers have been built since 2021:
~$160–200B in total build cost
$15–20B of GPU-specific capex at risk of being stranded if post-GPU compute shows up by 2030
Concentrated in secondary markets with cheap land and replicable power (Iowa, Quebec, West Texas)
This isn’t an extinction-level event for the sector. But it’s a meaningful write-down: big enough to torpedo mezzanine lenders, PE-backed GPU hosts, and REIT expansion plays that leaned too far out over their skis. It also reshapes who gets to play next.
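As a sanity check on those proportions, here is the arithmetic; the dollar ranges are the estimates above, the calculation is mine.

```python
# Stranded GPU-specific capex as a share of the total buildout, using the ranges above.
total_build_cost_bn = (160, 200)  # ~$160-200B across ~150 campuses since 2021
stranded_capex_bn = (15, 20)      # GPU-specific systems at risk by 2030

low = stranded_capex_bn[0] / total_build_cost_bn[1]
high = stranded_capex_bn[1] / total_build_cost_bn[0]
print(f"~{low:.1%} to {high:.1%} of total build cost at risk")  # ~7.5% to 12.5%
```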
Who’s Left Standing?
The capital stack tells the story.
The players who took tech risk for 20% IRRs are eating the write-downs. The players who financed land, power, and long-lived structures are fine. The lesson is clear: the only thing you can underwrite with confidence is grid-tied real estate. Everything else is just a workload du jour.
The New Playbook: Power First, Compute Second
If you're sitting on long-duration capital, say at a sovereign wealth fund, this is where you feast. The next silicon cycle won’t be financed by GPU host startups or tech-forward REITs. It'll be financed by core infrastructure funds, energy majors, and sovereigns that already underwrite pipelines, substations, and nuclear plants. The compute workload will change. The power rights won’t.
So what do you do?
You invert the underwriting model. You stop asking “how much GPU rent can I get today?” and start asking:
Is the interconnect rare?
Can the shell be gutted and reused?
Is the cooling system modular?
Can the site flip to batteries, hydrogen, or immersion-mined crypto with modest retrofit capex?
You treat the white space like rolling stock and the grid tie like a long-dated call option on the future of compute.
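If you wanted to turn that checklist into an actual screen, a crude version might look like the sketch below. Every field name and threshold is an illustrative assumption of mine, not an underwriting standard.

```python
from dataclasses import dataclass

@dataclass
class Site:
    interconnect_mw: float          # secured grid interconnect capacity
    years_to_replicate_power: float # time a rival would need to match the power rights
    shell_reusable: bool            # can the building be gutted and refitted?
    cooling_modular: bool           # can the liquid loops be swapped without structural work?
    retrofit_capex_per_mw_m: float  # $M per MW to flip to batteries, hydrogen, or other loads

def worth_underwriting(site: Site) -> bool:
    """Power and flexibility first, today's GPU rent second. Thresholds are illustrative."""
    rare_interconnect = site.interconnect_mw >= 100 and site.years_to_replicate_power >= 5
    cheap_to_flip = site.retrofit_capex_per_mw_m <= 1.0
    return rare_interconnect and site.shell_reusable and site.cooling_modular and cheap_to_flip

candidate = Site(interconnect_mw=300, years_to_replicate_power=7,
                 shell_reusable=True, cooling_modular=True, retrofit_capex_per_mw_m=0.8)
print(worth_underwriting(candidate))  # True -> underwrite it as a power asset, not a GPU hotel
```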
The Bottom Line
What we’ve built over the past five years isn’t worthless, but it’s not sacred, either. We’ve just completed a GPU-centric buildout that assumes high heat, high density, and short hardware cycles will define the next 30 years. They won’t. Silicon will change. Power constraints will persist. Compute will decentralize and fragment.
The durable value sits in power, land, and flexibility.
Everything else will be ripped out and replaced.
If you’re holding a checkbook in 2025–2030, that’s your edge.
You’re not buying GPUs.
You’re buying options on what comes next.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: