The Energy Layer of the AI Stack
AI’s scaling ceiling isn’t algorithms. It’s megawatts, metallurgy, and lead times.
The Rocky Mountain Institute notes that the global heavy-duty gas-turbine supply chain is jammed: OEM order books are full, spare-part inventories are thin, and lead times for both new units and major overhauls have stretched well past 2028. That makes it hard for U.S. utilities to add the 15–20 GW per year of gas “peaker” capacity they assumed would cover rising summer peaks, coal retirements, and renewables variability.
RMI argues that the cheapest near-term reliability hedge is not “more gas fast,” but (i) modular batteries, (ii) demand-side flexibility, (iii) targeted efficiency, and (iv) modest transmission upgrades that unlock latent renewable capacity already in interconnection queues.
This reinforces a claim I have made repeatedly: it is infrastructure, not algorithms, that is the principal gating factor for advances in AI.
Deeper implications
The “cheap natural-gas reflex” just failed a real-world stress test.
Silicon Valley’s mental model (“if renewables are flaky, we’ll just bolt on cheap gas peakers”) relied on turbine OEMs being infinite-elasticity widgets. They aren’t. This is a hardware supply-chain crunch eerily analogous to GPUs: few vendors, long cycle times, and heavy metallurgical IP. The lesson for AI is that every layer of your stack ultimately bottoms out in atoms with real lead times. You already learned this with H100s; now learn it with megawatts.
Opportunity: turn AI’s 30% idle time into grid-service revenue.
RMI pushes demand response, and AI workloads are uniquely suited to it. Fine-tuning and inference are latency-sensitive, but large-batch pre-training isn’t. If you containerize jobs, checkpoint relentlessly, and orchestrate across multiple regions, you can shed 200 MW on 15 minutes’ notice and get paid capacity credits that offset rising power prices. Some early-stage providers (Crusoe’s flare-gas GPUs, Google’s Flex TPU pilots) hint at the playbook; a minimal sketch of the orchestration pattern follows below.
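Concretely, the pattern is just “checkpoint often, watch for a curtailment signal, vacate.” The sketch below assumes a training loop that already persists checkpoints to durable storage; grid_curtailment_requested() is a hypothetical stand-in for whatever demand-response aggregator or ISO dispatch feed you actually subscribe to, and nothing here reflects Crusoe’s or Google’s implementation.

```python
CHECKPOINT_EVERY_STEPS = 500  # cadence: a restart should lose minutes, not hours

def grid_curtailment_requested() -> bool:
    # Hypothetical hook: poll your demand-response aggregator or ISO
    # dispatch signal here. Stubbed to False for the sketch.
    return False

def training_step(state):
    # Placeholder for one large-batch pre-training step.
    return state

def save_checkpoint(step, state):
    # Persist to durable, region-agnostic storage so the job can
    # resume in whichever region has cheap power next.
    pass

def train(state, total_steps):
    for step in range(total_steps):
        state = training_step(state)
        if step % CHECKPOINT_EVERY_STEPS == 0:
            save_checkpoint(step, state)
        if grid_curtailment_requested():
            # Vacate within minutes: shed the load, collect the
            # capacity credit, let the scheduler re-queue elsewhere.
            save_checkpoint(step, state)
            return step
    return total_steps
```

The whole trick is that frequent checkpoints make a 200 MW training job interruptible enough to sell as capacity, while fine-tuning and inference stay pinned to firm power.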
Expect a permitting backlash against gas-turbine microgrids at data centers.
In 2023–24 it was fashionable for hyperscalers to co-locate 300–600 MW combined-cycle plants with data centers (Iowa, Virginia). With turbines scarce, those projects seize OEM delivery slots the public sector also wants. NGOs will weaponize that queue conflict: “Why is Microsoft jumping the line?” Local officials, seeing turbine delivery slots as a public good, will attach political strings: low-carbon commitments, public-facing reliability concessions, maybe even usage caps on non-flexible AI load.
Gas scarcity strengthens the “go nuclear or go home” argument for AI.
The least controversial use case for SMRs is a captive, 24/7 critical industrial load with deep corporate pockets and a tolerance for bespoke engineering, procurement, and construction risk. That is exactly an AI training cluster. Every month gas turbines stay back-ordered is a month SMR developers inch closer to parity. Watch Microsoft’s new-build efforts with TerraPower and Google’s interest in X-energy.
Strategic counsel: internalize energy optionality as a core competence.
Just as you manage GPU supply across Nvidia, AMD, and internal ASICs, manage power across (i) grid PPAs, (ii) behind-the-meter batteries, (iii) dispatchable engines, and (iv) future SMR offtake. A board-level KPI: an “effective megawatt diversity index,” rebalanced quarterly. Tie new model-training roadmaps to physically secured incremental MWh, not just to notional cloud instances. One plausible way to make that index a number is sketched below.
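I haven’t pinned down a formal definition of the index, but an inverse-Herfindahl score over contracted capacity by source is one reasonable construction: it reads as “how many effectively independent power sources do we have?” The categories match the list above; the megawatt figures are made up for illustration.

```python
def megawatt_diversity_index(mw_by_source: dict[str, float]) -> float:
    """Inverse-Herfindahl index over contracted capacity.

    1.0 = all megawatts from a single source; N = evenly spread
    across N sources. Higher is more diversified.
    """
    total = sum(mw_by_source.values())
    shares = [mw / total for mw in mw_by_source.values()]
    return 1.0 / sum(s * s for s in shares)

# Illustrative portfolio (numbers invented):
portfolio = {
    "grid_ppa": 400.0,             # firm grid power purchase agreements
    "btm_batteries": 80.0,         # behind-the-meter storage
    "dispatchable_engines": 120.0,
    "smr_offtake_option": 50.0,    # contracted future SMR capacity
}
print(f"Diversity index: {megawatt_diversity_index(portfolio):.2f}")
# -> 2.30: roughly 2.3 "effective" sources out of 4, because the PPA dominates
```

Rebalancing quarterly then means shifting contracts until the index stays above whatever floor the board sets.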
Contrarian speculation (flagged as forward-looking)
OEM triage could create a grey market for mid-life turbines. Don’t be surprised if hedge funds start flipping refurbished LM6000 units the way brokers flip GPUs today. An AI-first energy desk might quietly corner that market.
The next liquidity crunch in AI infrastructure could be energy, not capital markets. If turbine scarcity plus winter reliability scares drive forward power above $60/MWh nationwide, cash-burning AI startups that assumed $30/MWh will hit a wall; for a 100 MW cluster running flat out, that spread is roughly $26 million a year (100 MW × 8,760 h × $30/MWh). Watch for distressed-asset rollups by energy-savvy incumbents.
Grid-aware schedulers will become a competitive moat. Today everyone focuses on model architecture; tomorrow the killer tech may be a reinforcement-learning agent that “time-slides” trillions of tokens to chase sub-$20/MWh nodal prices while meeting compute-to-market timelines.
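To make “time-sliding” concrete, here is a toy greedy stand-in for that agent: given an hourly nodal price forecast in $/MWh, it picks the cheapest hours that still finish the job by a deadline. The forecast numbers are invented, and a real system would also have to model checkpoint/migration overhead and forecast uncertainty, which is where the reinforcement learning earns its keep.

```python
def schedule_training_hours(price_forecast, hours_needed, deadline_hour):
    """Greedy 'time-sliding': run only during the cheapest forecast
    hours that still let the job finish before the deadline."""
    candidates = sorted(range(deadline_hour), key=lambda h: price_forecast[h])
    chosen = sorted(candidates[:hours_needed])  # cheapest hours, in time order
    avg = sum(price_forecast[h] for h in chosen) / hours_needed
    return chosen, avg

# Invented 24-hour nodal forecast with cheap overnight and midday troughs:
forecast = [18, 15, 14, 13, 14, 22, 35, 48, 52, 44, 30, 19,
            16, 15, 18, 28, 45, 60, 58, 40, 31, 25, 21, 19]
hours, avg_price = schedule_training_hours(forecast, hours_needed=10,
                                           deadline_hour=24)
print(hours, f"avg ${avg_price:.1f}/MWh")
# Runs 10 of 24 hours at an average ~$16/MWh, skipping the evening peak.
```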
Bottom line
The gas-turbine crunch is another reminder that AI’s growth is gated by heavy-industry chokepoints as much as by algorithms. Treat electrons the way you treat GPUs: source-diversify, hoard early, price optionality, and design your software stack to flex with physical reality. Do that, and turbine scarcity becomes a lever you can pull rather than a ceiling that stops your scaling curve cold.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: