The AGI Bottleneck Is Power, Not Alignment
Show me the electricity: without it you can't power AGI
I. Show Me the Electricity
The race to AGI has turned from speculative fiction into a mainstream expectation. OpenAI, DeepMind, Anthropic, and others all expect AGI to arrive relatively soon, with superintelligence arriving shortly thereafter. Forecasts like those in the recent AI 2027 scenario by Daniel Kokotajlo and team offer richly detailed narratives about geopolitics, alignment crises, and algorithmic feedback loops. But in their eagerness to dramatize the intelligence explosion, they largely ignore one thing: where the electricity is going to come from.

Power is the silent constraint. Megawatts don’t scale like model weights. Datacenters don’t boot themselves into existence. And if we’re to believe that humanity will stand up tens of gigawatts of compute capacity in just two to three years, someone has to build the substations, transmission lines, and cooling towers to match. That someone doesn’t exist in these timelines. This essay argues that energy infrastructure is the critical blind spot in current AGI narratives. Unless addressed directly, it renders most AGI-by-2027 scenarios physically implausible.
II. The Physicality of Compute
Modern AI models run on electricity-guzzling silicon. An NVIDIA H100 GPU draws around 700W under full load. Training a GPT-4-class model reportedly consumed ~2e25 FLOP; the training of future Agent-4 or Agent-5-level systems in AI 2027 calls for 1e28 FLOP or more. This isn't a marginal increase; it's a leap of roughly 500x or more in raw training compute.
In AI 2027, OpenBrain (a fictional stand-in for OpenAI) operates 100 million H100-equivalent GPUs by 2027. At full load, that's 70 gigawatts of continuous power draw. Add cooling and networking overhead and you're looking at 100+ GW—more than the peak electricity demand of the United Kingdom. Multiply that by multiple companies and Chinese counterparts like DeepCent (AI 2027’s fictional stand-in for DeepSeek, natch), and you're forecasting a planetary-scale power grab in three years flat.
This is not speculative. The scaling laws of deep learning are now well understood: more compute leads to better performance, and the frontier is moving fast. But electricity is not subject to Moore's Law. The exponential scaling of intelligence is shackled to the linear (and heavily regulated) world of energy.
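As a sanity check on the arithmetic above, here is a minimal sketch. The PUE (power usage effectiveness) multiplier is an assumed value; the essay only says "cooling and networking overhead" without giving a figure.

```python
# Back-of-envelope check of the 70 GW and 100+ GW figures above.
# The PUE (power usage effectiveness) multiplier is an assumption,
# standing in for cooling and networking overhead.

H100_WATTS = 700             # per-GPU draw under full load
GPU_COUNT = 100_000_000      # AI 2027's OpenBrain fleet, in H100-equivalents
ASSUMED_PUE = 1.45           # hypothetical datacenter overhead multiplier

it_power_gw = GPU_COUNT * H100_WATTS / 1e9
total_power_gw = it_power_gw * ASSUMED_PUE

print(f"GPU power draw:        {it_power_gw:.0f} GW")     # -> 70 GW
print(f"With assumed overhead: {total_power_gw:.0f} GW")  # -> ~102 GW
```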
III. Timeline Friction: Building Gigawatts in the Real World
There is a naïve techno-utopianism embedded in many AGI forecasts. They assume datacenters materialize like software deployments. In reality, standing up a 2 GW facility—never mind 20 GW—requires vast tracts of land, permitting, environmental review, grid interconnection, and often political capital. In the United States, new high-voltage transmission lines take 6 to 12 years from proposal to activation. Interconnection queues are jammed. The idea that OpenBrain could double its power draw from 2 GW to 4 GW between 2025 and 2026, as AI 2027 suggests, is a fantasy unless construction is already well underway today.
And what generation is feeding these datacenters? New solar farms take 2–3 years minimum. Nuclear takes a decade. Wind is subject to geographic and political constraints. Even gas turbines face permitting hurdles and local opposition. In short, if you're not already pouring concrete, your 2027 compute dreams are toast.
To be fair, there are optimistic counterpoints. One could imagine AI companies quietly signing multi-decade power purchase agreements (PPAs), building modular nuclear reactors, or siting datacenters near stranded hydroelectric capacity in places like Iceland or Quebec. But if this is happening, it's happening behind an iron curtain of silence. And it's still unclear whether such measures can scale fast enough to match AGI acceleration timelines. The burden of proof is on the accelerationists—and so far, there's little to show.
IV. The Delusion of Infinite Compute
Many AGI scenarios mistake the digital for the abstract. They imagine models scaling up purely through clever training tricks, recursive self-improvement, and algorithmic elegance. But every FLOP is a physical event. Every gradient update demands joules.
Agent-3 and Agent-4 in AI 2027 are depicted as running in hundreds of thousands of parallel copies, at 30x human speed, across fleets of datacenters. This implies not just inference power draw in the tens of gigawatts but also the need to maintain low-latency, high-throughput interconnects between clusters. Fiber, routers, switches, load balancers—all of it consumes power and space. And it all has to be cooled.
Worst of all, the scenario imagines persistent online learning. Agent-2 "never finishes learning" and continuously retrains from synthetic data. This shifts the power profile from bursty to continuous—think smelting plant, not Excel macro. It's industrial intelligence, and it needs industrial infrastructure.
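To see what a continuous profile means in energy rather than power terms, here is a rough sketch using the ~100 GW figure from Section II; the utilization factor is an assumption.

```python
# Rough annual energy implied by a continuous ~100 GW load.
# The utilization factor is an assumption; persistent online learning
# pushes it toward 1.0, which is the point of the paragraph above.

POWER_GW = 100          # continuous draw, from Section II
UTILIZATION = 0.9       # assumed average utilization over the year
HOURS_PER_YEAR = 8760

energy_twh = POWER_GW * UTILIZATION * HOURS_PER_YEAR / 1000
print(f"Annual energy: ~{energy_twh:.0f} TWh")  # -> ~788 TWh
```

For scale, total US electricity generation is on the order of 4,000 TWh per year.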
V. Geopolitics and Power as the New Compute
AGI scenarios love to talk about chip wars, but the real war will be over electricity. Energy becomes a strategic asset. Compute sovereignty becomes energy sovereignty.
China's Tianwan CDZ in AI 2027 is allegedly capable of 4 GW draw by late 2027. But the scenario doesn't explain how such a facility overcomes China's power shortages, blackouts, or cooling constraints. Nor does it reckon with how vulnerable such megaclusters are to sabotage, cyberattack, or physical targeting. Centralized superintelligence is a national security risk, but also a grid fragility multiplier.
Meanwhile, the U.S. is imagined to build the equivalent of dozens of nuclear reactors' worth of new load capacity without a single permitting bottleneck or union dispute. If only.
The chip bottleneck is real, but tractable—fabs are getting funded. The power bottleneck? No one’s talking about it. Not in corporate earnings calls, not in state energy plans, not in the open-source AI community. Yet the moment superintelligence becomes real, the question isn’t "who has the best model," it’s "who has 100 gigawatts and knows how to use them."
VI. What a Realistic AGI Infrastructure Strategy Would Look Like
If we take AGI-by-2027 seriously, then serious actors should already be:
- Signing multi-decade power purchase agreements with nuclear, hydro, and geothermal operators
- Building on-site small modular reactors at hyperscaler campuses (see: Oklo, NuScale)
- Investing in grid-scale batteries to smooth load imbalances from training runs
- Constructing private transmission corridors to bypass congested public grids
- Hardening datacenters against weather, EMPs, and cyberattack
In other words: OpenAI should look more like ExxonMobil with an ML stack, and less like a lean API startup. Until that shift happens, these timelines are speculative fiction wrapped in spreadsheets.
VII. Conclusion: The Intelligence Explosion Will Be Powered, or Not at All
The AI 2027 scenario is compelling, imaginative, and in many respects disturbingly plausible. But without gigawatts, there are no gradient updates. Without power, the intelligence explosion fizzles. The authors treat alignment as the bottleneck. They’re wrong. The bottleneck is energy.
This is not an argument for slowing down AGI. It's a demand for realism. If superintelligence is coming, it will be an infrastructural event. And until the scenarios acknowledge that, they are not foresight. They are wishful hallucinations running on zero volts.
Let’s stop imagining silicon gods conjured out of speculation. Let's start talking about substations, transmission lines, cooling systems, and yes—megawatts. Because AGI will not emerge from the void. It will emerge from the grid. And if we’re not building the grid, we’re not building the future.
> In AI 2027, OpenBrain (a fictional stand-in for OpenAI) operates 100 million H100-equivalent GPUs by 2027. At full load, that's 70 gigawatts of continuous power draw.
No, see https://ai-2027.com/research/compute-forecast
Not 100 million H100s: *global* compute is 80M H100-equivalents, and OpenBrain is using 18% of that, i.e. ~14 million H100-equivalents.
Not 70 GW: they expect 5.4 GW for the leading US AI company by Dec 2027 (see the Power Requirements section).
For the Nvidia R100/200 (2027-2028) they're expecting 1.8x the efficiency of the H100 and 6 times the speed. To match the speed of 14 million H100s, they need 2.3 million R100/R200s, which gives about 7.7 GW at a peak capacity of 3300W each. So I imagine the 5.4 GW figure is an average, not a max.
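That arithmetic is easy to reproduce; a minimal sketch, using only the figures quoted in this comment:

```python
# Reproducing the back-of-envelope numbers above (all inputs are the
# figures quoted in this comment, not independently sourced).

H100_EQUIVALENTS = 14_000_000   # OpenBrain's share (~18% of 80M global)
R100_SPEEDUP = 6                # R100/R200 assumed to be 6x the speed of an H100
R100_PEAK_WATTS = 3300          # peak per-chip draw cited above

r100_count = H100_EQUIVALENTS / R100_SPEEDUP
peak_power_gw = r100_count * R100_PEAK_WATTS / 1e9

print(f"R100/R200 chips needed: ~{r100_count / 1e6:.1f} million")  # -> ~2.3 million
print(f"Peak power draw:        ~{peak_power_gw:.1f} GW")          # -> ~7.7 GW
```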
In the AMA I asked whether they considered cooling and infrastructure as part of their costs, and they confirmed they did: https://www.astralcodexten.com/p/ama-with-ai-futures-project-team/comment/112167356. So the main objection here doesn't really apply, especially once you add T H's comment about the claimed power consumption being an order of magnitude off.