AGI Needs Infrastructure—For Now
The only path to AGI today runs through copper, concrete, and kilowatt-hours
AGI will not be born in a laptop. It will be forged in substations, datacenters, and power corridors. The real bottleneck on the road to artificial general intelligence isn’t algorithms. It’s amperage.
This post is a bit different from my normal fare. I have written quite a few posts arguing that the real binding constraint on the road to AGI is not scale or data but something physical and material: land, data centers, power generation, transmission lines, electrical substations and transformers, permits, cooling equipment, and so on. What follows is a series of rebuttals to that claim, along with my responses to those rebuttals. At the end of the day, the accelerationists’ claims that ‘scale is all you need’ seem, to me, more metaphysical than realistic. None of their claims violates physics, but we don’t yet know how to do any of the things they say will create AGI. I am thus left with the conclusion that material, physical constraints remain the principal bottleneck on the road to AGI.
The New Orthodoxy
A new orthodoxy has taken hold among AI pragmatists: that artificial general intelligence won’t emerge from scaling alone. It must be built, like a dam or a refinery. In this view, AGI is not a software artifact but an industrial one, born not in notebooks but in substations. Intelligence, we’re told, is thermodynamics: FLOPs per kilowatt-hour, not tokens per second.
This critique has teeth. It punctures the metaphysical haze surrounding AGI discourse and drags the conversation into the realm of transformers (the electrical kind). But it also risks oversteering. Many of its most vocal proponents mistake today’s engineering reality for a timeless constraint. They don’t just say that AGI currently requires vast infrastructure. They imply that it must.
There are rebuttals to this claim. Some are wishful. Some are theoretical. A few are formidable. All of them, however, buckle under scrutiny when treated as claims about present-day capabilities rather than future aspirations.
Let’s dissect the strongest counterarguments to infrastructure determinism, and evaluate how real they are.
1. The Brain as Proof of Concept
Rebuttal: The brain runs on 20 watts and performs feats no LLM can match. If AGI were a matter of energy input and real estate, human intelligence wouldn’t exist. The current industrial-scale approach is a brute-force workaround, not a requirement.
Reality check: True, but irrelevant. The brain is analog, recurrent, massively parallel, and optimized by evolution over hundreds of millions of years. We don’t understand its encoding, memory structure, or learning algorithms. Neuromorphic computing is still stuck in toy domains. Invoking the brain as proof of concept is like citing photosynthesis to solve energy poverty. Yes, it's efficient, but we don’t know how to build it, we can’t incrementally approach it, and we’re not even sure what problem it’s solving.
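To make the gap concrete, here’s a back-of-the-envelope sketch. The ~50 GWh figure for a frontier training run is an assumed ballpark, not a reported number; the point is the order of magnitude, not the precision:

```python
# Back-of-the-envelope: biological vs. industrial "training" energy.
# ASSUMPTION: ~50 GWh for a frontier-model training run (a circulating
# ballpark, not a confirmed figure); a 20 W brain maturing over 20 years.

BRAIN_WATTS = 20
HOURS_PER_YEAR = 365 * 24                        # 8760 hours

brain_wh = BRAIN_WATTS * 20 * HOURS_PER_YEAR     # ~3.5 MWh over 20 years
training_run_wh = 50e9                           # assumed 50 GWh

print(f"Brain, 20 years: {brain_wh / 1e6:.1f} MWh")
print(f"Frontier run   : {training_run_wh / 1e9:.0f} GWh")
print(f"Energy gap     : ~{training_run_wh / brain_wh:,.0f}x")
```

Roughly four orders of magnitude. That is exactly why the rebuttal is technically correct, and exactly why it doesn’t help: the gap is real, but nobody knows how to close it.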
Verdict: Technically correct. Operationally inert.
2. Emergent Efficiency Through Scale
Rebuttal: Larger models don’t just get more capable. They get more efficient per unit of useful cognition: GPT-4 extracts far more capability per parameter and per training token than GPT-2 did. Scaling, paradoxically, may be a path to efficiency.
Reality check: There’s no evidence this efficiency trend continues indefinitely. Training costs have ballooned. Inference costs remain high. We’ve seen qualitative gains, yes, but not a flattening of infrastructure demands. The models are smarter per FLOP, but they consume more FLOPs. No known scaling law promises declining energy or hardware footprints.
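To put numbers on this, here’s a minimal sketch using the published Chinchilla scaling-law fit (Hoffmann et al., 2022) and the standard 6ND compute estimate. The fit is real, but treat the outputs as illustrative, not predictive:

```python
# The "smarter per FLOP, but more FLOPs" dynamic, via the Chinchilla fit:
# L(N, D) = E + A/N^alpha + B/D^beta  (Hoffmann et al., 2022 coefficients).

E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def compute_optimal_run(n_params: float, tokens_per_param: float = 20.0):
    """Loss and total training FLOPs for a compute-optimal training run."""
    d_tokens = tokens_per_param * n_params       # Chinchilla rule of thumb
    loss = E + A / n_params**ALPHA + B / d_tokens**BETA
    flops = 6 * n_params * d_tokens              # standard 6ND estimate
    return loss, flops

for n in (1e9, 1e10, 1e11, 1e12):                # 1B -> 1T parameters
    loss, flops = compute_optimal_run(n)
    print(f"{n:.0e} params: loss ~ {loss:.2f}, training FLOPs ~ {flops:.1e}")
```

Under this fit, loss creeps from roughly 2.6 down to 1.8 while training compute grows a millionfold. Every point of ‘intelligence’ gets cheaper; the total bill still explodes.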
Verdict: Appealing narrative. Empirically ungrounded.
3. Infrastructure Will Catch Up, as It Always Does
Rebuttal: Breakthroughs come first; infrastructure follows. The internet, electricity, and aviation all outpaced their initial support systems. Why should AGI be different?
Reality check: Because the grid doesn’t scale at internet speed. You can’t spin up an electrical transformer the way you deploy a digital transformer. Interconnection queues are years long. Environmental permits drag. Silicon capacity is finite. Physics isn’t agile.
Yes, Microsoft and Google are spending billions, but that’s the point: even they are bottlenecked by the lagging supply of energy, land, and transmission capacity. This isn’t a mindset issue. It’s a thermodynamic one.
Verdict: Historically seductive. Logistically naïve.
4. AGI Will Build Its Own Infrastructure
Rebuttal: Once we reach AGI, it will recursively improve itself, including the ability to design chips, optimize code, and build physical infrastructure. The bootstrap cost is high, but once crossed, the system becomes self-sustaining.
Reality check: Recursive self-improvement remains entirely hypothetical. No existing model demonstrates autonomous planning, tool use, or embodiment at the level needed to manage industrial systems. AGI won’t 3D-print its own fabs.
The dream of recursive self-improvement collapses diverse domains into a single axis called ‘intelligence.’ But managing supply chains, lobbying for permits, or constructing a transformer farm are not cognitive puzzles. They’re embodied, negotiated, multi-agent processes. Even if such autonomy emerges, it won’t render infrastructure irrelevant. It will simply become AGI’s problem, too.
Verdict: Theologically interesting. Physically implausible.
5. Smarter Architectures Will Make Infrastructure Moot
Rebuttal: Transformers are inefficient. Intelligence may ultimately emerge from sparse, modular, or neuromorphic systems that mimic biological brains or distributed systems. If we find the right architecture, AGI might not require industrial-scale compute at all.
Reality check: True, but entirely speculative. No such architecture exists today that can match the generality, scalability, and trainability of transformers. Mixture of Experts reduces cost per inference, but not total infrastructure burden. Neuromorphic hardware is promising but unproven. This rebuttal is a hope, not a strategy.
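A toy calculation makes the Mixture-of-Experts point concrete. All numbers below are illustrative assumptions, not any real model’s specifications:

```python
# Toy dense-vs-MoE comparison: MoE cuts per-token FLOPs, but every expert
# still has to live in (expensive, power-hungry) accelerator memory.
# ASSUMPTION: illustrative sizes only, not any real model's configuration.

TOTAL_PARAMS = 1e12           # a hypothetical 1T-parameter model
ACTIVE_FRACTION_MOE = 2 / 16  # e.g. 2 of 16 experts routed per token

def flops_per_token(active_params: float) -> float:
    return 2 * active_params  # ~2 FLOPs per active parameter per token

dense = flops_per_token(TOTAL_PARAMS)
moe = flops_per_token(TOTAL_PARAMS * ACTIVE_FRACTION_MOE)

print(f"Dense: {dense:.1e} FLOPs/token")
print(f"MoE  : {moe:.1e} FLOPs/token ({dense / moe:.0f}x cheaper per token)")
# Both configurations still pin ~1T parameters in memory, so the datacenter
# footprint (chips, power, cooling) barely moves.
```

Per-token compute drops eightfold; the fleet of accelerators holding the weights does not.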
Verdict: Architecturally plausible. Practically nonexistent.
The Hard Truth: The Materialists Win. For Now
All the rebuttals to infrastructure determinism share a structural weakness: they rely on capabilities we do not yet possess. They point toward desirable futures but offer no path to reach them without the very infrastructure they attempt to deny.
The materialist critique may sound dour, but it has one virtue none of its opponents can claim: it is already true. You cannot train GPT-5 without multi-megawatt datacenters. You cannot deploy AGI inference at global scale without rewriting the energy equation.
AGI isn’t a breakthrough. It’s a buildout.
If we ever reach general intelligence, it will not emerge from clever code or spontaneous sparks of insight. It will rise from transformers humming in concrete bunkers, powered by turbines and cooled by aquifers, shaped by permits, zoning fights, and supply chains. The intelligence may be artificial, but the path to it is brutally real.
Let me add one more point:
6. AGI Needs Less New Infrastructure Than You’d Think
This post has rightly emphasized the massive barriers to entry in AGI research: it’s not just about GPUs, but also data centers, power, land, and regulatory approvals. The scale required rules out almost everyone.
But a handful of companies have already cleared that hurdle. Google, for example, drew an average of around 2.74 GW of power in 2023 (https://en.wikipedia.org/wiki/Google) and likely surpassed 3 GW in 2024. Microsoft, Amazon, and Meta aren’t far behind. These companies have a track record of solving hard infrastructure problems, and they’re not slowing down.
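The arithmetic behind that figure is straightforward. Assuming roughly 24 TWh of annual electricity use (the total consistent with the quoted average; treat it as an approximation), the conversion looks like this:

```python
# Convert an annual energy total into an average continuous power draw.
# ASSUMPTION: ~24 TWh for Google in 2023, consistent with the ~2.74 GW average.

annual_energy_twh = 24.0
hours_per_year = 365 * 24                       # 8760 hours

avg_power_gw = annual_energy_twh * 1e12 / hours_per_year / 1e9
print(f"Average draw: {avg_power_gw:.2f} GW")   # ~2.74 GW
```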
So don’t worry about needing vast infrastructure upgrades for AGI; the top companies are already there. They’re just wasting most of that capacity on non-AGI workloads. The moment an internal prototype convinces a Pichai or a Nadella that the finish line is in sight, expect a sharp pivot: cloud prices will spike, GPUs and hardware accelerators will vanish from the market, and global compute shortages will hit as these companies redeploy their enormous infrastructure toward the ultimate business advantage.