The AI capex cycle: doom, discipline, or differentiation?
Simple math highlights the risks, but utilization, margins, and financing will decide who compounds and who gets stranded
A recent piece by the Chief Investment Officer of Praetorian Capital, an investment fund, made waves with a stark claim: “Simple math says the AI investment boom is doomed.” His argument is clean: with $400B in data center capex forecast for 2025 and only ~$15-20B in visible AI revenues today, the economics can never work. The industry is destined to follow the well-worn path of fiber, shale, or shipping: glorious capacity buildouts followed by equity destruction.
This kind of warning should not be dismissed lightly. Coming from a CIO, it reflects real capital-cycle intuition. And he’s right on the essentials: AI infrastructure buildouts are enormously capital-intensive, they refresh far faster than traditional infra, and they carry the same risks of stranded assets and overcapacity that have wrecked past booms. If you are a cautious allocator, you should be worried about the mismatch between depreciation clocks and revenue visibility.
But the leap from “this is risky” to “this is doomed” depends on simplifications that don’t survive contact with how AI infrastructure actually works. His simple math is elegant, but it leaves out most of the economic stack. The reality is more complex, and far more interesting.
Where the CIO is right
Let’s start by giving him full credit. His analysis is directionally correct in three important ways:
The arms race is relentless. Nvidia’s cadence, from Hopper to Blackwell to Rubin, means GPUs refresh every 3-4 years. Buildings and power infrastructure, by contrast, run on 10-30 year clocks. That mismatch is dangerous.
Capex can outrun monetization. Telecom fiber, shale oil, and LNG trains all created real-world utility while destroying equity. AI infra can absolutely follow that script.
Gross-margin compression at the utility layer is real. Renting raw GPU hours will structurally resemble cloud infrastructure-as-a-service (IaaS): capital intensive, competitive, thin margins.
On these points, his piece is a valuable warning shot. Investors should never assume that exponential demand automatically equals attractive returns.
Where the simple math is incomplete
Four key omissions matter.
Depreciation ≠ economic life
The author assumes straight-line depreciation: 25% of capex in 30-year buildings, 40% in 10-year MEP systems, 35% in GPUs with a 4-year life. That arithmetic yields ~13.6% effective depreciation, or ~$54B per year on $400B of capex.¹
But that’s an accounting schedule, not a real-world cash profile. Operators routinely stretch useful life: GPUs start at frontier training, then cascade into inference, fine-tuning, ETL jobs, and finally secondary markets. Fully depreciated chips still earn cash, often for years. Hyperscalers treat fleets as portfolios, not one-and-done cohorts. Depreciation clocks are blunt; economic clocks are flexible.
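To see why the distinction matters, here is a minimal sketch in Python. Every number (cohort size, yield curve) is a made-up assumption for illustration, not a reported figure; the point is only that book value hits zero years before the cash does.

```python
# Minimal sketch with hypothetical numbers: a GPU cohort is written off on a
# 4-year straight-line schedule, but keeps generating cash as it cascades from
# frontier training to inference, batch/ETL work, and eventually resale.
capex = 1_000.0  # $M for a hypothetical cohort

# Hypothetical gross cash yield by year of life, as a fraction of original capex
cash_yield = [0.45, 0.40, 0.30, 0.22, 0.15, 0.10, 0.06]  # years 1-7

book_value = capex
cumulative_cash = 0.0
for year, y in enumerate(cash_yield, start=1):
    depreciation = capex / 4 if year <= 4 else 0.0  # straight-line over 4 years
    book_value = max(book_value - depreciation, 0.0)
    cumulative_cash += capex * y
    print(f"Year {year}: book value ${book_value:,.0f}M, cumulative cash ${cumulative_cash:,.0f}M")
```

Under these assumptions the cohort is fully written off by year 4 but keeps earning through year 7, which is the gap between the accounting clock and the economic one.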
“AI revenue” is mis-specified
The CIO anchors on visible AI line items (~$15-20B) and concludes there can never be enough revenue. But AI value doesn’t show up as a neat SKU. It’s embedded: higher ad engagement at Google and Meta, stickier SaaS seats at Microsoft, automated support at Salesforce, faster developer velocity across the stack. The right denominator isn’t AI sales; it’s something like incremental gross cash yield per compute-hour. That’s not a single line item in financial statements, and it’s much larger than today’s API revenue.
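As a toy illustration of that denominator, the metric might look something like the sketch below. Both inputs are hypothetical placeholders, since neither figure is disclosed as a line item anywhere.

```python
# Hypothetical sketch of "incremental gross cash yield per compute-hour".
# Neither input is a disclosed figure; both are placeholders for illustration.
incremental_gross_profit = 30e9   # $: extra ad, SaaS, and support gross profit attributed to AI
gpu_hours_consumed = 10e9         # GPU-hours burned to produce it

yield_per_hour = incremental_gross_profit / gpu_hours_consumed
print(f"Incremental gross cash yield: ${yield_per_hour:.2f} per GPU-hour")
# Compare against the all-in cost of a GPU-hour (depreciation + power + opex)
# to judge whether the fleet earns its keep.
```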
Margin isn’t capped at 25%
Assuming a 25% gross margin ceiling bakes in failure. Raw GPU resale may resemble a 25% utility margin, but the AI stack doesn’t end there. Platforms, copilots, vertical applications, and marketplaces capture margins in the 60-80% range. The blended margin is a function of mix, not a universal constant. To ignore the higher stack is to miss where most value will accrue.
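A blended margin under assumed mix weights (all hypothetical) might look like this:

```python
# Minimal sketch: blended gross margin as a function of revenue mix across the
# stack. Mix weights and layer margins are assumptions, not forecasts.
mix = [
    ("raw GPU rental (utility layer)", 0.40, 0.25),   # (label, revenue share, gross margin)
    ("platforms / managed services",   0.35, 0.60),
    ("copilots / vertical apps",       0.25, 0.75),
]

blended_margin = sum(share * margin for _, share, margin in mix)
print(f"Blended gross margin: {blended_margin:.0%}")  # ≈ 50% under these assumptions
```

Shift the mix toward the application layer and the blended margin rises; shift it toward raw rental and it falls toward the utility floor.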
Vintage-by-vintage breakeven is the wrong frame
The article treats each annual capex cohort as needing to repay itself via same-year AI revenue. But hyperscalers run portfolios of cohorts: GPUs don’t disappear after four years, and they don’t need to be paid back in isolation. Capacity cascades, utilization improves, and risk is financialized via take-or-pay contracts and GPU-hour forwards. You don’t judge an airline fleet by whether the 2025 plane pays for itself by 2029; you judge the portfolio.
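A stylized way to see the portfolio view: fleet cash in any year is a sum over cohorts of different ages, each at a different point on its yield curve. The capex units and yields below are arbitrary illustrations; only the structure matters.

```python
# Stylized portfolio view: each year's cash is the sum of contributions from
# every cohort deployed so far, not one vintage racing its own clock.
# Capex units and the yield curve are arbitrary illustrations.
cohort_capex = [100, 110, 120, 130]            # capex units deployed in years 0-3
yield_by_age = [0.45, 0.40, 0.30, 0.22, 0.15]  # cash per unit of capex, by cohort age

def fleet_cash(year: int) -> float:
    """Cash generated in `year` by all cohorts deployed in or before that year."""
    total = 0.0
    for deployed_year, capex in enumerate(cohort_capex[: year + 1]):
        age = year - deployed_year
        if age < len(yield_by_age):
            total += capex * yield_by_age[age]
    return total

for year in range(len(cohort_capex)):
    print(f"Year {year}: fleet cash ≈ {fleet_cash(year):.1f} units")
```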
What the CIO’s frame misses structurally
Hyperscalers live by a few factors that the simple math leaves out:
Utilization ramp: Early fleets run sub-40% utilized. With better toolchains, schedulers, and middleware, utilization climbs to 70-80%. That alone changes the denominator (see the sketch after this list).
Power economics: Energy is roughly 30% of total cost of ownership. Owning generation, hedging PPAs, or colocating with stranded renewables can radically lower costs.
Residual value & secondary markets: Older GPUs cascade internally, then are resold. Even 4-5 years in, they can hold 20-40% of original cost in resale value.
Sovereign demand: Governments will underwrite capacity for national security reasons. That revenue is invisible in the simple-math view.
Financialization: GPU-hour forwards, possibly GPU futures, pre-purchase contracts, and take-or-pay structures stabilize revenue and lower the hurdle rate. That changes the ROIC math entirely.
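As a rough illustration of the utilization and power levers, here is a sketch of the all-in cost per delivered GPU-hour. Every input (capex per accelerator, power draw, prices, opex) is a placeholder assumption, not vendor or operator data.

```python
# Sketch of all-in cost per delivered GPU-hour at different utilization rates.
# All inputs are placeholder assumptions, not vendor or operator data.
ANNUAL_HOURS = 8_760

def cost_per_delivered_hour(capex_per_gpu, life_years, power_kw,
                            power_price_per_kwh, other_opex_per_year, utilization):
    """All-in cost per GPU-hour actually used, at a given utilization rate."""
    capex_per_year = capex_per_gpu / life_years
    power_per_year = power_kw * ANNUAL_HOURS * power_price_per_kwh
    total_per_year = capex_per_year + power_per_year + other_opex_per_year
    return total_per_year / (ANNUAL_HOURS * utilization)

# Hypothetical accelerator: $30k all-in capex, 4-year life, 1.2 kW at the plug,
# $0.08/kWh power, $2k/year of other opex
for utilization in (0.40, 0.70, 0.80):
    cost = cost_per_delivered_hour(30_000, 4, 1.2, 0.08, 2_000, utilization)
    print(f"Utilization {utilization:.0%}: ~${cost:.2f} per delivered GPU-hour")
```

Moving from 40% to 80% utilization roughly halves the cost of a delivered GPU-hour under these assumptions, which is why utilization is the first lever hyperscalers pull.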
A more nuanced back-of-envelope
Reframe the same $400B of 2025 capex through a blended-margin lens (a short calculation of these scenarios follows the list):
Annual depreciation: ~$54B
Revenue needed to cover depreciation:
60% gross margins (platform heavy): ~$90B
40% gross margins (mixed compute + services): ~$135B
25% gross margins (pure utility): ~$216B
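The scenarios above are simply depreciation divided by gross margin; a quick sketch:

```python
# The back-of-envelope above: revenue needed so gross profit covers ~$54B of
# annual depreciation, at different blended gross margins.
annual_depreciation = 54  # $B, from the footnote's weighted-average estimate

scenarios = [("platform heavy", 0.60), ("mixed compute + services", 0.40),
             ("pure utility", 0.25)]
for label, gross_margin in scenarios:
    revenue_needed = annual_depreciation / gross_margin
    print(f"{label} ({gross_margin:.0%} gross margin): ~${revenue_needed:.0f}B revenue")
```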
Instead of “doomed unless $480B revenue appears,” you get a range of scenarios. Some are achievable, some aren’t. The outcome depends on utilization, mix, and financial structures, not raw arithmetic.
The real failure modes
None of this means the cycle is safe. Investors should watch three tripwires:
Persistent under-utilization: if toolchains fail, GPUs sit idle.
Token-price collapse outruns elasticity: usage booms but willingness-to-pay falls faster.
Refresh stranding: new generations make old fleets uneconomic before they’re amortized.
These are live risks. They are what will determine who compounds and who gets incinerated.
How to reconcile the two views
The CIO’s piece is valuable, not as prophecy, but as discipline. It reminds us that AI infra is not a one-way bet, that the capital cycle can destroy equity, and that depreciation-vs-monetization mismatches are deadly. But “doomed” is too strong. The industry is not a monolith; some players will die, others will compound. The right question isn’t “can the math ever work?” but “under what mix of utilization, margin capture, and financing does it work, and for whom?”
If you enjoy this newsletter, consider sharing it with a colleague.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:
follow me on Twitter,
connect with me on LinkedIn, or
send an email to dave [at] davefriedman dot co. (Not .com!)
¹ I’m using a weighted average here to calculate depreciation expense from $400B capex, and my calculation of ~$54 billion of depreciation expense is greater than the author’s calculation of around $40 billion.
The weighted average is calculated in the following manner (the same arithmetic appears as a short code sketch after the steps).
Break out the asset mix:
Buildings: 25% of capex, depreciated over 30 years
Mechanical/Electrical/Plumbing (MEP): 40% of capex, depreciated over 10 years
GPUs: 35% of capex, depreciated over 4 years
Compute each category’s annual depreciation rate
For straight-line depreciation, the rate is just 1 / lifespan.
Buildings: 1 / 30 ≈ 3.33% per year
MEP: 1 / 10 = 10% per year
GPUs: 1 / 4 = 25% per year
Weight by share of capex
Now multiply each rate by its cost share:
Buildings: 25% * 3.33% = 0.83%
MEP: 40% * 10% = 4%
GPUs: 35% * 25% = 8.75%
Sum the weighted contributions
0.83% + 4% + 8.75% = 13.58% ≈ 13.6%
Apply to total capex
If 2025 capex = $400B, then estimated annual depreciation = $400B * 13.6% ≈ $54B per year.
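For anyone who wants to reproduce the arithmetic, here is the same weighted-average calculation as a short sketch:

```python
# Weighted-average depreciation rate from the asset mix described above.
capex = 400  # $B of 2025 data center capex in the example

asset_mix = {
    "buildings": (0.25, 30),  # (share of capex, straight-line life in years)
    "MEP":       (0.40, 10),
    "GPUs":      (0.35, 4),
}

blended_rate = sum(share / life for share, life in asset_mix.values())
print(f"Blended depreciation rate: {blended_rate:.2%}")                          # ≈ 13.58%
print(f"Annual depreciation on ${capex}B of capex: ~${capex * blended_rate:.0f}B")  # ≈ $54B
```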
Really insightful breakdown. I’d add that the financial viability of AI models will ultimately determine not just which infra players survive, but also whether end-users continue to adopt at scale. Right now, every sector is “subscribed out”: tools, platforms, and services all layered on top of each other. If costs can’t be aligned with clear value and sustainable pricing, adoption will stall. The winners will be those who crack utilisation, monetisation, and affordability together, not just in infra, but across the stack.
I thought Kuppy made an even bigger error than that: he compares the *total* data center capex number to *only* AI-related revenues and use cases. A good chunk of that recent data center spend (30%? 40%?) is supporting regular old-fashioned AWS/Azure/GCP workloads which continue to grow robustly even excluding all inference spend. This is in addition to the other sources of ROI you mentioned in pt #2 like better ad engagement, etc.
I've followed his writings for a while, and I don't doubt he's a good macro investor (he had terrific returns in '20-'22), but he's not really a tech analyst. He tends to think about growth tech companies as risk assets & low interest rate phenomena, see https://pracap.com/the-problem-with-ponzis/
The discussion around the prospective AI capex vs revenue cycle reminds me of the soft vs hard landing chatter regarding what the Fed was doing in 2022 and 2023. The pushback from Kuppy et al is helpful because it forces the bulls to be intellectually rigorous, but we also happen to be in a deeply anti-institutional political moment where people like to fantasize that the people running the world's most important organizations (Satya Nadella, Sundar Pichai, Jerome Powell) are corrupt morons driving us off a cliff... when in reality they probably have a lot better information with which to forecast than the average layperson does.
I increasingly anticipate that this Mag7 capex binge is going to have more or less a "soft landing" (i.e. the eventual ROIC on all this AI data center spend will be somewhere between okay and good) but we'll see. Given the tone of this post I would guess your view on that is "it depends".