AI Capex: Built on Options, Priced as Certainty
The mismatch between how deals are announced and how money gets stranded
Ed Zitron published a 19,000-word jeremiad, dubbed the Enshittifinancial Crisis, about the AI buildout. It’s at its best when it treats modern tech-capitalism as a systems problem: incentives, narrative, and accounting choices cohere into a machine that converts maybe later into priced as now. His Stage-4 framing, in which companies prioritize their stock price ahead of everything else, captures the vibe of a market where price discovery increasingly feels like story selection rather than cashflow discounting.
But his piece also does the classic polemical thing: he draws connections between various distinct mechanisms–loose deal language, depreciation games, off-balance-sheet financing, VC reflexes, AI unit economics, bank liquidity signals–and collapses them into a single causal chain. The right way to sharpen his argument is to separate (1) earnings optics, (2) financing plumbing, and (3) real economic profitability, because each can break independently, and their contagion paths differ.
Start with the most concrete, least ideological claim: earnings optics. He points to hyperscalers extending the useful life of servers and network gear, lowering depreciation expense and boosting net income. He frames it as conspiracy, but if it’s a conspiracy it’s one that the hyperscalers have disclosed extensively in their financial statements. Microsoft explicitly says it increased server/network useful lives from four to six years effective FY2023. Alphabet disclosed an assessment that moved servers and certain network equipment to six years, reducing depreciation expense (e.g., $988M in Q1 2023). Meta extended server lives.
But a useful life extension can be both economically plausible and strategically convenient. The hyperscalers can justify longer server lives because software optimization, workload shifting, and a fat tail of good-enough compute use cases keep older hardware productive: once a machine is no longer competitive for frontier training, it gets redeployed to lighter inference and internal workloads rather than scrapped. At the same time, when capex surges, as it has with the AI buildout, the incentive to lengthen lives rises as well. Depreciation is the accounting lever that turns today's capex into tomorrow's cost, smoothing margins right when Wall Street is scrutinizing your AI spend.
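The arithmetic behind that lever is simple enough to sketch. A minimal illustration, with hypothetical figures that belong to no particular company, of how stretching an assumed useful life converts the same cash outlay into a smaller annual expense:

```python
# Straight-line depreciation: all figures below are hypothetical,
# chosen only to show the mechanics, not any company's actual numbers.
def annual_depreciation(cost: float, useful_life_years: int, salvage: float = 0.0) -> float:
    """Straight-line method: (cost - salvage) spread evenly over the life."""
    return (cost - salvage) / useful_life_years

fleet_cost = 40e9  # hypothetical $40B server fleet

dep_4yr = annual_depreciation(fleet_cost, 4)  # expense under a 4-year life
dep_6yr = annual_depreciation(fleet_cost, 6)  # expense under a 6-year life

# Extending the assumed life cuts the annual expense and lifts pre-tax
# income by the difference, with zero change in cash actually spent.
uplift = dep_4yr - dep_6yr
print(f"Annual pre-tax income uplift: ${uplift / 1e9:.1f}B")
```

The cash left the building either way; only the timing of when it hits the income statement moves, which is exactly why the knob is attractive during a capex surge.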
Even better for stress-testing this story: Amazon is a counterexample. It shortened the useful life of a subset of servers/networking equipment from six years to five, explicitly citing the faster pace of AI/ML technology development. That single disclosure should force a more precise thesis than "everyone is cooking the books." What's really happening is that depreciation policy is an active management variable: a knob that firms turn in either direction depending on their internal view of obsolescence and their external need to manage reported profitability. If this discretion is really the problem Zitron claims it is, the answer lies in Norwalk, home of the FASB, not in a blog post.
Now the second bucket, financial plumbing. Zitron’s strongest instinct here is about how AI infrastructure is financed, especially the migration of leverage into private credit and bespoke SPV structures. Meta’s Hyperion data center campus is a clean example of the modern move: a JV structure to finance and own/operate the campus, with Meta providing management services. Reporting around the deal emphasized that the debt and the data center wouldn’t sit on Meta’s own balance sheet. Ratings analysis also framed the scale: roughly a 2-gigawatt facility estimated around $27B to construct, plus additional capex for servers.
Call this what it is: risk laundering. Not Enron, necessarily, because disclosure can be adequate and structures can be legal, but the economic effect rhymes. Leverage exists; it's just been repackaged so the sponsor preserves headline credit metrics while the system absorbs the risk through vehicles, lenders, and structured exposures. Zitron thinks this will be a stress test of banks and private credit, which is reasonable. But a more precise claim is that AI infra is creating a maturity/optionality mismatch: long-tenor financing against assets whose economic half-life may be shorter than the debt assumes, and whose tenants often have meaningful outs if delivery is late or the economics shift.
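The mismatch is easiest to see with numbers. A hypothetical sketch, assuming straight-line loan amortization and an exponentially decaying asset value (the tenor, half-life, and loan size are all illustrative assumptions, not terms from any actual deal):

```python
# Hypothetical maturity mismatch: a long-tenor loan against a compute
# asset whose economic value decays faster than the debt amortizes.
principal = 27e9        # loan sized to construction cost (illustrative)
tenor_years = 20        # long-dated financing (assumed)
half_life_years = 5     # assumed economic half-life of the compute asset

def loan_balance(year: float) -> float:
    # Straight-line amortization, for simplicity
    return principal * max(0.0, 1 - year / tenor_years)

def asset_value(year: float) -> float:
    # Exponential decay at the assumed half-life
    return principal * 0.5 ** (year / half_life_years)

# By mid-life, the collateral has halved while the loan has barely amortized.
year = 5
gap = loan_balance(year) - asset_value(year)
print(f"Year {year}: loan ${loan_balance(year)/1e9:.2f}B "
      f"vs asset ${asset_value(year)/1e9:.2f}B, gap ${gap/1e9:.2f}B")
```

Under these assumptions the lender is under-collateralized within the first few years and stays that way for most of the tenor. The real-world escape hatches are contracted cashflows and tenant credit, which is precisely why the "meaningful outs" in those contracts matter so much.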
The third bucket is real profitability: unit economics. Here, I think Zitron is directionally right but rhetorically sloppy. Yes: many frontier model businesses have ugly gross margins and huge ongoing inference/training costs. Yes, subsidies, in the form of VC equity, cheap debt, and strategic cross-investments, can keep an ecosystem alive longer than cash flow purists expect. But his claim that there is no generative AI company with a path to profitability is too absolute to be useful. It confuses profitability under today's product/price mix with profitability as the tech diffuses, costs fall, and value capture shifts. The real question isn't whether AI can be profitable somewhere. It's who captures the profit and who gets stuck holding depreciating metal financed by brittle capital structures.
And this is where Zitron’s obsession with letters of intent (LOIs) matters. But he overshoots when he says that LOIs show no deals exist. The Nvidia-OpenAI announcement is explicitly framed as a letter of intent with deployment/investment tied to future buildout; OpenAI’s own post says Nvidia “intends” to invest up to $100B progressively as each gigawatt is deployed. Reuters covered it as an LOI with details to be finalized later, not as a finished contract. And later reporting noted Nvidia hadn’t finalized the proposed investment and that nothing was booked in its existing sales backlog.
So the correct takeaway isn’t that LOIs are fake. It’s subtler and more insidious: LOIs are real options. They move markets, guide counterparties, and shape capex decisions before obligations harden. In an ecosystem funded by leverage and narrative, options can be enough to pull forward tens of billions in irreversible investment. If (when?) the option doesn’t convert, the stranded capital becomes a restructuring problem.
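The option framing can be made concrete with an expected-value sketch. Every number here is an assumption for illustration, not an estimate of any announced deal: a builder commits irreversible capex today against an LOI that hardens into a contract only with some probability.

```python
# LOI-as-real-option sketch. All parameters are hypothetical assumptions.
capex_committed = 30e9   # irreversible buildout triggered by the announcement
p_convert = 0.6          # assumed probability the LOI hardens into a contract
recovery_rate = 0.35     # assumed salvage value of stranded capacity

# If the option doesn't convert, the builder eats the unrecoverable share
# of the committed capex. Expected loss, before any contract exists:
expected_stranded = (1 - p_convert) * capex_committed * (1 - recovery_rate)
print(f"Expected stranded capital: ${expected_stranded / 1e9:.1f}B")
```

The point of the sketch is that the expected loss is borne today, at announcement time, by whoever pours the concrete, while the counterparty holding the option pays nothing until it exercises. That asymmetry is what "LOIs are real options" means in practice.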
If you want a tighter thesis than “everything is a scam,” try this working theory: We’re watching the financialization of AI infrastructure collide with the commoditization of AI compute. Financialization requires long-lived, financeable assets with stable contracted cashflows. Commoditization requires prices to race toward marginal cost as capacity floods in and differentiation collapses. Those are not compatible. If compute becomes a utility, someone can still make money, but the over-levered intermediaries and the “build first, pray later” campuses become the shock absorbers.
One last contrarian point: even if Zitron’s macro call is right, he underestimates path dependency. The crisis may not look like a sudden 2008-style cliff. It might look like a slow grind of impairments, refinancing stress, repriced contracts, and equity wipeouts localized in the most levered nodes, while the hyperscalers keep marching because they can fund mistakes with operating cash and because AI spend is strategically defensible even when it’s financially messy.
If you enjoy this newsletter, consider sharing it with a colleague.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:

The LOI-as-option framing is exactly what gets missed in most AI buildout analysis. When leverage meets narrative without hard contracts, the mismatch between capex timing and revenue realization can create serious stranded assets. I saw similar patterns in renewable energy, where project financing assumed capacity factors that never materialized. The difference here is the velocity of obsolescence in AI compute; depreciation games might smooth earnings now, but they don't change the underlying risk.