The future of AI compute looks like oil futures trading
The AI buildout requires trillions in capital, and financializing compute is the way to fund it
TL;DR: many people are skeptical that the hyperscalers will ever generate trillions in revenue to justify the trillions in capex they’re spending on AI data centers through 2030. But if AI compute is a commodity financialized like oil or natural gas, then you can separate demand creation from capital formation.
And the road to this goes through a prop trading shop in Chicago.
Lots of Silicon Valley investors and AI people seem confused about this, and no one else is writing about it, so I’ve decided to explain it to the world.
Show Me the Money
Read headlines about the multi-trillion dollar AI data center buildout, and one thing you see repeatedly is: show me the money! How will the hyperscalers generate sufficient revenue to finance the multi-trillion dollar expense they’re incurring? Bain recently published a report with a startling back-of-the-envelope calculation: by 2030, the U.S. alone could need an extra 100 gigawatts of electricity to power AI data centers. Meeting that need would require roughly $500 billion in new data center spending every year.
Then they dropped the real bombshell: justifying that level of capital spending requires roughly $2 trillion per year in AI-driven revenue. The message is clear: if the world doesn’t generate trillions in fresh AI dollars, this whole thing collapses.
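Divide one Bain number by the other and you get the implied multiple: every dollar of annual buildout must be matched by roughly four dollars of annual AI revenue. Here is that arithmetic as a quick sketch (the 4x ratio is implied by the figures as quoted above, not a number stated in the report):

```python
# Back-of-the-envelope reconstruction of the Bain figures quoted above.
# The 4x multiple is implied by their numbers, not stated directly.

annual_capex = 500e9        # new data center spend per year, USD
required_revenue = 2e12     # AI-driven revenue said to be needed per year, USD

multiple = required_revenue / annual_capex
print(f"Implied revenue per dollar of annual capex: {multiple:.1f}x")  # 4.0x
```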
But this is misleading. It reflects a consulting lens, in which infrastructure investment is tied to revenue forecasts, rather than a financial markets lens. In reality, trillions in capital can be mobilized for compute without trillions in new end-user revenue first. The missing piece is financialization of compute.
Demand vs Funding: The Category Error
Bain’s argument rests on a common but faulty assumption: that the only way to justify new supply is to have corresponding new demand already in hand. If you want $500 billion in new data centers, you’d better have $2 trillion in new AI revenues to pay for it.
But this conflates demand creation with capital formation. They are not the same.
Think about other asset-heavy industries: power generation, liquefied natural gas, commercial aircraft. In none of these cases do developers wait until demand is fully realized before building. Instead, they rely on contracts, indices, and risk transfer mechanisms that allow capital to flow into projects years before final demand is known.
You don’t need trillions in new AI revenue tomorrow to build trillions in new data centers. You need bankable cash flows: predictable streams of money that investors can rely on. That’s what financialization provides.
How Other Industries Do It
Let’s take power generation. Utilities don’t build a new gas plant only when every household signs up to pay more on their electricity bill. They finance projects through two distinct legs:
Capacity payments: fixed payments just for keeping a plant available, whether or not it’s running at full tilt.
Energy payments: variable payments based on actual usage, megawatt-hour by megawatt-hour.
These separate streams allow investors to fund the plant because they know that, even if usage fluctuates, the fixed capacity payment covers debt service.
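Here is a minimal sketch of that two-leg structure, with made-up numbers for a hypothetical gas plant. The only point it makes is that the fixed capacity leg alone clears debt service, no matter how the energy leg swings:

```python
# Toy model of a power plant financed on two revenue legs.
# All numbers are illustrative, not drawn from any real project.

capacity_mw = 500
capacity_payment = 120_000          # USD per MW per year, fixed leg
energy_price = 30                   # USD per MWh, variable leg
annual_debt_service = 45_000_000    # USD per year owed to lenders

fixed_revenue = capacity_mw * capacity_payment   # paid even at zero output

for utilization in (0.2, 0.5, 0.9):
    mwh = capacity_mw * 8760 * utilization       # hours in a year * usage
    total = fixed_revenue + mwh * energy_price
    print(f"util={utilization:.0%}: total=${total/1e6:.0f}M, "
          f"debt covered by fixed leg alone: {fixed_revenue >= annual_debt_service}")
```

At every utilization level the answer to the lender’s question is the same, which is exactly why the plant can be financed before demand is known.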
The same pattern holds in LNG (take-or-pay contracts), aircraft (long-term leases), and even cell towers (tower-sharing agreements). Capital flows because risk is sliced into manageable, hedgeable pieces, not because revenue perfectly lines up on day one.
Applying This to Compute
Compute can be structured the same way. Instead of relying solely on volatile usage fees (per GPU-hour), data centers can sign contracts that guarantee fixed revenue streams.
Here’s how the cash might flow (a toy model follows the list):
Capacity Contracts. A buyer, say an AI lab or a government agency, pays a fixed fee every month to reserve a block of GPUs. This revenue is steady and predictable, and can be used to secure loans and bonds.
Usage Fees. When the GPUs are actually used, the buyer pays additional fees per GPU-hour. This is the spiky, demand-driven part of the business.
Futures and Forwards. Just as airlines hedge fuel prices, compute buyers and traders can lock in GPU-hour prices years ahead, creating a forward curve. Developers can pre-sell future capacity to raise capital today.
Compute Power Purchase Agreements (PPAs). Long-term deals that bundle capacity and usage, with clear pricing rules, pass-throughs for power costs, and penalties for downtime.
Securitization. Capacity contracts can be pooled, packaged, and sold as bonds to pension funds or insurers. That opens the door to trillions in conservative capital that would never touch raw GPU speculation.
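To make the first two legs concrete, here is a toy model of a single compute capacity contract plus usage fees. Every price in it, the reservation fee and the per-GPU-hour rate, is a hypothetical, not a market quote:

```python
# Toy compute contract: a fixed capacity leg plus a variable usage leg.
# Hypothetical prices, not quotes from any real GPU market.

reserved_gpus = 10_000
monthly_reservation_fee = 900      # USD per GPU per month, fixed leg
usage_fee = 1.50                   # USD per GPU-hour, variable leg

bankable_revenue = reserved_gpus * monthly_reservation_fee * 12  # what lenders see

for utilization in (0.3, 0.6, 0.95):
    usage_revenue = reserved_gpus * 8760 * utilization * usage_fee
    print(f"util={utilization:.0%}: fixed=${bankable_revenue/1e6:.0f}M, "
          f"usage=${usage_revenue/1e6:.0f}M")
```

The fixed leg is the same $108M in every scenario; that is the stream that can be borrowed against, securitized, or pre-sold on a forward curve.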
Enter Don Wilson and the Playbook of Financializing Scarce Resources
If financialization of compute sounds exotic, it isn’t. We’ve seen this movie before, and one of its main directors is Don Wilson. Wilson is the founder of DRW, one of the world’s most aggressive proprietary trading shops. He has a long history of stepping into markets where the underlying commodity is scarce, volatile, and poorly priced, and then building the financial plumbing to make it tradable.
Electricty & Energy: Wilson was an early participant in U.S. electricity and natural gas futures markets in the 1990s, helping turn power from a local utility product into a global, hedgeable commodity.
Interest Rate Derivatives: DRW became one of the largest liquidity providers in interest rate swaps and futures, markets that only work because someone was willing to shoulder and slice up complex risk.
Crypto: When crypto markets were chaotic and opaque, Wilson launched Cumberland, one of the first institutional-scale crypto trading desks. His thesis was simple: bring professional market-making and derivatives into an immature commodity-like asset class.
Now Wilson is back with Compute Exchange and Silicon Data. Compute Exchange wants to do for GPU cycles what CME did for cattle, crude, and copper. Silicon Data is building the underlying indices that compute financialization relies on. His wager is that AI compute has the same features as other commodities:
It’s scarce.
It’s volatile.
It’s critical to modern economies.
And it lacks standardized risk-transfer instruments.
Wilson’s history tells us something important: when he sets up shop in a market, it usually means that market is on the verge of being financialized.
Wilson has discussed this and related topics in an interview with Bloomberg’s Tracy Alloway and Joe Weisenthal.
The Waterfall of Cash Flows
If you were to draw the money flow as a waterfall, it would look like this:
Top of the stack: End buyers (labs, enterprises, governments) make payments, both fixed capacity fees and variable usage fees.
Middle: Operators collect those payments.
Then the waterfall splits:
Debt holders (banks, bond investors) get paid first.
Operations and maintenance (O&M) (power, cooling, staff) get covered.
Equity holders (the operators themselves, or investors) take whatever is left as profit.
As long as the fixed capacity payments cover debt service and O&M, the project is financeable, even if usage is highly uncertain. This is why the “$2 trillion or bust” framing is one-dimensional. It ignores the fact that financial contracts can decouple financing from end-user revenue.
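Here is that priority of payments as a sketch, again with illustrative numbers. The ratio lenders underwrite to, fixed revenue divided by debt service, is the debt service coverage ratio (DSCR):

```python
# Toy cash-flow waterfall for a data center project.
# The ordering is the point; the amounts are illustrative.

def waterfall(fixed_fees, usage_fees, debt_service, om_costs):
    cash = fixed_fees + usage_fees
    to_debt = min(cash, debt_service)   # lenders are paid first
    cash -= to_debt
    to_om = min(cash, om_costs)         # then power, cooling, staff
    cash -= to_om
    return to_debt, to_om, cash         # equity keeps whatever remains

fixed, usage = 108e6, 40e6              # fixed leg alone gives DSCR = 108/80 = 1.35x
debt, om = 80e6, 35e6
to_debt, to_om, to_equity = waterfall(fixed, usage, debt, om)
print(f"debt=${to_debt/1e6:.0f}M, o&m=${to_om/1e6:.0f}M, equity=${to_equity/1e6:.0f}M")
```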
Why Hyperscalers Will Resist and Then Submit
The standard assumption is that hyperscalers (Amazon, Microsoft, Google, etc.) don’t want compute financialization. They benefit from opacity: no one knows their true costs, customers can’t easily comparison shop, and they can mark up compute however they like.
That’s true in the short run. But in the long run, the incentives change:
Balance Sheet Relief: Hyperscalers can’t carry $500 billion per year of capex indefinitely. Financialization brings in outside capital.
Hedging: Futures markets let them lock in GPU-hour costs, just like airlines hedge fuel.
Regulatory Pressure: Governments will force more standardized, transparent contracts.
Stranded Asset Monetization: Excess or underutilized GPUs can be sold into open markets at index prices.
Opacity maximizes margins, but limits scale. Financialization reduces margins, but unlocks trillions in external capital. Once the scale problem becomes urgent, even hyperscalers will embrace financialization, just as utilities eventually embraced electricity futures.
What This Means
Don’t get distracted by the $2 trillion revenue gap narrative. The real story is how financialization will reshape the economics of compute:
Investors: Pension funds, insurers, and sovereign wealth funds will pour money into compute the way they already invest in power plants or toll roads.
Operators: Data center developers can raise capital without waiting for every last AI use case to appear.
Hyperscalers: They’ll resist transparency, then quietly become the biggest users of compute futures and capacity markets.
Policy makers: If governments want sovereign AI capacity, pushing financialization is smarter than trying to subsidize every new GPU farm directly.
Bain’s report is useful in one sense: it dramatizes the scale of the coming buildout. But its “$2 trillion or bust” framing is misleading. Capital doesn’t need to wait for revenue. It needs market design.
Once compute is financialized, through capacity contracts, futures, tolling agreements, and securitization, the trillions in capex Bain worries about will flow. The key is building the indices and cash-flow structures that make compute bankable.
If you enjoy this newsletter, consider sharing it with a colleague.
I’m always happy to receive comments, questions, and pushback.