The hidden risk of AI compute
AI compute behaves like power and weather, not software, and it needs the same financial machinery
Most people who talk about the market for AI compute think of it in terms of cloud computing: wrap scarce hardware in an API, meter usage, send invoices, add a sprinkle of scheduling magic, raise a round.
That’s the Silicon Valley reflex. It’s also the wrong mental model.
Frontier AI compute is not a product category. It’s a risk exposure. It’s a volatile, time-sensitive capacity constraint that needs to be priced, hedged, and made liquid. The right analogy is not AWS or Snowflake. It’s CME, power markets, and derivatives desks in Chicago and New York.
The Core Issue: Compute as Stochastic Capacity
Cloud infrastructure was built on a few assumptions:
Supply is elastic.
Demand is smooth.
Cost curves are predictable.
The right abstraction is service consumption.
Frontier AI violates all of those assumptions. AI compute is:
Spiky: driven by discontinuous training runs, not steady web traffic.
Scarce: bound by wafer cycles, export controls, and multi-year power buildouts.
Time-critical: missing a training window can mean losing a whole product cycle.
Path-dependent: costs are exposed to energy prices, hardware generations, and algorithmic shifts.
That’s not usage. That’s capacity risk with real downside. When you have a real asset whose availability, timing, and price are uncertain, you’re no longer in product design land. You’re in market design land.
A Simple Example: The Unhedged Training Run
Take a lab planning a major training run sometime in the next 12-18 months. They don’t know the exact date yet, as it depends on research milestones, but they know roughly the scale. Call it $20 million of compute at current prices.
Today they have two bad options:
Over-reserve capacity via long-term contracts and eat the carry cost.
Gamble on the spot market and hope prices and availability cooperate when they’re ready.
That’s exactly the kind of problem futures, options, and swaps exist to solve. If compute is treated as a financial primitive, the same lab could:
Buy compute futures to lock in a base layer of capacity at known prices.
Layer in call options on additional capacity in case the project runs over.
Use swaps to trade floating spot exposure for fixed pricing.
Nothing about that logic is exotic. It’s commodity risk management 101 applied to GPUs instead of wheat or power. The only reason it doesn’t yet exist is that we still think of compute as a service product instead of a stochastic input to production.
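To make the arithmetic concrete, here is a toy payoff sketch in Python. Every number is invented for illustration: a run of roughly 5 million GPU-hours (about $20 million at an assumed $4/GPU-hour), a futures base layer, and a call option covering the overrun. Real contracts would be far messier; this only shows the shape of the hedge.

```python
# Toy hedging sketch with made-up numbers; prices are per GPU-hour.

def unhedged_cost(hours_needed, spot_price):
    """Buy everything on the spot market when the run starts."""
    return hours_needed * spot_price

def hedged_cost(hours_needed, spot_price,
                futures_hours=4_000_000, futures_price=4.0,
                option_hours=2_000_000, strike=4.5, premium=0.25):
    """Lock a base layer with futures; cap overrun exposure with a call."""
    base = futures_hours * futures_price            # fixed regardless of spot
    residual = max(hours_needed - futures_hours, 0) # hours beyond the base layer
    covered = min(residual, option_hours)           # hours the call can cover
    per_hour = min(spot_price, strike)              # exercise only above strike
    cost = base + covered * per_hour + (residual - covered) * spot_price
    return cost + option_hours * premium            # option premium paid up front

for spot in (3.5, 4.0, 6.0):
    u = unhedged_cost(5_000_000, spot)
    h = hedged_cost(5_000_000, spot)
    print(f"spot ${spot:.2f}/hr  unhedged ${u:,.0f}  hedged ${h:,.0f}")
```

When spot spikes to $6, the unhedged lab pays $30 million while the hedged lab's cost is capped at $21 million; when spot falls, the hedge costs a little extra. That asymmetry is the entire point of the instruments.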
SV vs Chicago/NYC: Two Different Games
Consider two very different games.
The Silicon Valley game:
Hide complexity behind an API.
Optimize for developer experience.
Smooth volatility into tiered pricing plans.
Monetize through usage and lock-in.
That works when the underlying system is forgiving. If supply catches up, if demand is diversified, if nobody gets wiped out by a single price spike, you can treat risk as noise.
The Chicago/NYC game:
Expose the risk instead of hiding it.
Define a standard contract on the risky thing.
Build venues where those contracts trade.
Add clearing, margining, and risk models so institutions can hold exposure.
That’s the mentality that turned weather, volatility, power reserves, and freight into tradable assets. It’s not romantic. It’s just disciplined about one fact: any repeated uncertainty that hurts real people wants a market.
“But Compute Isn’t Oil!”
Correct, and that’s the interesting part. Compute is messy. It varies by hardware, network, latency, geography, SLA. Verification is non-trivial. There’s no single scalar that perfectly captures one unit of compute across all contexts.
But that’s not disqualifying. Power markets deal with location, time-of-day, and transmission constraints. Freight markets deal with route, vessel, and port risk. Volatility products deal with an abstract statistical property of prices.
To financialize compute, you don’t need to boil the ocean. You need standardized slices:
Clearly specified units (e.g., “X tokens of benchmark Y over Z hours, max latency L, failure conditions defined up front”).
Measurement and attestation that both sides trust.
Enforcement penalties when delivery fails.
You won’t get a single global GPU future that covers everything. You’ll get a family of related contracts, just like in power and commodities. That’s fine. That’s how real markets work.
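To make "standardized slices" concrete, here is a hypothetical sketch of one contract spec expressed as a data structure. Every field name and value is illustrative, not a proposed standard; the point is that once the dimensions are pinned down, identical slices become fungible and can trade under one symbol.

```python
# Hypothetical compute-contract spec; all fields and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ComputeContract:
    benchmark: str       # reference workload the unit is defined against
    tokens: int          # deliverable quantity, in benchmark tokens
    window_hours: int    # delivery window
    max_latency_ms: int  # SLA bound; a breach counts as non-delivery
    region: str          # delivery location, as in power markets
    penalty_pct: float   # cash penalty on notional if delivery fails

    def symbol(self) -> str:
        """A ticker-like identifier so identical slices are interchangeable."""
        return f"{self.benchmark}-{self.tokens}-{self.window_hours}H-{self.region}"

spec = ComputeContract("LLAMA70B-INFER", 1_000_000_000, 72, 200, "US-EAST", 0.15)
print(spec.symbol())  # LLAMA70B-INFER-1000000000-72H-US-EAST
```

Two offers matching this spec are the same instrument, regardless of which provider backs them. That is the step that turns a bilateral deal into a market.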
Services vs Markets: Who Actually Wins?
Once you view compute as risk, the strategic map changes. The service-first instinct is: “We’ll abstract this away for users, eat the risk ourselves, and charge a margin.”
So you get GPU Airbnbs, fancy schedulers, nicer dashboards. Useful, but fundamentally linear. You end up as a better middleman in a broken market.
The market-first instinct, on the other hand, is: “We’ll surface the risk, standardize it, and make it tradable. Our moat is the market structure, not the interface.”
That produces an entirely different stack:
Contract standards for compute units.
Exchanges and matching engines for those contracts.
Clearing and margining so institutional capital can participate.
Market makers warehousing compute risk.
Data and indices on compute prices and volatility.
Credit and collateral frameworks for providers.
This is much closer to CME + ISO power markets than to Stripe for GPUs. And the entities best positioned to build and operate systems like that don’t live on Sand Hill Road. They live in Chicago and New York.
The Prediction
Whoever owns the risk layer of AI compute—the instruments, venues, and rules through which everyone else’s exposure is priced and traded—captures leverage over:
Labs that need to hedge training risk.
Cloud and bare-metal providers who want to monetize capacity without blowing up.
Funds and institutions looking for new, diversifying real-asset exposure.
Even governments, if/when they start thinking about strategic compute reserves like they already do for oil and gas.
That’s not a better SaaS product. That’s a market institution.
AI compute will be financialized because the underlying uncertainty is too large and too persistent to stay on ad-hoc contracts and Slack DMs. If you’re still thinking about GPUs as SKUs on a pricing page, you’re solving yesterday’s problem. The real game is designing the markets where compute risk lives.