Three GPU Markets, Three Volatility Regimes
Why new hardware gets unstable when tight and mature hardware does the opposite
This analysis is based on proprietary GPU market data collected and curated by Ornn. For more information visit data.ornnai.com. I used Sonnet 4.5 & Python for the quantitative analysis.
Everyone talks about “GPU shortage” like it’s binary: either the market is tight or it isn’t. That framing obscures what’s actually happening. The data shows distinct regimes across different SKUs, and utilization is the variable that predicts which regime you’re in.
I analyzed 90 days of Q4 2025 data (Oct 2 through Dec 30) for three GPUs: A100 SXM4, H100 SXM, and H200. For each SKU, the dataset includes daily spot prices, a 7-day realized-volatility series, and utilization percentages.
The question: does today’s utilization predict future volatility?
In commodity markets, spare capacity determines whether prices stay boring or become chaotic. If GPU compute is becoming commoditized, utilization should work the same way.
The Test
For each SKU, I compared utilization on day t with volatility on day t+7. Shifting volatility forward by a week avoids correlating it with itself (rolling windows create overlap) and tests whether tightness today signals turbulence later.
I also tested 3-day and 14-day horizons to check if the relationship is consistent or specific to a particular timescale.
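For readers who want to reproduce the test, here's a minimal sketch in Python. The column names (`utilization`, `vol_7d`) and the CSV loading step are my own illustrative assumptions, not Ornn's actual schema:

```python
# Minimal lead-lag sketch: correlate utilization(t) with volatility(t+k).
# Column names and the loading step are illustrative, not Ornn's schema.
import pandas as pd
from scipy.stats import linregress, pearsonr

def lead_lag(df: pd.DataFrame, k: int) -> dict:
    """Pearson correlation and OLS slope between utilization(t) and vol(t+k)."""
    future_vol = df["vol_7d"].shift(-k)  # row t now holds the vol on day t+k
    pair = pd.DataFrame({"util": df["utilization"], "vol": future_vol}).dropna()
    r, p = pearsonr(pair["util"], pair["vol"])
    fit = linregress(pair["util"], pair["vol"])  # slope: vol points per 1% util
    return {"k": k, "r": r, "p": p, "r2": r**2, "slope": fit.slope}

# df = pd.read_csv("h200_daily.csv", parse_dates=["date"]).sort_values("date")
# for k in (3, 7, 14):
#     print(lead_lag(df, k))
```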
H200: Tight Markets Get Jumpy
The correlation between H200 utilization and next-week volatility is +0.46 (p < 0.0001). A simple linear regression gives R² ≈ 0.21, meaning utilization explains roughly 21% of volatility variation a week out. For a single variable in noisy market data, that’s substantial.
The slope is about +0.74 volatility points per 1% utilization increase. Moving from 40% to 70% utilization corresponds to roughly +22 volatility points the following week.
Split the data into quartiles:
Lowest utilization (≤41.5%): average next-week vol ≈ 10.5%
Highest utilization (≥64.6%): average next-week vol ≈ 36.8%
That’s 3.5× higher volatility in tight conditions.
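The quartile comparison is equally simple to replicate; here's a sketch under the same column-name assumptions as above:

```python
import pandas as pd

def quartile_vol_ratio(df: pd.DataFrame, k: int = 7) -> float:
    """Mean vol(t+k) in the top utilization quartile over the bottom quartile."""
    tmp = df.assign(future_vol=df["vol_7d"].shift(-k)).dropna(subset=["future_vol"])
    tmp["util_q"] = pd.qcut(tmp["utilization"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
    by_q = tmp.groupby("util_q", observed=True)["future_vol"].mean()
    return by_q["Q4"] / by_q["Q1"]  # ~3.5 for H200 per the numbers above
```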
The relationship holds across different time horizons. At 3 days forward: r = +0.36, R² = 0.13, volatility ratio 2.8×. At 14 days: r = +0.33, R² = 0.11, volatility ratio 2.1×. The signal peaks at 7 days but remains significant at both shorter and longer horizons.
This is standard commodity behavior: tight markets don’t just get expensive, they get unstable. H200 is a thin market still forming its microstructure.
H100: Same Direction, Weaker Effect
H100 shows the same pattern but less dramatically:
7-day correlation: +0.25 (p = 0.023)
R² ≈ 0.06
Low utilization (≤49.4%): next-week vol ≈ 27.2%
High utilization (≥66.2%): next-week vol ≈ 45.2% (about 1.7×)
Interestingly, the relationship strengthens at longer horizons. At 14 days forward: r = +0.37, R² = 0.14, volatility ratio 2.0×. This suggests H100’s market depth absorbs short-term shocks but doesn’t eliminate utilization effects entirely—they just take longer to materialize.
H100 is deeper and more operationally mature than H200. More providers, more inventory, more standardized deployments. Tightness matters, but it doesn’t destabilize pricing as quickly.
A100: The Inversion and What It Means
A100 does the opposite. Higher utilization predicts lower volatility:
7-day correlation: −0.29 (p = 0.009)
R² ≈ 0.08
Slope: −1.29 volatility points per 1% utilization increase
Low utilization (≤56.7%): next-week vol ≈ 64.2%
High utilization (≥64.8%): next-week vol ≈ 48.3%
The inversion weakens at longer horizons (14-day r = −0.09, not significant), suggesting this is a short-term microstructure effect rather than a fundamental supply dynamic.
On Nov 19, A100 utilization was 75.6% and realized vol was 31.9%. One week later, vol jumped to 65.9%, seemingly contradicting the pattern. But one outlier doesn't overturn it: A100's baseline volatility is much higher than the others' (mean 57.2% vs 35.1% for H100 and 26.0% for H200), while the coefficient of variation of its volatility is lower (0.48 vs 0.93 for H100 and 0.83 for H200). The A100 market is inherently more turbulent, but high utilization represents stable throughput rather than stress.
The likely explanation: A100 is mature. It has better substitutability across providers, a larger installed base, and more elastic supply. Capacity can be reallocated or repriced without panic. High utilization reflects steady demand that providers have learned to accommodate. Low utilization, by contrast, may signal demand uncertainty that drives exploratory repricing.
What “mature” means operationally:
Looking at the data, market maturity appears to correlate with:
Utilization stability: A100 shows the tightest utilization distribution (std dev 5.9% vs 10.7% for H100 and 14.7% for H200)
Price stability: the three SKUs are tightly clustered on price stability (1 − coefficient of variation = 0.937 for A100, 0.941 for H100, 0.965 for H200), so spot-price stability alone doesn't separate the regimes
Volatility character: Lower mean volatility doesn’t indicate maturity—A100 has the highest mean volatility at 57.2%. But its volatility has a lower coefficient of variation (0.48), meaning it’s more consistently volatile rather than experiencing regime shifts
The paradox: mature markets can be more volatile on average but less sensitive to utilization swings. A100’s volatility is steady background noise. H200’s volatility is reactive and regime-dependent.
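These maturity markers reduce to a few summary statistics. A sketch, again assuming illustrative column names (`spot_price`, `utilization`, `vol_7d`):

```python
import pandas as pd

def maturity_metrics(df: pd.DataFrame) -> dict:
    """Per-SKU markers discussed above: utilization spread, price stability, vol character."""
    return {
        "util_std": df["utilization"].std(),            # tighter distribution = more mature
        "price_stability": 1 - df["spot_price"].std() / df["spot_price"].mean(),  # 1 - CoV
        "vol_mean": df["vol_7d"].mean(),                # can be HIGH even in mature markets
        "vol_cov": df["vol_7d"].std() / df["vol_7d"].mean(),  # low = steady background noise
    }
```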
Market Lifecycle Model
These three SKUs represent different stages of market development:
Stage 1 - Formation (H200):
Strong positive utilization-volatility correlation (+0.46)
High sensitivity to tightness (3.5× volatility differential)
Thin liquidity, immature pricing mechanisms
Utilization = stress signal
Stage 2 - Deepening (H100):
Moderate positive correlation (+0.25)
Reduced sensitivity (1.7× volatility differential)
Growing depth, standardizing deployments
Utilization = moderate stress signal
Stage 3 - Maturity (A100):
Negative correlation (−0.29)
Inverted relationship (0.75× volatility differential)
Deep liquidity, elastic supply response
Utilization = throughput signal, not stress
This progression suggests a predictable evolution as GPU markets mature. New SKUs will likely follow H200’s pattern; established ones will trend toward A100’s behavior.
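To make the staging concrete, here's a toy classification rule. The cutoffs are my rough reading of these three SKUs, not fitted thresholds:

```python
def lifecycle_stage(r_7d: float, p_value: float) -> str:
    """Map a SKU's 7-day utilization-volatility correlation to a lifecycle stage."""
    if p_value >= 0.05:
        return "indeterminate"
    if r_7d >= 0.40:
        return "formation: utilization is a stress signal"
    if r_7d > 0:
        return "deepening: moderate stress signal"
    return "maturity: utilization is a throughput signal"

# lifecycle_stage(0.46, 1e-4)     -> formation  (H200)
# lifecycle_stage(0.25, 0.023)    -> deepening  (H100)
# lifecycle_stage(-0.29, 0.009)   -> maturity   (A100)
```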
What This Means
Stop asking “is there a shortage?” Start asking which SKUs are in fragile microstructure regimes.
As of Dec 30, H200 spot price is only slightly above H100 ($2.33/hr vs $2.18/hr), but its volatility is far more sensitive to utilization. That’s a thin market that hasn’t finished forming. Pricing looks calm until it doesn’t.
For buyers, this matters more than spot rates. Your realized cost over a quarter isn’t just average price—it’s average price plus tail events where you’re forced into bad execution: repricing, migration, downtime, procurement panic.
Practical implications (a toy alert rule follows the list):
H200 users: Monitor utilization closely. When it crosses 65%, expect volatility to spike within a week. Consider locking in longer-term contracts or maintaining buffer capacity during high-utilization periods.
H100 users: You have more breathing room, but utilization above 66% still predicts elevated volatility at 7-14 day horizons. The market depth gives you time to respond, but doesn’t eliminate the risk.
A100 users: High utilization is actually your friend—it signals stable, accommodated demand. Low utilization may indicate market uncertainty and repricing risk. Paradoxically, you should worry more when capacity sits idle.
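Here's a toy watch rule wiring up the thresholds above. The cutoffs come from this article's quartile splits; treat them as starting points, not calibrated triggers:

```python
# Hypothetical alert rule; thresholds taken from the quartile cuts above.
ALERT_THRESHOLDS = {
    "H200": ("above", 65.0),  # tight -> expect a vol spike within ~a week
    "H100": ("above", 66.0),  # elevated vol at 7-14 day horizons
    "A100": ("below", 56.7),  # idle capacity -> repricing risk
}

def utilization_alert(sku: str, utilization_pct: float) -> bool:
    direction, cutoff = ALERT_THRESHOLDS[sku]
    return utilization_pct > cutoff if direction == "above" else utilization_pct < cutoff
```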
For sellers and capacity providers, these regime differences create strategic opportunities:
H200 capacity commands a premium during utilization spikes—but you need to be nimble
H100 requires medium-term planning; the 14-day signal means you can position inventory ahead of volatility
A100 benefits from demand aggregation; stable high utilization attracts sophisticated buyers who value predictability
For anyone building indices or derivatives: utilization is a state variable for volatility regimes. Commodity markets don’t model price without inventories. Power markets don’t model price without reserve margin. Shipping doesn’t model rates without fleet utilization.
Compute isn’t special. It’s just late.
The data confirms what sophisticated traders already suspect: GPU markets are differentiating by maturity, and utilization is the indicator that tells you which regime you’re operating in. As these markets continue to develop, we should expect newer SKUs (future H200 replacements) to repeat this pattern while older hardware transitions toward the A100 model—high utilization as signal of market health rather than market stress.
Technical Appendix
Data: 90 days (Oct 2 - Dec 30, 2025), three GPUs (H200, H100 SXM, A100 SXM4)
Method: Pearson correlation between utilization(t) and volatility(t+k) for k ∈ {3, 7, 14} days
Statistical significance: All reported correlations have p < 0.05 except where noted
Summary statistics (7-day forward):

| SKU | r | R² | p | Low-quartile vol | High-quartile vol | Ratio |
|---|---|---|---|---|---|---|
| H200 | +0.46 | 0.21 | <0.0001 | 10.5% | 36.8% | 3.5× |
| H100 | +0.25 | 0.06 | 0.023 | 27.2% | 45.2% | 1.7× |
| A100 | −0.29 | 0.08 | 0.009 | 64.2% | 48.3% | 0.75× |
If you enjoy this newsletter, consider sharing it with a colleague.
I’m always happy to receive comments, questions, and pushback.

Really interesting to see the market maturity lifecycle mapped through utilization-volatility dynamics. The A100 inversion makes intuitive sense once you frame high utilization as established throughput rather than stress. I've noticed similar patterns in cloud spot markets where mature instance types behave more like commodities with elastic supply, while cutting-edge instances act more like illiquid assets. The 7-day forward correlation timing is particularly useful for planning capacity.
Interesting read. I wonder how tools like WoolyAI will impact this. Their ability to maximize GPU utilization (through dynamic allocation, VRAM deduplication, and multi-tenant serving) directly addresses what was described here. I would think organizations that can run at high utilization internally are less exposed to volatile spot-market dynamics.