FluidStack and the GPU Financing Problem
Complex deal structures tell us a lot about what's missing in the market for GPU financing
FluidStack, a neocloud provider, just assembled one of the most complex financing structures in the sector: roughly $653 million in total equity, a Macquarie debt facility sized up to $10 billion, and $6.7 billion in contracted revenue backstopped by Google. On paper, this reads like a mature infrastructure financing story. It is the kind of layered capital stack you’d see behind a pipeline or a fiber network.
It isn’t. The underlying asset is a rack of GPUs that will lose 40–60% of their value within three years as next-generation chips arrive. In other asset-backed lending markets—aircraft, rolling stock, shipping containers—lenders manage that kind of residual value risk through residual value insurance and residual value swaps. These are instruments that let lenders offload residual value risk to a counterparty. There are some startups that are building markets for both residual value swaps and insurance for GPUs, but these markets are nascent and the deals are all bespoke.
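To make the residual value problem concrete, here is a minimal sketch with illustrative numbers (the $1B fleet size is hypothetical; only the 40–60% three-year decline comes from the text), assuming a constant annual depreciation rate:

```python
# Illustrative residual value curve for a GPU fleet, assuming a constant
# annual depreciation rate. A 40-60% total decline over three years
# brackets the implied annual rate between roughly 16% and 26%.

def residual_value(cost, three_year_loss, years):
    """Value remaining after `years`, given total fractional loss over 3 years."""
    annual_rate = 1 - (1 - three_year_loss) ** (1 / 3)
    return cost * (1 - annual_rate) ** years

fleet_cost = 1_000_000_000  # $1B of GPUs (hypothetical)

for loss in (0.40, 0.60):
    # Residual value in $M at years 0 through 5
    curve = [round(residual_value(fleet_cost, loss, y) / 1e6) for y in range(6)]
    print(f"3-year loss {loss:.0%}:", curve)
```

Either way, by year three the collateral is worth a fraction of its cost while the loan balance amortizes far more slowly.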
So what is actually securing this debt? That question matters far beyond one company. It is the structural question facing everyone trying to finance the AI buildout.
FluidStack’s capital stack: equity, debt, and a hyperscaler safety net
Start with the equity. FluidStack was in talks to raise a $200 million Series A in February 2025, followed by a $450 million raise in January 2026. Situational Awareness, the fund run by Leopold Aschenbrenner, formerly of OpenAI, is reportedly in talks to lead a $700 million round at a $7 billion valuation.
Set aside the dollar amounts for a moment and note the signal: an ex-OpenAI researcher’s investment fund leading a GPU infrastructure deal tells you something about where informed capital sees value in the AI stack. It’s not in the application layer. It’s in the picks and shovels.
Now the debt. Macquarie Group, the Australian infrastructure finance giant whose Specialised and Asset Finance division has spent decades lending against toll roads, pipelines, and power plants, is providing a GPU-collateralized senior debt facility, meaning the physical GPUs themselves serve as collateral. On its face, this is a straightforward asset-backed lending arrangement. GPUs go into a data center, they generate revenue through compute services, and they secure the loan.
But the actual structure of these deals is different from a straightforward asset-backed lending arrangement. FluidStack has two 10-year hosting agreements with TeraWulf, a former crypto miner pivoting to AI infrastructure, totaling $9.5 billion in contracted revenue. Those agreements are backstopped, in part, by Google, meaning Google has agreed to cover payments to lenders if FluidStack misses its obligations. In exchange, Google retains the option on the power and data center capacity for itself.
Pause on that. The debt is nominally collateralized by GPUs. But no rational lender is extending billions against hardware whose residual value will be a fraction of the loan balance within the asset’s useful life. What Macquarie is actually underwriting is Google’s balance sheet. Google’s guarantee is what makes the coverage ratios work, what makes the 10-year tenor viable, and what transforms a speculative hardware bet into something that fits inside a structured finance framework.
Call this what it is: credit substitution, not asset-backed lending in any traditional sense. The collateral is cosmetic. The credit is Google’s. This structure is not unprecedented, and the precedents are instructive.
The closest analog is the monoline bond insurance industry that imploded in 2008. Companies like MBIA and Ambac were highly rated entities—AAA until the very end—that provided credit enhancement across thousands of structured finance deals. Their guarantee made otherwise marginal debt investable, just as Google’s backstop makes otherwise speculative GPU debt financeable. The monolines were supposed to be diversified across so many deals that no single default could threaten them. But the triggers turned out to be correlated: when the housing market deteriorated, the guarantees got called across the entire portfolio simultaneously. MBIA went from AAA to junk in months. Ambac filed for bankruptcy. Bill Ackman, who had shorted MBIA starting in 2002 on the thesis that the monolines had insufficient capital to cover their aggregate exposure, made over a billion dollars when the structure collapsed exactly as he predicted.
Google is not a monoline. Its balance sheet is among the strongest on Earth; the monolines were thinly capitalized. The underlying assets here are real GPU infrastructure, not synthetic CDOs. And Google has a genuine strategic motive—securing data center capacity without committing upfront capital—beyond pure financial engineering.
But the structural position is strikingly similar to the monolines: a highly rated entity providing credit enhancement across multiple counterparties, with no public aggregation of total contingent exposure, and trigger conditions that are likely correlated. How much aggregate backstop liability does Google carry across all of its neocloud partnerships? What about Microsoft and Amazon, who are engaged in structurally identical arrangements with their own constellations of infrastructure partners? Nobody outside those companies knows. If three hyperscaler balance sheets are quietly backstopping the entire neocloud sector’s debt, and the actual credit risk is concentrated in those three names while appearing distributed across dozens of independent companies, then the system has a correlation problem that no individual deal sheet reveals. An AI demand downturn would not hit one neocloud. It would hit all of them simultaneously, which means all the backstops get stressed at once. This is exactly the dynamic that destroyed the monolines.
The risk is not that Google cannot pay. The risk is that nobody can see how much Google has committed to pay, and whether the triggers are correlated in ways that nobody is currently aggregating.
The Iceland structure: testing whether the math works without a backstop
FluidStack’s European deals look structurally different from the U.S. ones.
FluidStack is deploying exascale GPU clusters in Iceland and the Nordics through a partnership with Borealis Data Center, Dell Technologies, and NVIDIA. The hardware is current-generation: Dell PowerEdge XE9680 servers loaded with NVIDIA HGX H200 GPUs and Quantum-2 InfiniBand networking. These are financed through the same Macquarie facility.
But there is no disclosed Google-equivalent backstop for the European operations.
That leaves two possibilities. Either Macquarie is taking genuine residual value risk on the hardware, with no insurance or swap market to hedge it, which would be novel and arguably reckless at this scale. Or customer revenue commitments are functioning as undisclosed credit anchors. The distinction matters enormously for understanding whether GPU-collateralized lending is viable as a standalone financing model, or whether it always requires a shadow guarantor to work.
Geothermal and hydro power mean Iceland has some of the cheapest and most stable electricity in Europe. The subarctic climate dramatically reduces cooling costs, which typically represent 30–40% of a data center’s total power consumption. Together, these factors compress operating expenses, meaning a higher proportion of revenue flows through to debt service. That improves the coverage ratios Macquarie needs to see to stay comfortable. FluidStack’s French facility for Mistral AI, running on decarbonized nuclear and renewable power, follows the same logic: low-cost energy as structural foundation, sovereign-adjacent customers providing revenue stability.
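The coverage-ratio logic can be sketched with toy numbers. All figures below are hypothetical, not from any disclosed deal; only the direction of the effect is the point:

```python
# Toy debt service coverage ratio (DSCR) comparison: same revenue and
# same debt service, different power/cooling costs. DSCR = net operating
# income / debt service; lenders typically want it comfortably above 1.

def dscr(revenue, opex, debt_service):
    return (revenue - opex) / debt_service

revenue = 100.0       # annual contracted revenue, $M (hypothetical)
debt_service = 55.0   # annual principal + interest, $M (hypothetical)

# A cold-climate, cheap-power site spends far less on electricity and
# cooling than a conventional one; the savings flow straight to coverage.
print("conventional site:", round(dscr(revenue, opex=40.0, debt_service=debt_service), 2))
print("Iceland-style site:", round(dscr(revenue, opex=25.0, debt_service=debt_service), 2))
```

Every dollar of opex saved is a dollar of additional cushion for the lender, which is why site selection is itself a credit decision.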
This is Macquarie applying genuine infrastructure project finance logic to compute. And it partially works. But “partially” is doing a lot of heavy lifting. The GPUs still lose residual value on the same curve regardless of where they are plugged in. An H200 in Reykjavík becomes obsolete at the same rate as an H200 in Virginia. Lower opex extends the economic life of the facility, but it does not change the fundamental mismatch between hardware obsolescence cycles and debt tenors.
The missing piece: why there is no forward curve, or residual value market, for compute
Step back from FluidStack and ask why all this structural complexity is necessary.
In mature commodity markets—oil, natural gas, electricity—forward curves solve most of the problems that GPU financing deals are trying to engineer around. A forward curve is a market-derived set of prices for future delivery at specified dates. It allows producers to lock in revenue, consumers to budget costs, and lenders to model collateral value against observable prices. Critically, it lets everyone hedge. Similarly, depreciating-asset lending in aviation and shipping works because residual value instruments give lenders a floor on future collateral value, and those instruments exist because the underlying markets have decades of price history and a deep pool of counterparties.
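The hedging mechanics are simple once a curve exists. Here is a sketch of how a compute producer would lock in revenue with a forward contract; the quantities and prices are hypothetical:

```python
# A forward contract fixes the sale price today for delivery later. At
# settlement the hedged seller realizes the forward price regardless of
# where spot ends up: spot revenue plus the forward's payoff.

def hedged_revenue(forward_price, spot_price, quantity):
    spot_revenue = spot_price * quantity
    forward_payoff = (forward_price - spot_price) * quantity  # seller's P&L on the forward
    return spot_revenue + forward_payoff

qty = 1_000_000  # GPU-hours sold forward (hypothetical)
fwd = 2.50       # $/GPU-hour agreed today (hypothetical)

# Whether spot collapses or spikes, hedged revenue is identical.
print(hedged_revenue(fwd, spot_price=1.00, quantity=qty))
print(hedged_revenue(fwd, spot_price=4.00, quantity=qty))
```

That locked-in revenue is exactly what a lender can model debt service against, which is why forward curves and credit go hand in hand.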
GPU compute has neither forward curves nor residual value markets. There is no standardized contract for “one hour of H100-equivalent compute delivered in Q3 2027.” There is no decades-long price history for used compute hardware under conditions of rapid generational obsolescence. There is no insurance or swap market where Macquarie can offload the risk that an H200 is economically worthless in 36 months because the B300 has made it uncompetitive.
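If such an instrument existed, its payoff would look like a put option on the hardware’s future market value. A minimal sketch, with purely illustrative numbers:

```python
# Payoff of a hypothetical residual value guarantee: the counterparty
# makes the lender whole for any shortfall below an agreed floor value
# at maturity -- economically, a put option on the hardware.

def rv_guarantee_payout(floor_value, market_value):
    """Amount the guarantor pays the lender at maturity."""
    return max(0.0, floor_value - market_value)

floor = 400.0  # agreed residual floor at year 3, $M (hypothetical)

# If a next-gen chip craters resale prices, the guarantee pays the gap;
# if values hold up, it pays nothing.
print(rv_guarantee_payout(floor, market_value=150.0))  # deep shortfall
print(rv_guarantee_payout(floor, market_value=450.0))  # no shortfall
```

Pricing that payout requires a history of used-hardware prices and a pool of counterparties willing to write it, and compute currently has neither.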
The result is what you see in the FluidStack structure: every GPU financing deal requires bespoke workarounds—hyperscaler backstops, long-term hosting agreements, sovereign-adjacent customer relationships—to substitute for market infrastructure that does not exist. Google’s guarantee is, in effect, a crude substitute for residual value protection: a credit backstop filling the gap where a residual value swap would sit if the market were mature enough to support one.
A functioning compute derivatives and residual value market—forward contracts, options, swaps on standardized compute units, insurance products on hardware residual values—would resolve much of this. It would give lenders a reference curve and a mechanism to hedge tail risk, borrowers a tool to lock in future revenue, and investors transparent price signals. Early efforts to build this market exist, but it remains nascent. Compute is at roughly the stage electricity was in the early 1990s, before deregulation and standardized power trading created the liquid markets that now underpin trillions of dollars in infrastructure finance.
Two equilibria
FluidStack’s revenue mix has shifted toward private cloud, which now accounts for 62% of revenue versus 38% for the marketplace, with private cloud contracts averaging over $100 million versus roughly $340,000 for marketplace deals. This reflects a sector-wide transition: neoclouds are becoming capital-intensive owner-operators, which means the financing question only scales up from here.
One equilibrium is the current path of hyperscaler backstops, bespoke guarantees, and risk concentrated in a few balance sheets but invisible from the outside. It works, but it carries the correlation problem described above. If the monolines taught us anything, it is that credit enhancement you cannot aggregate is credit enhancement you cannot trust.
The other equilibrium is a formal compute derivatives and residual value market: transparent, liquid, distributed, with risk priced by markets rather than buried inside partnership announcements. But building that infrastructure takes time, and the AI buildout is not waiting.
FluidStack’s architecture is a proof-of-concept that demand for financeable compute is enormous and that very smart people are finding creative ways to meet it. The question is whether the financial plumbing catches up to the physical infrastructure before the next generation of chips renders the current collateral obsolete.
If you enjoy this newsletter, consider sharing it with a colleague.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:

Agreed. This was a credit substitution.
Every structural workaround in these deals is filling a gap where a standardized instrument would sit in a mature market. That may work at this stage. But it stops working at scale, under stress, or if/when the underlying relationships get strained.
What changes this is an independent compute rental reference rate, a standardized depreciation methodology by generation, and residual value instruments with real market depth. Early infrastructure exists but it's oriented toward derivatives settlement, not credit document reference. The credit-specific analytical layer is still missing.
Your closing question is the right one. The window to build that infrastructure is probably shorter than it looks.
Does this type of market exist on more humdrum compute like standard x86 processor-hours or cloud RAM reservations?
In my work I see customers that implement chargeback: internal-dollar accounting that managers track, minimize, and are bonused on. They use “gigabyte-hours,” meaning an app reserving one gigabyte for one hour. It’s similar to AWS ECS, but since CPU is shared and can be heavily overprovisioned for business apps, memory reservation is a better proxy for app size.