CoreWeave: Building AI infra on rented foundations
CoreWeave isn't selling shovels. It's buying someone else's shovels and operating on someone else's land.
Introduction: An Empire Built on a Shaky Foundation
CoreWeave is a prominent player in the growing AI infrastructure market. It positions itself as a GPU-optimized cloud provider, offering compute capacity to startups and enterprises building and deploying large AI models, and it operates at a time when access to Nvidia hardware is both constrained and essential.
To its credit, CoreWeave has moved quickly to meet a clear market need. (It was originally founded as a crypto-miner, but it pivoted to AI when it became apparent that the crypto markets were less attractive than the burgeoning AI market.) While many of the largest cloud providers have dealt with GPU bottlenecks, CoreWeave has secured its supplies and built out infrastructure designed specifically for performance-sensitive workloads like large language model training, inference, and high-resolution rendering.
The company went public last week, launching its stock on the Nasdaq under the ticker symbol CRWV, and reportedly aimed to raise approximately $3 billion at a valuation of around $26 billion. The offering included 49 million shares priced between $47 and $55 each. The IPO is a pivotal moment for CoreWeave, and a broader signal of investor appetite for AI infrastructure.
In 2024, CoreWeave generated $1.92 billion in revenue, but posted a net loss of $863.4 million. Microsoft accounted for more than 60% of that revenue, and the company has secured an $11.9 billion, five-year contract with OpenAI, which is also participating in the IPO through a $350 million private placement.
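For concreteness, here is the back-of-the-envelope arithmetic implied by those figures; the sketch below simply restates the numbers above rather than adding any new data.

```python
# Back-of-the-envelope math using only the figures reported above.

shares_offered = 49_000_000           # shares in the offering
price_low, price_high = 47, 55        # marketed price range, USD per share

raise_low = shares_offered * price_low    # implied proceeds at the bottom of the range
raise_high = shares_offered * price_high  # implied proceeds at the top of the range

revenue_2024 = 1.92e9                 # 2024 revenue, USD
microsoft_share = 0.60                # Microsoft's reported share (">60%")
microsoft_revenue_floor = revenue_2024 * microsoft_share  # lower bound from one customer

print(f"Implied offering proceeds: ${raise_low/1e9:.1f}B to ${raise_high/1e9:.1f}B")
print(f"Revenue from Microsoft alone: more than ${microsoft_revenue_floor/1e9:.2f}B "
      f"of ${revenue_2024/1e9:.2f}B total")
```

In other words, more than $1.15 billion of 2024 revenue came from a single customer, which is the concentration risk the next paragraph turns to.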
These headline numbers are impressive. But beneath the surface, CoreWeave is navigating a high-stakes balancing act. It relies on a single supplier, Nvidia, for its GPUs. It rents its real estate. It doesn’t own the chips[1], the land, or the software stack. It’s an intermediary in a market dominated by hyperscaler giants, and those giants can change the rules at any time.
Performance Without Ownership
CoreWeave operates a network of high-density data centers designed to host thousands of high-performance GPUs in tightly coupled clusters, offering low-latency interconnects and high-throughput compute. This infrastructure is mission-critical for modern AI workloads.
However, unlike some of the hyperscalers or vertically integrated tech companies, CoreWeave does not own the real estate underpinning its infrastructure. Instead, it leases facilities from data center operators like Digital Realty and Chirisa, who provide space, power capacity, and often the mechanical and electrical systems.
This model enables faster deployment and geographic flexibility, without the heavy up-front costs of construction or land acquisition. But it also exposes CoreWeave to rent escalation, renewal risk, and the absence of asset appreciation. The data centers housing its infrastructure, which are arguably as central to its business as the GPUs themselves, are not on its balance sheet.
As demand for AI-ready data centers grows and suitable power-dense real estate becomes scarcer, landlords will be able to extract increasing value from tenants. CoreWeave, having sunk capital into site-specific buildouts, will find itself locked into long-term leases with limited leverage to renegotiate terms. This exposes it to inflationary pressures and long-term margin erosion.
Contrast this with McDonald’s, which strategically owns the land beneath most of its franchises. Real estate control isn't just a hedge—it’s a moat. In this light, CoreWeave looks vulnerable. It is not building an empire. It is renting a tent in someone else’s arena.
Dancing with the Dominant Chipmaker
At the other end of the stack, CoreWeave relies on a single supplier for its GPUs: Nvidia.
Nvidia not only manufactures the hardware but also controls the surrounding software ecosystem: CUDA, cuDNN, and the other layers that modern GPU-based ML workloads depend on. CoreWeave is not diversifying into custom silicon, nor is it pursuing alternative architectures. Its service is fundamentally Nvidia-native.
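To make "Nvidia-native" concrete, here is a minimal sketch of a typical training step. PyTorch is used purely as an illustration (the piece names no specific frameworks), but the pattern is the same across most modern ML stacks: the workload assumes Nvidia's CUDA runtime is present.

```python
# Illustrative only: a typical PyTorch training step assumes CUDA is available.
# Moving this workload off Nvidia hardware means replacing the device backend,
# not just swapping the cloud provider underneath it.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4096, 4096).to(device)      # parameters live in GPU memory
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 4096, device=device)            # batch allocated on the GPU
loss = model(x).square().mean()
loss.backward()
optimizer.step()

print(f"Running on: {device}")   # "cuda" on any Nvidia-backed cloud, CoreWeave included
```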
To date, the company has effectively secured Nvidia GPUs, including H100s and A100s, even during periods of extreme scarcity. But that access is a contingent advantage, not a durable one. As hyperscalers increase procurement and as sovereign compute initiatives scale up, CoreWeave’s supply advantage may erode.
Moreover, Nvidia’s own strategic posture adds further uncertainty. It may expand its direct-to-consumer services or privilege larger infrastructure partners. It may prioritize sovereign clients or hyperscalers offering global footprint scale. In any case, CoreWeave remains downstream of a dominant supplier with significant pricing and allocation power. And in a world where the fabs (not the buyers) are the constraint, Nvidia holds the cards.
Capital-Light or Capital-Deferred?
CoreWeave’s capital-light infrastructure is often framed as efficient. But it may be more accurate to describe it as capital-deferred. Leasing data centers allows for rapid scaling, but it requires long-term, non-cancellable contracts. The hardware depreciates rapidly, and if demand softens or prices compress, the downside exposure will be severe.
CoreWeave is effectively betting that continued scarcity will keep prices high long enough for it to recoup these costs. But once the bottleneck eases—and it will—margins will compress sharply. The arbitrage disappears.
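A stylized sketch of that bet, using purely hypothetical figures: the hardware cost, useful life, lease cost, and hourly rental prices below are invented for illustration and are not CoreWeave's numbers.

```python
# Purely illustrative unit economics for a single rented GPU, per hour.
# Every number below is an assumption made up for this sketch, not a
# CoreWeave disclosure, and utilization is assumed to be 100%.

hardware_cost = 30_000               # assumed purchase price of one GPU, USD
useful_life_hours = 4 * 365 * 24     # assume ~4 years before obsolescence
depreciation_per_hour = hardware_cost / useful_life_hours    # ~$0.86/hr

lease_and_opex_per_hour = 0.60       # assumed rent, power, and operations per GPU-hour
cost_per_hour = depreciation_per_hour + lease_and_opex_per_hour   # ~$1.46/hr

for rental_price in (4.00, 2.00, 1.00):   # scarcity pricing vs. normalized pricing
    margin = (rental_price - cost_per_hour) / rental_price
    print(f"${rental_price:.2f}/hr -> gross margin {margin:+.0%}")

# Roughly +64% at $4/hr, +27% at $2/hr, and about -46% at $1/hr:
# the fixed costs don't move, so margins fall as fast as prices do.
```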
This is not a strategy for long-term moat building. It is a high-stakes bet on timing: that CoreWeave can ride the scarcity curve long enough to transform itself into something more durable.
Where’s the Software Stickiness?
Another key question: how sticky is CoreWeave?
It currently offers containerized compute environments and orchestration features, but there is little evidence of a proprietary developer platform, SDK ecosystem, or AI-optimized stack that meaningfully locks in customers. It lacks a control plane, a proprietary framework, or any substantial abstraction layer that drives ecosystem gravity.
If AWS or Azure suddenly loosen their GPU quotas—or if Lambda Labs, Voltage Park, or other GPU clouds undercut on price—switching costs may prove minimal. Without ownership of a software layer or proprietary middleware, CoreWeave risks becoming a commoditized passthrough provider. Infrastructure without platform, in a world where platforms dominate.
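One way to see how thin the lock-in is: containerized workloads typically talk to standard Kubernetes APIs rather than anything provider-specific, so moving them often amounts to pointing the same tooling at a different cluster. A minimal sketch, assuming the official kubernetes Python client, a valid kubeconfig, and nodes that advertise the standard nvidia.com/gpu resource:

```python
# Minimal sketch: this script works unchanged against any Kubernetes-based GPU
# cloud, which is exactly why switching costs for containerized workloads can
# be low. Assumes the official `kubernetes` client and a configured kubeconfig.
from kubernetes import client, config

config.load_kube_config()       # switching providers is largely a kubeconfig change
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    gpus = node.status.capacity.get("nvidia.com/gpu", "0")
    if gpus != "0":
        print(f"{node.metadata.name}: {gpus} x nvidia.com/gpu")
```

Nothing in that snippet is specific to any one provider, and that is the point.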
The Bull Case: Riding the Scarcity Curve
Despite these structural vulnerabilities, the bullish case rests on short- to mid-term scarcity and rapid execution:
Scarcity monetization: CoreWeave secured GPU access when others couldn’t and is monetizing the imbalance.
Nvidia alignment: Some investors view it as the Nvidia-native cloud, much like AWS rode the Intel wave.
Non-hyperscaler demand: AI-native firms want more control and less friction than traditional clouds offer.
Capital deployment speed: By renting its real estate, CoreWeave can scale quickly without multi-year buildouts.
Liquidity pathways: The IPO and OpenAI’s private placement offer exit routes for early investors.
There is also the possibility that CoreWeave evolves. A bull might point out that AWS, too, started out fully reliant on Intel chips and still built durable advantages on top of that dependency. CoreWeave could likewise develop a proprietary orchestration layer, chip co-design partnerships, or a developer SDK ecosystem. If the team proves agile and execution-focused, a moat could still be built.
But the escape routes are narrow. Building developer ecosystems, SDKs, or hardware partnerships takes time and capital. If scarcity ends before the moat is built, the window closes.
What Happens When the Scarcity Premium Ends?
At some point, GPU supply constraints will ease. When that happens, pricing will normalize, and CoreWeave’s margins will come under pressure. Customers will migrate to cheaper providers or hyperscalers with better software integrations. Lease obligations, sunk infrastructure costs, and rising rent will be anchors.
Unless CoreWeave diversifies beyond its current intermediary role—by owning software layers, securing exclusive supply, or vertically integrating its real estate—its structural exposure will become more visible as the GPU market opens up and competition for compute intensifies.
Conclusion: A Fast Climber on a Fragile Ladder
CoreWeave is executing extremely well. It has delivered infrastructure at speed, found product-market fit, and capitalized on one of the most valuable supply bottlenecks in modern tech.
But its position is structurally precarious. It relies on one provider for its GPUs, and it rents its physical footprint. It does not yet own the software, hardware, or distribution layers that confer lasting strategic control.
Whether it can evolve into a vertically integrated platform, a developer-first ecosystem, or a sovereign compute partner remains to be seen. For now, it is a high-performing, capital-leveraged intermediary in one of the most important markets of the decade. But it also sits between giants who can renegotiate the terms of engagement at any time.
[1] By which I mean it doesn’t own the IP underlying the chips. It buys the chips and holds them as assets on its balance sheet, but all that entitles CoreWeave to is the revenues that flow from renting those chips out. Once the chips are no longer useful, it has to spend more cash to buy more chips.