Today’s free post tackles a question I’ve been hearing more often: how should institutional investors, including pension funds, family offices, and sovereign wealth funds, approach AI exposure?
I’ve fielded several inbound inquiries from LPs trying to map the AI landscape to their mandates. What follows is how I think about AI as an investment opportunity. It’s contrarian, yes, but increasingly, it’s also correct.
If you find this useful, consider subscribing. I write across the entire AI stack, from startups to models to infrastructure, with a focus on capital flows, power dynamics, and long-range strategy. My contact details are at the bottom of the post. I’m always open to conversations.
If you want to support my work more directly, consider upgrading to a paid subscription.
The first wave of capital in this cycle came from insiders and high-conviction early movers. The second wave rode narrative momentum. The third, now underway, is institutional: long-duration capital entering AI through GP-led co-investments. But what these allocators are being offered is often not what they think they’re buying. The map no longer matches the territory.
The appeal is obvious: access to curated deals, platform validation, and exposure to the most hyped sector in recent memory. But AI is not like SaaS, and many LPs are inheriting a version of the future that was priced at the peak of the hype cycle, one that no longer maps cleanly to operational, technical, or geopolitical reality.
AI isn’t a normal technology cycle. It’s a nonlinear, energy-constrained, compute-gated industrial transition, layered atop sovereign interests and infrastructural bottlenecks. Understanding that complexity, and how your position in the capital stack interacts with it, is critical.
Let’s unpack the risks, reframe the incentives, and sketch a strategy that matches institutional capital with the kind of leverage that actually endures.
I. GPs and LPs Have Different Incentives
GPs and LPs operate under different constraints.
GPs optimize for time-sensitive outcomes: IRR, DPI, follow-on validation, narrative heat.
LPs, especially long-horizon allocators, optimize for durability, control, and strategic positioning over multi-decade horizons.
That doesn’t make one party wrong. But it does mean that what looks like a reasonable exposure to a GP, such as an inside round in a hot AI startup, can look like unhedged fragility to an LP with no board seat, no operational leverage, and no tolerance for stranded capital.
The real risk for institutional allocators isn’t bad faith. It’s mistaking GP-calibrated opportunity for LP-calibrated strategy.
II. The Mirage of AI Co-Investment as a Safer Entry Point
Co-investment is often framed as a way for LPs to gain efficient access to later-stage deals: closer to maturity, lower perceived risk. But in this AI cycle, most of these opportunities are not late-stage software companies. They’re early-stage research organizations with uncertain monetization, high burn, and upstream dependencies they do not control.
Common features:
Rely on APIs from foundation model providers (OpenAI, Anthropic, etc.)
No access to proprietary infrastructure (compute, energy, latency-optimized datacenters)
Burn capital at industrial scale, but report metrics like they’re SaaS
Exit timelines undefined, unless acquisition is subsidized by geopolitics or defensive Big Tech strategy
This is not software-as-a-service. This is inference-as-a-cost-center.
The GP may see an opportunity to mark up paper quickly. That may be a perfectly rational move for them. But an LP with longer timelines, less liquidity, and no technical diligence capacity may be taking on exposure that looks safer than it is.
III. The Underlying Fragility in Most AI Startups
Here’s the structural risk stack many LPs are stepping into:
Model Dependency: Most startups don’t own their foundation models. They resell access to models from OpenAI or Anthropic. If the upstream API changes, their product evaporates.
Compute Scarcity: Access to H100s, power, and cooling is not fungible. Scarcity at the infrastructure layer kills scaling plans, and it’s outside the startup’s control.
Moat Illusions: Fine-tuning is not a moat. Neither is vertical focus if the same model can be called by anyone with an API key.
Regulatory & National Security Overhang: AI is drifting into dual-use classification. Export controls, CFIUS reviews, sovereign licensing regimes: these will land unevenly.
Exit Illiquidity: Few of these companies are profitable. Many have no clear IPO path and no acquisition options beyond a handful of hyperscalers or state-backed buyers.
Compare two recent cases:
Stability AI: A foundation model company that burned through cash, lost technical leadership, and struggled to find product-market fit. LPs who entered during the hype are now sitting on illiquid, governance-light positions in a declining asset.
CoreWeave (via Blackstone): A GPU cloud provider with deep infrastructure control. Blackstone committed $7.5B not to AI models, but to the industrial substrate that all models require: land, power, cooling, and compute. They’re not competing with OpenAI. They’re renting them rack space.
That’s the difference between chasing the narrative and owning the constraint.
IV. Optionality Is Valid If You Price It Correctly
Some AI investments are rational bets on convexity: right-tail exposure to transformative breakthroughs in language modeling, multimodal systems, or agentic workflows.
But that’s not how most LPs are underwriting them.
If you’re making a volatility bet, treat it as such:
Size the position accordingly
Demand milestone-based tranching
Secure governance rights
Expect a power-law distribution, and prepare for zeroes
What you should not do is price optionality like enterprise software.
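To make the sizing point concrete, here is a minimal Monte Carlo sketch of a power-law sleeve. Every parameter (the zero rate, the tail exponent, the deal count) is an illustrative assumption, not a calibration to any real portfolio:

```python
# Minimal Monte Carlo sketch of power-law venture outcomes.
# All parameters are illustrative assumptions, not market estimates.
import random

random.seed(42)

ZERO_PROB = 0.6      # assumed share of positions that go to zero
PARETO_ALPHA = 1.2   # assumed tail exponent for the survivors
N_DEALS = 20         # deals in the hypothetical AI sleeve
N_TRIALS = 10_000    # simulated portfolios

def deal_multiple() -> float:
    """One deal's exit multiple: most go to zero, a few are outliers."""
    if random.random() < ZERO_PROB:
        return 0.0
    return random.paretovariate(PARETO_ALPHA)  # heavy-tailed upside

# Equal-weighted gross multiple for each simulated portfolio
portfolios = sorted(
    sum(deal_multiple() for _ in range(N_DEALS)) / N_DEALS
    for _ in range(N_TRIALS)
)

mean = sum(portfolios) / N_TRIALS
median = portfolios[N_TRIALS // 2]
below_1x = sum(p < 1.0 for p in portfolios) / N_TRIALS

print(f"mean multiple:     {mean:.2f}x")
print(f"median multiple:   {median:.2f}x")
print(f"P(portfolio < 1x): {below_1x:.0%}")
```

In a typical run, the mean looks healthy while the median hovers near break-even, with a large minority of portfolios underwater. That gap between mean and median is the whole argument for small sizing, milestone tranching, and a stated tolerance for zeroes.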
V. Sovereign Interests Are Not a Bug, But They Must Be Explicit
Many LPs, especially sovereign funds or state-aligned pools of capital, are investing in AI for reasons beyond return:
Technology transfer
Soft power signaling
National capacity building
Strategic hedging against U.S.-China bifurcation
That’s real. But it must be explicitly acknowledged in investment committees. What causes damage is when geopolitical logic gets laundered through GP term sheets, or when LPs mistake strategic positioning for capital-efficient return exposure.
VI. Where LP Capital Should Actually Be Going
Long-horizon capital thrives when it owns the foundation, not the froth.
Own the Bottlenecks:
Data centers: Grid adjacency, power draw, land, and cooling capacity are now sovereign chokepoints.
Compute supply chain: H100s, CoWoS packaging, high-bandwidth memory, photonics. This is the new oil.
Energy: AI is now an energy sink. Baseload power and transmission resilience matter more than SaaS metrics.
Sovereign model infrastructure: National labs, dual-use model licensing, and frontier safety governance.
Case in point: Blackstone’s bet on CoreWeave isn’t about AI exposure. It’s about being indispensable to everyone who wants AI exposure.
Own Durable Verticals:
Regulated sectors (healthcare, defense, logistics) where deployment requires deep compliance and incumbents have data moats.
Buy Influence, Not Just Equity:
Board seats, preferred compute access, strategic vetoes
Align governance with capital scale
VII. A Framework for LPs to Evaluate AI Exposure
Six questions to ask on every AI deal:
Who owns the model?
Who owns the data?
Who owns the distribution?
What happens when OpenAI or Google releases the same feature?
Is this upstream, downstream, or disposable?
Does this serve a sovereign strategy, or threaten one?
If you can’t answer these, you’re hoping, not investing.
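One way to enforce that discipline is to make the six answers a required artifact in every memo. Here is a hypothetical sketch; the class name, field names, and the “disposable” vocabulary are my labels for the questions above, not any standard diligence schema:

```python
# Hypothetical diligence checklist encoding the six questions above.
# Field names and example values are illustrative, not a standard schema.
from dataclasses import dataclass, fields

@dataclass
class AIDealDiligence:
    model_owner: str         # Who owns the model?
    data_owner: str          # Who owns the data?
    distribution_owner: str  # Who owns the distribution?
    incumbent_response: str  # What happens when OpenAI or Google ships this?
    stack_position: str      # "upstream", "downstream", or "disposable"
    sovereign_angle: str     # Serves a sovereign strategy, or threatens one?

    def unanswered(self) -> list[str]:
        """Any blank field means you're hoping, not investing."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

deal = AIDealDiligence(
    model_owner="Anthropic (via API)",
    data_owner="",               # unknown: flagged below
    distribution_owner="self",
    incumbent_response="",       # unknown: flagged below
    stack_position="downstream",
    sovereign_angle="none identified",
)
print("Unanswered:", deal.unanswered())  # -> ['data_owner', 'incumbent_response']
```

If the list comes back non-empty, the memo is not ready, and neither is the check.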
Final Word
Don’t just chase the AI narrative. Own what the narrative depends on to exist.
This is not about being cynical. It’s about being synchronized: matching capital strategy to the structural realities of AI as it scales into physical constraints and sovereign competition.
AI will create generational winners. But most value won’t flow through startups. It will accrue to those who control the inputs: compute, power, distribution, and regulatory permission.
The GPs are playing one game. LPs must know whether that game aligns with their own. When it doesn’t, the right move is not to abstain, but to reposition. Own the substrate. That’s not a hedge. That’s leverage.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: