Sequoia Still Thinks AI Is Software
When models eat your hot startup, don't say I didn't warn you
Sequoia Capital, one of the most influential venture firms of the internet era, recently hosted a talk on AI’s “Trillion-Dollar Opportunity.” It’s full of optimism, frameworks, and catchphrases about the coming “agent economy.” You can read notes from the talk here.
But beneath the buzzwords, a deeper flaw emerges: Sequoia still thinks AI is software. They are applying 2010s SaaS heuristics to a 2020s industrial transformation.
This isn’t just a category error. It’s a strategic misdiagnosis that will cost their LPs billions.
I. The Core Misread
Here’s Sequoia’s implicit model:
The value in AI accrues at the application layer
Infrastructure (compute, power, data centers) is “just plumbing”
What matters are UI polish, PMF, and vertical-specific apps
Killer apps will emerge, just like in mobile or cloud
In other words: this is just another wave of SaaS.
But this mental model is wrong. AI is not software. It is infrastructure-heavy, thermodynamically expensive, and physically gated.
II. AI is a Physical Substrate, Not a Digital Abstraction
Modern AI is bottlenecked not by clever apps, but by:
Power generation and grid interconnects
Physical land and real estate for data centers
Cooling systems and water usage rights
Multi-year permitting timelines
Supply-constrained GPU inventory with geopolitical exposure
These constraints don’t bend to agile cycles, MVPs, or blitzscaling.
Software is elastic. AI is entropic.
III. Value Accrues at the Base and the Center of the Stack
Sequoia imagines a thriving ecosystem of AI applications, each with vertical specificity, customer intimacy, and monetizable workflows. But they ignore two converging forces that are systematically collapsing this layer.
1. Infrastructure Scarcity
The true chokepoints in AI today are physical: power, land, cooling, and GPUs.
Capital flows toward constraints. That means the infrastructure layer, not the application layer.
Whoever controls physical AI infrastructure controls scale, timing, and margins.
2. Model Agglomeration
As models become more powerful and capable, they begin to absorb the functionality of the apps built on top of them.
Why use a standalone AI sales coach if GPT-6 already includes it as a plugin?
Why pay for an AI legal assistant when Claude or Gemini can summarize and draft contracts natively?
Why launch a “vertical AI startup” when OpenAI, Anthropic, or Meta can reabsorb your niche into the core model UX?
Models are becoming gravitational centers. Like Google in the 2000s, they’re positioned to ingest, commoditize, and repackage any isolated application insight back into the core experience.
The result: Apps are features. Models are black holes. Unless you own unique off-model data or have privileged distribution, you’re building on melting ice.
IV. The Agent Economy Is a Mirage
Sequoia trumpets the arrival of an “agent economy,” where AI agents autonomously transact, unlocking labor abundance and stochastic workflows.
But this vision is hand-wavy and premature:
There’s no functioning economic substrate for agents. No persistent identity, no trust primitives, no compliance architecture.
There’s no real incentive structure for agents to transact unless deeply embedded in workflows, marketplaces, or control systems.
And most “agents” today are just fragile wrappers over chat APIs. They lack persistent memory, real agency, statefulness, and durable incentives.
To the extent this future arrives, it will be built atop:
Crypto-based coordination protocols
Model-native identity layers
Deep integration with infrastructure-level trust and power systems
But none of that exists yet at meaningful scale.
V. Sequoia’s Problem Is Institutional
Sequoia has to believe value accrues at the app layer. Why?
Their funds aren’t structured for capital-intensive infra plays.
Their LPs expect liquidity in 7–10 years, not 15–20.
Their GPs are optimized for product-led growth, not industrial-scale procurement.
So they default to what they know:
Software heuristics
Market segmentation
GTM strategies
“Taste as a moat”
They talk about “stochastic mindsets” and “agent workflows” because those are investable narratives under their current fund model. But these abstractions camouflage a lack of control over the actual constraints.
They treat models as middleware and apps as moats, but the real moat is upstream, in the infrastructure that powers, trains, and governs the models themselves.
VI. What Comes Next
The firms that win the AI era won’t look like Zoom or Notion.
They’ll look like:
Utilities: negotiating power purchase agreements and grid access
REITs and land banks: owning and zoning compute-friendly acreage
Commodities traders: arbitraging compute like barrels of oil
Defense-industrial contractors: aligning AI scale with national infrastructure
GPU sovereigns: securing long-term control of training hardware
This is slow, capital-intensive, and geopolitically entangled. But that’s where the leverage is. Not in your 43rd AI copilot for HR workflows.
VII. Conclusion
Sequoia is still pitching AI like it’s SaaS: apps, engagement, vertical specificity, consumer trust, feature velocity.
But AI doesn’t follow the software script. It’s not cheap to build, not easy to scale, and not about finding PMF. It’s about infrastructure dominance, model gravity, and energy constraints.
If you want to win in AI, stop thinking like a product manager.
Start thinking like a power utility.