I. Introduction: A False Hope
There’s a lie infecting Silicon Valley: that GPT-4 is the new iPhone, and prompt engineering is the new app store. But LLMs aren’t platforms. They’re predatory runtime engines, and if you build on them, they will eat you.
To understand why, we need to revisit the history of another agglomerative force: Google in the 2000s.
II. The Google Precedent: How Aggregators Subsumed the Surface Web
In the early 2000s, Google was hailed as the gateway to the internet. But over time, it stopped merely pointing to useful tools and started becoming those tools:
It killed browser toolbars by integrating search into Chrome.
It killed RSS readers by absorbing their function into Google News and personalized feeds.
It killed weather sites, dictionaries, and finance dashboards by surfacing answers directly on the search results page.
Google became an interface centralizer, pulling more and more functionality into itself. It watched what users clicked on, then internalized that functionality in native interfaces, bypassing the services it once amplified.
Now LLMs are doing the same thing, but at an even more fundamental level.
III. LLMs as Agglomeration Engines
The difference is that LLMs don't need to point, scrape, or surface. They can emulate.
An LLM isn't a gateway. It's a latent software engine that learns to replicate the behavior of tools built on top of it.
Summarization tools? Prompt it.
Grammar checkers? Prompt it.
Meeting note takers, personal CRM, slide creators, email generators? Prompt it.
Any generalized workflow that can be expressed in language becomes a latent function inside the model. As soon as a use case proves popular, the model providers (OpenAI, Anthropic, Google) can internalize that behavior in their own interfaces: ChatGPT, Claude, Gemini.
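The compressibility claim above can be made concrete. Most wrapper products reduce to a prompt template plus string interpolation; everything else is UI. A minimal sketch (the product categories and function names are illustrative, and no real model API is called here):

```python
# Two "startups," reduced to their essence: prompt templates.
# The capability lives in the model; the wrapper contributes interpolation.

GRAMMAR_CHECKER_PROMPT = (
    "You are a meticulous copy editor. Rewrite the text below to fix "
    "grammar and spelling, changing nothing else.\n\nText:\n{text}"
)

SUMMARIZER_PROMPT = (
    "Summarize the following text in {n} bullet points:\n\n{text}"
)

def build_prompt(template: str, **fields: object) -> str:
    """The entire 'product': fill a template, send it to someone else's model."""
    return template.format(**fields)

prompt = build_prompt(SUMMARIZER_PROMPT, n=3, text="Quarterly revenue rose 12%.")
```

Anything this thin is trivially observable from the model provider's side of the API, and therefore trivially absorbable.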
And as the Model Context Protocol (MCP) gains traction, this absorption will only accelerate. MCP is an open protocol that standardizes how LLMs access external tools and data. If OpenAI integrates MCP, every app built with model-queriable data becomes a potential annex of the model runtime.
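To see why MCP accelerates absorption, look at what it standardizes: a tool is advertised to the model as a name, a description, and a JSON Schema for its inputs. The dict below is an illustrative approximation of that shape (the tool itself is hypothetical; consult the MCP specification for the normative wire format):

```python
# Sketch of the kind of tool descriptor MCP standardizes: once a wrapper
# app publishes this, any MCP-capable model client can invoke the tool
# directly, with no app-specific UI in the loop.

crm_lookup_tool = {
    "name": "crm_lookup",  # hypothetical tool exposed by a wrapper app
    "description": "Look up a contact in the company CRM by email address.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "email": {"type": "string", "description": "Contact email address"},
        },
        "required": ["email"],
    },
}
```

The descriptor is the absorption vector: it hands the model a machine-readable map of exactly what the app does and what data it can reach.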
Wrapper startups are not building businesses. They are training their predator.
IV. The Incentives of the Model Providers
OpenAI, Anthropic, and other frontier labs have two core incentives:
Capability Accumulation: Every novel function performed through their model teaches the system something. The more diverse the tasks users attempt, the more capable the model becomes. Wrapper startups essentially act as unpaid R&D labs for the foundation model.
User Lock-In: The more workflows the model can perform natively, the deeper its hold over users. Native memory, file storage, plugin support, and code execution aren't just features—they're control mechanisms. The goal is to become the cognitive OS of every knowledge worker.
In other words: the LLM doesn't want to enable you. It wants to be you.
V. The VC Delusion: LLMs Are a SaaS 2.0 Platform
Despite this, investors are throwing money at GPT wrappers, convinced that this is the next great application frontier. But the structural assumptions that made SaaS 1.0 viable do not hold here:
SaaS 1.0 lived atop stable platform layers (Windows, iOS, cloud).
LLM wrappers live atop a constantly mutating substrate that actively internalizes them.
VCs want to believe in horizontal LLM-based tools because it fits a pattern they recognize. But the pattern is wrong. The LLM ecosystem is not a software platform. It is a parasitic substrate that digests anything shallow enough to be expressed in natural language.
In SaaS 1.0, the margin pooled at the application layer. In the LLM stack, the margin is gravitationally pulled to the foundation model layer—where data, usage signal, and user lock-in accumulate. The wrapper startup bears the cost of UX polish and user education, but OpenAI captures the learning and loyalty.
If your startup is just a prompt and some UI polish, you're a feature request away from being extinct.
VI. Outliers and Moats: Who Survives?
There are exceptions. A few categories of LLM-native or LLM-adjacent apps can build defensible moats:
Regulated workflows (legal, healthcare, compliance) where auditability, traceability, and liability create barriers to generalization.
Proprietary data ecosystems where the model lacks access: closed corpuses, secure walled gardens, telemetry.
Industrial domains like energy, manufacturing, or logistics, where domain logic, sensor data, and physical process integration prevent shallow replication.
Hardware-tethered applications, e.g., robotics or retail, where physical control flows require tight coupling.
User-generated data flywheels, where the app generates novel training data through feedback, iteration, and engagement loops (e.g., Windsurf, which OpenAI reportedly acquired for ~$3B).
But these are edge cases. Most LLM-native apps will never get there.
VII. Lessons for Entrepreneurs and VCs
1. Don't build what the model can learn. If your app is easily compressible into a prompt, it will be.
2. Build around what the model can't see. Proprietary, regulated, real-world, or deeply embedded contexts offer the best shot at defensibility.
3. Remember the core asymmetry: you're training it, not the other way around. Every user interaction with your app flows upstream as signal to the model layer.
4. If you want to build on LLMs, integrate them into something more complex. Use them as components, not as the whole stack.
5. If you're a VC, stop funding interchangeable wrappers. Fund verticalization, proprietary data access, and real integration work. Don't bet on "SaaS 2.0." Bet on the parts of the stack that can't be commoditized.
VIII. Conclusion: Build Where the Model Can't Go
The LLM revolution is real, but not in the way most people think. These are not platforms. They are agglomerators. They collapse the functionality of shallow apps into themselves. If you're building on top of an LLM, you are either:
A short-lived feature extension that will soon be absorbed, or
A deeply embedded, proprietary system that constrains the model to your context.
There is no in-between.
LLMs aren't SaaS 2.0. They're the SaaS endgame. And if you want to survive, build where the model can't go.