Most AI startups made the same mistake: they thought they were building on a platform. They were building inside a predator.
I. Introduction: The Cliff Disguised as a Runway
The AI startup ecosystem is waking up to a bitter truth. For the last 18 months, thousands of startups, many backed by top-tier venture firms, have bet their futures on the idea that large language models (LLMs) represent a new application platform. This idea is seductive. APIs are easy to use. Demos impress investors. Wrapper startups raise quickly and ship fast.
But most of these startups are built on a category error: they assume that model providers are stable platforms, analogous to AWS or iOS. That assumption is wrong. Model providers are not platforms. They are predators.
II. The Mirage of Modularity
The central delusion in the LLM startup boom is the fantasy of composability. Founders assumed they could build billion-dollar products on top of Claude, GPT-4, or Gemini the same way companies once built on Windows or AWS. But unlike cloud infrastructure, foundational model providers are not neutral layers in a tech stack. They are vertically integrated, end-to-end product firms.
OpenAI doesn’t just want to license GPT-4 to developers. It wants to own the chat interface, the user account, the distribution, the trust layer. So does Anthropic. So does Google. The analogy to AWS breaks down because AWS never tried to compete with its customers for consumer mindshare. These firms do.
When you build on someone else’s model, your destiny is not in your hands. You are not a platform. You are an input. A test case. An experiment. If you grow too large, you are a threat. If you grow too slowly, you are a rounding error. Either way, you are disposable.
III. The Venture Ecosystem’s Strategic Mistake
The LLM wave has exposed a strategic blind spot shared by both investors and founders: a tendency to confuse ease of prototyping with durability of business model. Venture capital flooded into the space, buoyed by slick demos and fast shipping cycles. Startups that layered thin UX over public APIs were treated as infrastructural plays.
But they were not platforms. They were interfaces perched atop a volatile substrate.
Many believed that foundational model providers would behave like cloud infrastructure: predictable, stable, content to monetize compute. But these model providers are not inert pipes. They are dynamic players with their own downstream ambitions. They seek not to empower startups but to replace them.
IV. Exceptions That Prove the Rule
Some startups will survive this collapse. A few might even thrive. But they have one thing in common: non-substitutable leverage.
Distribution Moats: Firms with embedded relationships (e.g., in healthcare, enterprise SaaS, or legal tech) that use LLMs to augment workflows that customers already rely on. Their strength isn’t the model. It’s the integration.
Proprietary Data: Companies with unique datasets, whether vertical, structured, or real-time, that make their product meaningfully better than what OpenAI or Anthropic can build in-house. Example: a radiology company with access to millions of labeled diagnostic scans. Note, however, that merely owning proprietary data is not sufficient; you must also be legally allowed to use it and able to integrate it into your product's workflow.
Inference Control: Startups that host or fine-tune their own models, including small LLMs or synthetic architectures, gaining cost control, latency advantages, and product sovereignty.
Synthetic Platforms: A rare few are building orchestration layers, agent frameworks, or memory architectures that are sufficiently complex and defensible to attract network effects. These are not wrappers. They are emergent operating systems for intelligent work.
V. Why the LLM Wrapper Thesis Was So Seductive
Investors and founders both fell into the trap for a reason. LLM wrappers offer:
Instant demos: A few OpenAI calls and a React frontend, and you’re live.
Velocity: Teams could iterate, raise, and scale in weeks.
Low burn: Minimal infra, minimal hiring, cheap to test.
In an environment flooded with hype and capital, these advantages were irresistible. But they obscured the core strategic weakness: zero control over the core value engine.
Founders overfit to surface-level traction. VCs over-indexed on growth curves. Nobody asked the deeper question: what happens when OpenAI launches a feature that replicates your startup in three lines of code? What happens when Anthropic rate-limits you, or Google imposes a non-compete clause on you?
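The "three lines of code" jab is barely an exaggeration. Here is a deliberately minimal sketch of the wrapper architecture this section describes; `call_model` is a hypothetical stand-in for a real hosted-model API call (stubbed so the sketch runs without credentials), and `summarize_contract` is an invented example product, not any specific startup.

```python
# Minimal sketch of an "LLM wrapper" product. The stub below stands in
# for a provider's hosted-model API; everything the startup actually
# owns is the prompt template around it.

def call_model(prompt: str) -> str:
    """Stand-in for the provider's API call: the entire value engine."""
    return f"[model output for: {prompt[:40]}...]"

def summarize_contract(contract_text: str) -> str:
    """The whole 'legal tech AI product': one prompt template plus the call."""
    prompt = f"Summarize the key obligations in this contract:\n\n{contract_text}"
    return call_model(prompt)

print(summarize_contract("The tenant shall pay rent on the first of each month."))
```

Everything defensible here lives on the other side of `call_model`, which is precisely why a provider can ship the same feature as a toggle.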
VI. Vertical Integration Is Inevitable
Model providers are not acting irrationally. In fact, they are doing what any rational firm with monopoly power would do: climb the stack, extract margin, and control the consumer relationship.
The idea that foundational model companies would remain infrastructure-only was always naive. If you control the model, the interface, and the data flywheel, why would you leave money on the table? Why allow a third-party startup to become the next Salesforce when you can just become Salesforce yourself?
The AI ecosystem is undergoing a phase shift. What we’re witnessing is akin to Facebook absorbing the best features of its ecosystem (photos, check-ins, events) or Microsoft bundling Excel clones into the OS. When compute becomes intelligent, vertical integration becomes destiny.
VII. What Founders Should Do Now
If you’re building on top of someone else’s LLM, you must ask yourself:
What prevents OpenAI from doing this themselves?
What moats do I control that are not model-dependent?
What happens if API access is revoked tomorrow?
If you can’t answer those questions robustly, you need to pivot. Fast. That doesn’t mean abandoning AI. It means rethinking what layer you actually occupy.
Here’s a new heuristic:
If you’re closer to the user than the model provider is, you might survive.
If the model provider can replace you with a feature toggle, you’re already dead.
Founders should map their dependency stack and ruthlessly decouple from anything that can be commoditized. Data, distribution, and inference control are the real leverage. Everything else is at risk.
VIII. Conclusion: When the Substrate Shifts
There’s no shame in having bet on the wrong abstraction layer. But there is danger in clinging to it. LLM wrappers were an artifact of a moment when access was wide open, differentiation was assumed, and vertical integration hadn’t yet arrived.
That moment is over.
The next era belongs to those who control more than just the interface. Data. Distribution. Infrastructure. These are the new moats. The rest is latency.
The platform trap has been sprung. The question now is: who escapes it in time?
Interesting piece, but I don’t quite agree with the idea that the big model providers want to dominate the entire loop. These are large, economically rational companies, and they tend to follow a familiar pattern: focus on doing one thing exceptionally well—in this case, building foundational models—and let startups handle the riskier, more chaotic business of creating and distributing applications.
Right now, offerings like ChatGPT are strategically useful to model providers because they serve as data funnels and feedback loops, helping them refine and evaluate their models. But that doesn’t necessarily mean they’re aiming to be customer-facing in the long term. As the ecosystem matures, it’s likely they’ll recede into the infrastructure layer, just as AWS powers the cloud without owning most of the apps people actually use.
Trying to dominate the entire stack would not only be strategically messy, it would also choke the very ecosystem that drives model usage and innovation. Their long-term win is to become indispensable utilities by powering the AI economy without having to build all of it themselves.
Good post and substack overall. I started reflecting on your idea — and I have a pretty different take. I don't think LLM startups will go away. I do think they'll have to do a LOT more than be a simple LLM wrapper.
My reply started running long, so I put it into a short Substack post here:
https://open.substack.com/pub/tomaustin1/p/ai-business-models-the-smart-grad?r=2ehpz&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false