28 Comments
Elan Barenholtz, Ph.D.

Interesting piece, but I don’t quite agree with the idea that the big model providers want to dominate the entire loop. These are large, economically rational companies, and they tend to follow a familiar pattern: focus on doing one thing exceptionally well—in this case, building foundational models—and let startups handle the riskier, more chaotic business of creating and distributing applications.

Right now, offerings like ChatGPT are strategically useful to model providers because they serve as data funnels and feedback loops, helping them refine and evaluate their models. But that doesn’t necessarily mean they’re aiming to be customer-facing in the long term. As the ecosystem matures, it’s likely they’ll recede into the infrastructure layer, just as AWS powers the cloud without owning most of the apps people actually use.

Trying to dominate the entire stack would not only be strategically messy, it would also choke the very ecosystem that drives model usage and innovation. Their long-term win is to become indispensable utilities by powering the AI economy without having to build all of it themselves.

Tom Austin

Good post and substack overall. I started reflecting on your idea — and I have a pretty different take. I don't think LLM startups will go away. I do think they'll have to do a LOT more than be a simple LLM wrapper.

My reply started running long, so I put it into a short Substack post here:

https://open.substack.com/pub/tomaustin1/p/ai-business-models-the-smart-grad?r=2ehpz&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Dave Friedman

Thanks Tom — this is a sharp and thoughtful reply.

I think we actually agree more than it might seem at first glance. Your “smart grad” framing captures one of the core points I tried to make: general intelligence doesn’t equal operational entanglement. I framed survivorship around non-substitutable leverage—distribution moats, proprietary data, inference control, and synthetic platforms—but the examples you give (like Harvey or Cursor) fall squarely into that category.

Your piece adds important texture to how this defensibility manifests in practice: regulatory traps, permissioning layers, real-world sales motion, and most importantly, the allocation of elite technical talent. That trade-off, between pushing model capabilities vs. chasing vertical specificity, is something too few analysts understand.

If anything, your post reinforces the deeper thesis: most LLM startups are doomed, unless they solve hard problems that the model providers can’t or won’t touch without sacrificing their core advantage.

Glad you wrote this up. I’ll be linking to it in my follow-up. It deserves a broader read.

Zeke Kirson

Tom, as a follow-up to your post: I think what's more powerful than a smart grad with 10 years of domain experience is that same individual with the power of AI, like ChatGPT, at their back.

I think smart operators (experienced professionals) using the broad LLMs are a lot more likely than specialized AI agents operating on their own. And while we will need far fewer lawyers, accountants, and investment professionals, these professions are not going away.

Why? Because it's all about how the service fits into the life of the customer.

Companies won't want many different AI apps, and they won't trust agents acting on their behalf, at least not for a while. They already employ smart operators, and those individuals are going to get a lot more powerful with the assistance of AI.

What I think enterprises are looking for is a one-size-fits-all enterprise solution. Even if, say, ChatGPT is only 80% as capable as a law-specific model, one experienced lawyer with ChatGPT probably equals that law-specific model, and has a lot more trust from the firm. And for one fixed price you get many more benefits from ChatGPT than from juggling many different apps to perform different functions.

There's historical context for this: in Operation Crush, Intel beat out Motorola and Zilog, which had better microprocessors, because Intel offered the only all-encompassing enterprise solution.

Would be interested to hear your thoughts. I believe the assumption that AI agents are the future is overblown, but I really have no idea how it all shakes out.

Tom Austin

I'll write a longer post on this if I get time. But first off — I love AI tools and think they are the most amazing tools we've seen in my lifetime. I agree that there is a large set of customers who just want what we call "vendor consolidation" (i.e. one vendor to do everything for them), and the Claudes / OpenAIs can play to that to a degree. I think it's more likely you'll start to see better and better "plug-in" ecosystems (like apps in the Slack store).

Having said that — I think you're raising two separate questions:

1. Do we get more and better "co-pilots" that help humans do more work better, or do we get more and better "independent agents / AI workers"? I'd put my money on the former (smarter and smarter co-pilots, for a number of reasons). I was actually reflecting on this piece this morning: it's going to be VERY hard to have independent AI agents that can actually be fully functioning team members in modern workplaces, and I think we're very far from that. (This was me musing on that today; warning, it's long: https://tomaustin1.substack.com/p/ai-hype-the-wilson-problem-why-ai?r=2ehpz)

So, AI agents aren't overblown in terms of giving talented individual contributors who know how to use them "superpowers"; they are overblown in terms of being standalone teammates.

2. Which platforms win: one dominant "good enough" platform like Microsoft Office, or a bunch of "best in class" tools that may or may not integrate with each other? On this question, I think it depends. SMB and mid-market companies will often buy the "good enough" platform solutions (think HubSpot for marketing, or Microsoft Office including Teams, vs. separate tools for each task), and so will a subset of enterprises. But more enterprises will buy the "best in class" solutions and pressure those vendors to make sure their tools all work together and are integrated well. I don't think companies will willingly contribute their org culture and "secrets" to the training of these models, even if / when they evolve that far. And I think it's possible (not sure how likely) that we see the emergence of a tool / layer for managing all your own memories / data and interactions with foundation models (I brainstormed on that here yesterday: https://tomaustin1.substack.com/p/post-reply-i-hear-you-and-i-raise?r=2ehpz)

The thing I want to write about soon is why the predictions that SaaS businesses will die anytime soon are likely way off, IMO. SaaS businesses are hard: they have large teams focusing on product features that appear insignificant but really matter a lot, and even larger teams focusing on customer support, customer implementations and integrations, and account activation (and engagement). I think these will continue to favor vertical applications in many enterprise settings, and also for many niche SMBs (e.g. my sister is a lawyer with her own business using a law-specific ChatGPT product for contract review and drafting). Even this "simple seeming" wrapper in a highly regulated industry will likely survive for a long time, IMO.

I do think that the big foundation model providers want very much to be the "platform" (e.g. Apple or Slack) that all the apps have to plug into and live within. And that owning the "customer facing relationship" is really key. We're probably heading toward something like Salesforce AppExchange for AI—platform-based distribution with specialized functionality, rather than pure platform plays or completely fragmented tooling.

The missing piece in most analyses is implementation complexity. Even brilliant AI needs human judgment for deployment, customization, and ongoing optimization in enterprise contexts.

Zeke Kirson

Thank you, Tom, that is a very thoughtful reply and helps clarify how I’m thinking about the application

comex

I think this mixes together two threats of very different severities.

If Anthropic tries to shut down your API access, you can just switch to Google or OpenAI. LLMs are extremely fungible, and for the time being there’s enough competition that there shouldn’t be too much misbehavior.

But if Anthropic copies your product, then you’re in trouble.

Dave Friedman

Well, it's true that large language models are generally fungible. But if they are fungible, then it follows that no matter which LLM you build on, you are at risk of that provider subsuming your product into its native experience.

Zeke Kirson

Part of this head fake came initially from Jeff Bezos’ comments on AI and comparing the technology to electricity - something that could be built off.

But the reality is that GE and Westinghouse, in developing and commercializing electricity, did not have a relationship with their end customers. These LLMs have brand names and a literal relationship with their customers (in a way business has never thought of before), which can more easily translate into higher switching costs, because the AI has the context of thousands of questions and responses from the user, as well as the potential for scale and network economics (especially if they can own the distribution of the technology).

I think you raise an interesting point, although I'm not confident how it will play out. The applications of this technology are so broad that no one company, at least today, can possibly address all of its potential uses.

Pawel Brodzinski

Why are most LLM startups doomed?

Because most startups are doomed. And there's nothing magic about LLM startups that would make them fundamentally different.

That's the first answer to the question, even before getting into the friction between LLM providers and products built on top of said LLMs, or lack of sustainable competitive advantage (if I can build a product on top of LLM that easily, the next person can, too).

Djamal Eddine Ouikene

Great article. As someone working in the field, I share your concern—it seems inevitable that the major players will push their LLMs into every possible application layer. Microsoft, in particular, is known for aggressively moving into the end-user application space. The only notable exception, as you mentioned, is AWS. I attended their last Summit and continue to be surprised by their strategy. Unlike others, they're staying focused on the infrastructure and foundational model layers of AI, showing little interest in targeting the end-user directly.

Ved Shankar

I understand comex's point about models being fungible, but I see what you mean about being in danger of being replaced by the general-purpose model providers.

My guess is enterprise AI startups would not be as doomed, because the moat comes from sales and marketing. The same goes for products that get enriched with usage over time.

There's also a question of perception: do you prefer a dedicated AI writing tool like Lex, or just general editing through your favorite chatbot?

How much community loyalty you've built for your products (i.e. brand) can be another driver. You can see that divide between Anthropic and OpenAI already. The models are not the differentiator (mostly), but Anthropic's stance on AI safety is driving a certain tribe to them.

All guesses of course; I'm trying to build distribution first before building any huge SaaS anytime soon.

James

What do you think of LLM Wrappers like Cursor and Windsurf?

Dave Friedman

I suspect they will fare better than most because they generate their own proprietary data which they can use to build a moat.

James

I thought it’s more about integration for them. What proprietary data do they have? Chat history (memory)? Access to code bases?

Dave Friedman

Windsurf, for example, generates proprietary data when users use its product, and it uses that data to train its own models. See here: https://windsurf.com/blog/windsurf-wave-9-swe-1

James

I see. Thanks.

icarus91

The fact that OpenAI just purchased Windsurf confirms the author's thesis (and the analogy to Facebook: crush the weak, swallow potential threats).

Drew Meister

Great stuff David. Keep up the thoughtful commentary.

Dave Friedman

Thanks! Appreciate the support.

Drew Meister

I keep thinking: what does this mean for Perplexity, the ultimate wrapper?

Dave Friedman

Perplexity may be the exception that proves the rule. They have enough of a wedge with search, which ChatGPT isn't (yet?) great at, and they have a clear monetization path. My bet is that one of the hyperscalers other than Google acquires them and folds them into a broader AI suite.

Ryan

You make a legit-sounding critique of the AI-wrapper thesis, but your conclusion hasn't played out yet. "Google and OpenAI will eat AI wrappers": what AI companies have gone under due to this? Cursor is thriving. There are a gazillion "AI copilot for sales," "AI copilot for lawyers" type startups. It's not just VC hype: they are growing and getting customers! All the 2010s dashboard-as-a-service type startups are getting beaten by AI-native versions.

A big reason for the flood of investment into AI wrappers is that, for whatever reason, people are startlingly willing to pay for AI. In the 2000s it was considered a non-starter to even think about paying for software. In the 2010s people would balk at it; vendors had to fight for $5 or $10 per seat per month for B2B SaaS workflows. These days, $20 per person per month? Sure! Hell, why not more? It's got agentic AI!

I just don’t see OpenAI building salesforce and gong and intercom and slack all those random tools where people actually put in their credit cards. There’s plenty of money to go around.

Dave Friedman

Look at the first generation of wrapper companies like Jasper and Copy.ai. Treading water, layoffs, price increases. All because ChatGPT subsumed their copywriting functionality into the base model.

Neo Wang

Would you give a few examples of synthetic platforms that you consider promising?

Neo Wang

An analogy of the right kind of product to build is a boat that floats on the rising water of model capability. https://open.substack.com/pub/wangleineo/p/shocked-speechless-by-ms-build-and?r=3otmv&utm_medium=ios

waldo

Basically, in most verticals you benefit from tight integration with a specialty provider, from in-house fine-tuning, or from both. No applications company can thrive without some in-house ML talent, which is definitely hard to attract if you're a no-name founder.

[Comment removed, May 28]

Dave Friedman

Thanks. I agree that the space is very crowded and that AI is moving very quickly. The LLM startups that survive have one of the following things in common: (1) they own proprietary data, (2) they serve a niche that is too small and too regulated to be subsumed by the model makers, or (3) the workflow they've built generates a flywheel of proprietary data collection that serves as a moat.
