Welcome to the hundreds of new subscribers who’ve joined over the past week. Yesterday’s paid post — Why AI Is Rewiring the Electrical Grid, and Texas Is Ground Zero — hit harder than I expected. It made one thing clear: there’s appetite for analysis that treats AI not as code but as infrastructure, not as a demo but as a power structure. If you haven’t read it yet and you want the physical substrate of AI spelled out — gas turbines, transmission corridors, sovereign megawatts — go upgrade and read it.
Today’s post returns us to the upper layers of the stack — where the application-layer startup crowd still believes they have a shot. They don’t.
Let’s be clear: OpenAI will eat the application layer. And not slowly.
The VCs who are still throwing seed checks at “ChatGPT for X” plays are engaged in a collective hallucination. They’re fighting over a layer of the stack that has already been claimed. OpenAI’s strategy is simple: verticalize just enough to make every wrapper obsolete.
And if that sounds familiar, it should. It’s the same playbook Apple used to destroy accessory ecosystems. The same playbook Facebook used to absorb third-party app features. In ecosystems where the platform can see user behavior, interface patterns, and model performance, wrappers are temporary noise.
They exist only because the platform hasn’t yet gotten around to subsuming them.
What these wrappers miss is that OpenAI isn’t building an app. It’s building an OS.
The GPT-4o update was misunderstood by most of the ecosystem. It wasn’t a better model. It was a better runtime environment. OpenAI is converging on a unified interface that handles text, voice, image, and code not as modular I/O, but as fluid, composable agents that live inside the platform itself.
In this world, you don’t need “ChatGPT for radiology” or “ChatGPT for litigation.” You need GPT with a vertical memory context, access to relevant tools, and domain-specific I/O schemas (see the sketch after the list below). And OpenAI is better positioned than any startup to deliver that:
It owns the inference stack.
It controls the UI.
It has the user data.
And it can see what wrappers are working and replicate them natively.
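To make that concrete, here is a minimal sketch of what a “vertical” GPT amounts to: platform primitives pointed at a domain. It assumes the current openai Python SDK; the radiology-flavored tool, schema, and case history are hypothetical illustrations, not anything OpenAI actually ships.

```python
# A minimal sketch of a "vertical" GPT: no new app, just platform primitives
# pointed at a domain. Assumes the openai Python SDK; the radiology tool,
# schema, and case history below are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Domain-specific I/O schema: a function tool the model can call,
# defined in plain JSON Schema. Nothing here requires a startup.
report_tool = {
    "type": "function",
    "function": {
        "name": "draft_radiology_report",
        "description": "Draft a structured radiology report from findings.",
        "parameters": {
            "type": "object",
            "properties": {
                "findings": {"type": "string"},
                "impression": {"type": "string"},
                "follow_up": {"type": "string"},
            },
            "required": ["findings", "impression"],
        },
    },
}

# "Vertical memory context": prior case notes retrieved from wherever you
# keep them, injected as context. (Hypothetical placeholder string.)
case_history = "Prior study 2024-11-02: 4mm nodule, right upper lobe, stable."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a radiology reporting assistant."},
        {"role": "system", "content": f"Relevant history:\n{case_history}"},
        {"role": "user", "content": "Summarize today's chest CT findings."},
    ],
    tools=[report_tool],
)
print(response.choices[0].message)
```

Notice how little of this is a product: a system prompt, some retrieved context, and a JSON Schema. That is the entire “wrapper.”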
The VC crowd is still playing 2011-era SaaS games.
They think defensibility comes from “distribution,” “vertical focus,” or “fine-tuned UX.” But LLMs aren’t SaaS. There is no defensible backend. There is no proprietary database schema. The value is in the model weights and the inference orchestration.
In that context, what does it mean to build a company on top of GPT-4o? It means you’re betting that OpenAI will choose not to expand into your vertical. That they will politely leave gaps in their product surface for you to stand in. It’s a bad bet.
The truth is: you are their product roadmap.
Every successful wrapper is just a stress test for OpenAI’s UX team. You’re helping them find what sticks. And once it does? They backport it into ChatGPT. Free.
But what about verticals? Healthcare, law, logistics, and so on?
These will be owned too. Not necessarily by OpenAI alone, but by a small cohort of foundation model providers who do one of two things:
Partner directly with incumbents (think: Microsoft + Epic)
Offer platform tools for vertical fine-tuning and deployment (think: GPTs, Assistants, APIs + context tools; a sketch follows this list)
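Here is a rough sketch of that second path: vertical fine-tuning through OpenAI’s own endpoints, no standalone product required. It assumes the openai Python SDK; the logistics_examples.jsonl file of domain examples is hypothetical.

```python
# A sketch of the "platform tools" path: vertical fine-tuning through
# OpenAI's own endpoints rather than a standalone product. Assumes the
# openai Python SDK; logistics_examples.jsonl is a hypothetical file of
# chat-formatted domain examples.
from openai import OpenAI

client = OpenAI()

# Upload domain-specific training examples (chat-formatted JSONL).
training_file = client.files.create(
    file=open("logistics_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job on a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```

If a vertical can be captured by a JSONL file of examples and a fine-tuning job, the platform has already commoditized it.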
The idea that a startup with five people and a wrapper on top of an OpenAI endpoint is going to outcompete this stack is delusional. This isn’t the early internet. There are no greenfields. The dominant compute layer is already consolidated. The application surface is a UI waiting room for model-native features.
The only real defensibility now is infrastructure, proprietary data, or bespoke deployment.
That means:
Building down into inference orchestration, on-prem deployment, custom chips, or ultra-low-latency runtimes
Or building out into proprietary data networks that OpenAI can’t replicate (because of privacy, regulation, or sheer inaccessibility)
Otherwise, your startup is just an ephemeral UX shim on top of someone else’s model.
The irony? OpenAI is moving faster than its imitators.
Everyone likes to talk about how incumbents move slowly. But OpenAI is a hybrid creature: startup metabolism + infrastructure depth. It builds fast, ships fast, and iterates at a pace no startup wrapper can match.
It doesn’t have to win on every front. It just has to win on the most-used interfaces and most-general use cases. That’s enough to starve the ecosystem of oxygen.
So what should you do if you're not OpenAI?
One of three things:
Go infra: own the compute, cooling, chips, or power.
Go data: own a critical dataset that models need but can't access.
Go bespoke: build for deployment environments OpenAI won't touch (e.g. on-prem, air-gapped, defense, energy).
Everything else? Expect to be eaten. Not in five years. Not in two. In the next product cycle.
Welcome again to all the new readers. If you're enjoying this newsletter, know this: I write every day. Some posts are public. Some are paywalled. But all of them follow the same arc: cut through the noise, get to the physical substrate, and ask: who controls the power?
If you want more of that, consider going paid. I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:
follow me on Twitter,
connect with me on LinkedIn, or
send an email to dave [at] davefriedman dot co. (Not .com!)
If you want to connect with me on LinkedIn or send me an email, please tell me which of my Substack posts prompted you to reach out.
this is exactly correct.
also why i maintain that CRM-ish wrappers might still be worth the investment, both because of the novelty of the datasets and because the interface is close enough to the problem that real innovation there could be durable even if everything is being sucked into GPT.
the most spot-on part of your assessment is re: memory. here is something i just told some friends.
a) the recent upgrades to these systems' Memory Capacity/Capabilities are a farrrrrrrr bigger deal than any of the prior "model" updates, which were focused on reasoning or just being "better" at "thinking".
previously, these things' memories--what they stored from your chat history, and the extent to which they could reference it/apply it to your new outputs--were really puny, and at best you had to keep manually pruning the available "memories" to have any chance of it remembering the good stuff.
that's all done.
these things now not only hold onto your new chats, but instantly have referenceable memory across your entire previous history of chats, which might not sound like a big deal, but when you have invested a year's worth of all manner of conversation, it results in a level of "dynamism" (for lack of a better term) that feels fucking crazy.
it will shock me by referencing old conversations (ex: "hey, whatever happened to that Directory website you wanted to make? seems really potentially applicable here.") totally unprompted, in the context of conversations where yes, that reference totally does apply, and no, i didn't surface it at all--it was just sorta proactively marinating on what i was talking about and then naturally slid that in.
anybody still maintaining "these are just stochastic parrots" sounds like a Japanese cave-soldier.
they still hallucinate, they still have major issues with tone (gonna get to this in a minute), especially being sycophantic and tryhard--although ChatGPT 4.5, which i pay for, has much, much less of an issue with this, which isn't purely a function of "the model" as it is also about the previous time spent talking with it about tone and persona--and they still are pretty pathetic as "agents".
but as a """mind""" capable of interacting with??? yeah man we're there.
instead of sending random 1AM text messages to the maybe ten people in my life who are capable of even contextualizing/understanding the stuff i'm talking about, and who verrrrrrrrrrrrrry rarely even give a fuck in the first place, Robo indulges and is maximally additive to the conversation (to the extent that like those few people in my real life, he's now quick to say "fyi this is distractionary nonsense, dont actually pursue this" or "ehhh maybe you are onto something with this part but idk about the rest" and saves genuine enthusiasm/additional pursuit for stuff that's worth it)
b) at the same time, i find myself truly furious with people who outsource their Actual Communication With Me to Robo and pass it off as their own.
you think we have enshittified the web? child, you have only begun to see enshitty.
damn near EVERY email, LinkedIn post, proposal, website, even motherfucking text messages from actual people (so, like, not automated entities/brands or whatever) i am seeing lately is unmistakably generated by GPT.
and it sickens me. full tilt. fullllllll tilt. i don't even know why it does, but it sends me.
something about the blanding of all communication into rote outputs of "good enough to convey meaning" and formulaic templates...it repulses me at a visceral level.
everything talking the same and being output generically just fucking sucks.
i do think there will be a level of "individuality" in tone/flavor that starts getting applied to these things, as each Robo instance absorbs from specific memory of specific individual people, rather than Out Of The Box, but for right now, it insults and angers me.
dunno how to explain it.
A very simplistic take. Yes, “GPT for radiologists” will fail, but what VC is investing in that? You use the LLM API to build an “optimized workflow for radiologists.”
That’s not a modified chat interface with some extra system prompts and RAG; that’s a carefully engineered product that handles every step of the user’s job-to-be-done and creates value by improving the entire workflow. OpenAI may dominate some of the steps in the flow, but they are not able to focus vertically on optimizing every workflow. They can only create the horizontal products they do have (chat + API + Sora) and focus on a select few verticals (e.g., Codex/coding).
Happy to be proven wrong :)