Welcome to the latest edition of Buy the Rumor; Sell the News. We’re closing in on 2,000 subscribers, including institutional allocators, venture capitalists, litigators, senior executives and entrepreneurs. Thank you to all who have subscribed!
In today’s post, I explain how OpenAI’s introduction of native support for agents in ChatGPT threatens many startups building apps on top of ChatGPT.
If you want to connect with me directly, my contact information is at the end of this post.
OpenAI did not merely ship a feature when it released native support for AI agents within ChatGPT. It drew a boundary. This is the clearest demonstration yet of a thesis I’ve argued repeatedly: foundational models are agglomerating engines. They don’t compete with layers above them. They consume them.
Until now, AI-native startups were mostly wrappers around a single LLM API. Their value came from convenience, structure, or design: use GPT-4, but make it do this specific thing for this specific user. They were, in essence, feature bundlers.
Now OpenAI is the feature bundler.
From Tool to Agent
The new ChatGPT agent can browse the web, operate a terminal, call APIs, generate code, summarize PDFs, autofill forms, create slide decks, and deliver finished work artifacts, all from a single prompt. It chooses the tools, sequences the steps, and reports back when it’s done.
This capability fuses what were previously standalone products (Code Interpreter, browsing, and the plugin platform) into a single cognitive-executive loop. No plugins, no add-ons, no third-party apps required.
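If "cognitive-executive loop" sounds abstract, here is a minimal sketch of the pattern in Python. The tool registry and the call_llm stub below are hypothetical illustrations of the loop's shape, not OpenAI's actual implementation:

```python
# A minimal, hypothetical sketch of an agent's cognitive-executive loop.
# The tool registry and call_llm stub are illustrative, not OpenAI's code.

def call_llm(history: str) -> dict:
    # Stub: a real implementation would call a model API here.
    if "browser" in history:
        return {"action": "finish", "summary": "Report drafted from fetched page."}
    return {"action": "browser", "args": {"url": "https://example.com"}}

TOOLS = {
    "browser": lambda args: f"fetched {args['url']}",
    "terminal": lambda args: f"ran {args['command']}",
    "python": lambda args: f"executed {len(args['code'])} chars of code",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model decides which tool to use next, or declares it's done.
        decision = call_llm("\n".join(history))
        if decision["action"] == "finish":
            return decision["summary"]
        result = TOOLS[decision["action"]](decision["args"])
        history.append(f"{decision['action']} -> {result}")
    return "Stopped: step budget exhausted."

print(run_agent("Summarize the latest pricing page"))
```

The point is that tool choice, sequencing, and the decision to stop all live inside the model's loop, which is exactly what used to be a startup's product.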
The wrapper model’s days were numbered. OpenAI just started counting them out loud.
The Innovator’s Dilemma at the Feature Level
The last 18 months saw a gold rush of LLM-enabled verticals. Startups spun up around trip planning, contract summarization, spreadsheet analysis, research bots, and more. Some raised on the strength of a UI and a prompt template. But what’s happening now isn’t competition. It’s default absorption.
The agent can:
Navigate websites and book travel
Read dense PDFs and summarize them into editable PowerPoint decks
Execute code across real files
Clean a spreadsheet, chart it, and email it to your team
And all of it happens inside one persistent thread with memory, notifications, and handoff controls. It’s what Apple once did to third-party utilities by building their features into the OS. It’s Sherlocking by default.
Before vs. After: Stack Compression
This is what the collapse of application layers looks like in real time.
What Survives the Agent?
Not everything gets eaten. But if your startup depends on simple orchestration over open-access data, you’re on borrowed time. Here’s what survives:
1. Proprietary Contexts
If the agent can’t access your data, it can’t do the job. This includes:
On-prem ERP systems
Regulated legal/financial datasets
Medical imaging archives (DICOM)
Industrial sensor telemetry
Owning the data pipe is the only firewall against agent absorption: if the agent can’t reach your data, it can’t replace your workflow.
2. Real-Time and Edge Constraints
ChatGPT agents run in the cloud. Anything that requires low latency, offline operation, or real-world control is safe, for now. Think:
Embedded ML in medical hardware
Factory automation systems (e.g., Rockwell ControlLogix)
Mobile inference on drones or wearables
On-device inference is still outside OpenAI’s gravity well.
3. Human-in-the-Loop Guarantees
Enterprises don’t buy outputs. They buy guarantees. Startups that combine agent power with liability coverage, auditability, or industry-specific compliance (SOC 2, HIPAA, ITAR) have room to operate.
Also defensible: wrappers that specialize in governance, not just UX.
Counterpoint: But What About UX-Rich Workflows?
There’s a legitimate objection here: not all AI-native apps are just wrappers. Some, like Notion, Figma, or Linear, embed LLMs into multiplayer, stateful, visually rich canvases that a chat interface can’t replace.
Fair. But OpenAI doesn’t need to replicate the UI. It just needs to pipe outputs into it. As soon as the agent can write to those apps via APIs or plugins, the cognitive lift shifts upstream again. ChatGPT doesn’t need to look like Figma. It just needs to do the hard thinking before handing off to Figma.
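As a sketch of that handoff, assume a downstream app exposing a hypothetical REST endpoint. The URL and payload schema below are invented for illustration; this is not a real Figma or Notion API:

```python
import requests  # pip install requests

# Hypothetical endpoint for a canvas/design app; the URL and payload
# schema are invented for illustration, not any real product's API.
CANVAS_API = "https://api.example-canvas.app/v1/documents"

def hand_off(artifact: dict, api_token: str) -> str:
    """Push an agent-produced artifact (outline, wireframe spec, copy)
    into the downstream app, where humans refine it visually."""
    resp = requests.post(
        CANVAS_API,
        json=artifact,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["document_url"]

# The agent did the hard thinking; the canvas app receives a finished draft.
artifact = {
    "title": "Q3 onboarding flow",
    "sections": ["welcome screen", "profile setup", "first task"],
}
# doc_url = hand_off(artifact, api_token="...")  # link a human opens in the app
```

Once a connector like this exists, the UX-rich app keeps its canvas but loses the cognition.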
Founder Gut-Check
Ask yourself:
Can ChatGPT access 80% of the data your product needs?
Could a 200-line Python script replicate your core workflow? (A sketch follows below.)
Could OpenAI ship your product as a feature flag next quarter?
If you answer “yes” to two or more, you’re standing on trapdoor unit economics.
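To make that second question concrete, here is a roughly 20-line sketch of a typical wrapper's core workflow: a contract summarizer, assuming an OpenAI API key and the pypdf package, with a placeholder model name and prompt. If your product reduces to something like this, the agent already does it:

```python
from openai import OpenAI      # pip install openai
from pypdf import PdfReader    # pip install pypdf

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_contract(pdf_path: str) -> str:
    """The entire 'product' of many wrapper startups: extract text,
    wrap it in a prompt template, return the model's answer."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize contracts for a general counsel."},
            {"role": "user", "content": text[:100_000]},  # naive truncation
        ],
    )
    return response.choices[0].message.content

print(summarize_contract("msa.pdf"))
```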
Strategic Playbooks That Still Work
Sensor-to-model verticals: Own the entire data pipeline, especially where retrieval is hard (e.g., mining, manufacturing, defense).
Compliance-first agents: Wrap OpenAI in a safety/compliance shell purpose-built for an industry.
Multi-agent orchestration: Treat ChatGPT as just one actor among many: route tasks across models, check costs, verify outputs (a sketch of this routing pattern follows after this list).
Connector marketplaces: The first ones to build widely used connectors may gain persistent power (think Zapier in 2016, not 2024).
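Here is a minimal sketch of that routing pattern. The model names, prices, and capability tags are invented placeholders; a real router would also verify outputs and track spend:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # invented placeholder prices
    capabilities: set

# Hypothetical fleet; in practice these would be real API-backed clients.
FLEET = [
    Model("frontier-large", 0.0150, {"code", "reasoning", "vision"}),
    Model("mid-tier", 0.0020, {"code", "reasoning"}),
    Model("local-small", 0.0001, {"summarize"}),
]

def route(task_capability: str, est_tokens: int) -> Model:
    """Pick the cheapest model that can handle the task."""
    candidates = [m for m in FLEET if task_capability in m.capabilities]
    if not candidates:
        raise ValueError(f"no model supports {task_capability!r}")
    choice = min(candidates, key=lambda m: m.cost_per_1k_tokens)
    print(f"{task_capability}: {choice.name}, "
          f"~${choice.cost_per_1k_tokens * est_tokens / 1000:.4f}")
    return choice

route("summarize", est_tokens=8000)   # -> local-small
route("reasoning", est_tokens=2000)   # -> mid-tier
```

The design point: when ChatGPT is one interchangeable actor in a fleet, no single vendor owns your workflow.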
Near-Term Forecast (Speculative but Likely)
ChatGPT agent becomes the default UI: Plus and Pro users start delegating tasks, not just prompting.
Connector bazaar goes full App Store: Expect OpenAI to introduce revenue-share policies soon.
Antitrust attention ramps: EU and US regulators pivot from search monopoly to knowledge-work monopoly.
Startup mortality spikes: Series A investors will start asking, “Why can’t ChatGPT just do this?”
Closing Thought
Foundational models don’t disrupt. They absorb. They don’t outcompete vertical apps. They swallow them, one API at a time. Your job now is to build what the agent can’t see, can’t reach, or can’t be liable for. The agent doesn’t need your startup. But your startup might need the agent.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:
I am making agents build fully functioning apps in 2 hours that used to take me a year. Everyone, not just the wrappers, is in trouble. I wish I could paste pictures here, but I just built a Monday.com for finance types in about 2 hours. Here's the LinkedIn URL: https://www.linkedin.com/feed/update/urn:li:activity:7352176010357846016/
Since the MCP protocol opposes this by letting people swap out foundation models, model vendors might apply Embrace and Extend to MCP to lock agents to their model. That wouldn't be hard, since MCP's function-call parameter inference is already only weakly defined. Best would be for open foundation models to band together to support a common, capable profile of MCP. The best outcome for the world would be for foundation models to become smaller and more reliant on MCP, and ideally run locally, since agents are less expensive, don't suffer from hallucinations, etc.