Discussion about this post

Elan Barenholtz, Ph.D.:

Interesting piece, but I don’t quite agree with the idea that the big model providers want to dominate the entire loop. These are large, economically rational companies, and they tend to follow a familiar pattern: focus on doing one thing exceptionally well—in this case, building foundational models—and let startups handle the riskier, more chaotic business of creating and distributing applications.

Right now, offerings like ChatGPT are strategically useful to model providers because they serve as data funnels and feedback loops, helping them refine and evaluate their models. But that doesn’t necessarily mean they’re aiming to be customer-facing in the long term. As the ecosystem matures, it’s likely they’ll recede into the infrastructure layer, just as AWS powers the cloud without owning most of the apps people actually use.

Trying to dominate the entire stack would not only be strategically messy, it would also choke the very ecosystem that drives model usage and innovation. Their long-term win is to become indispensable utilities by powering the AI economy without having to build all of it themselves.

comex:

I think this mixes together two threats of very different severities.

If Anthropic shuts down your API access, you can simply switch to Google or OpenAI. LLMs are highly fungible, and for now there is enough competition to keep serious misbehavior in check.

But if Anthropic copies your product, then you’re in trouble.

