We’ve reached the part of the AI curve where intelligence is cheap and everywhere, but discernment is not. The tools are general, the outputs are infinite, and the leverage lies entirely in what you choose to do with them.
Jack Clark’s recent conversation[1] with Tyler Cowen provides a remarkably clarifying lens into where economic value is migrating in the AI era. Much of what Clark articulates aligns with what I’ve argued repeatedly: the real surplus doesn’t accrue to the models themselves, but to those who orchestrate them, frame them, and wrap them in human taste, trust, and distribution. Intelligence is becoming ambient and cheap; the question is who gets paid, and why.
The Collapse of Model-Based Moats
Clark notes that while systems like Claude can generate sales copy or perform therapeutic dialogue, people still prefer to transact with people. Claude can write a pitch, but it doesn’t yet sell Claude. This distinction is crucial: the economic surplus doesn’t go to the model that can do the thing; it goes to the interface that frames the thing in a human-relatable way.
This echoes my own stance that LLM capability is rapidly becoming commoditized. What remains scarce is the wrapper: the brand, distribution, worldview, and ability to slot the tool into an existing high-trust channel. Models are good; orchestration is where the money is. (Don’t confuse this with so-called “GPT wrappers”: those may wrap the underlying model, but they don’t necessarily have brand, distribution, worldview, or the ability to slot into an existing high-trust channel. In many cases value will accrue to the foundation-model trainers themselves: Anthropic, OpenAI, and their peers.)
Infrastructure as Destiny
Clark’s comments on electricity and turbine components are a sharp reminder that value often accrues beneath the shiny software layer. While the techno-optimists chase the next leap in LLM capability, Clark and I both see the real choke points forming in energy, compute, and real estate. If inference cost becomes the gating constraint, as seems likely, then whoever controls the power grid, land near substations, and cooling infrastructure inherits the real leverage.
This is the same reason I’ve compared OpenAI and its peers to utilities, not SaaS companies. When everyone has access to comparable models, the differentiator isn’t intellectual, but material.
Human Taste as Scarce Resource
Clark emphasizes that even in a world flooded with synthetic content, people still crave a kernel of humanity. Substack, Patreon, and bespoke creator economies persist not in spite of AI, but because of it. The more the world is flooded with derivative noise, the more valuable well-calibrated taste becomes.
This maps directly to my own approach: AI destroys generic content economics, but elevates personality, curation, and framing. These are not merely artistic flourishes. They are the new pricing power.
The Rise of the Manager-Nerd
Clark predicts the ascendancy of “manager-nerds”: people who don’t build AI, but orchestrate fleets of agents. These are systems thinkers, synthesizers, and operational polymaths. Startups with five people and 100 agents will outperform 50-person companies with poorly orchestrated talent.
This again aligns with my framing: the edge is not brute force intelligence but judgment, composition, and narrative control. Value accrues not to those who can generate 1,000 answers but to those who know which answer to use, when, and how to present it.
Differentiation at the Edges
Clark rightly observes that LLM providers will differentiate not by raw IQ but by specialist and edge behavior: poetic tone here, scientific taste there. This suggests that even in a world dominated by a handful of foundation models, the frontier is human differentiation layered on top.
It’s an unbundling: general-purpose intelligence at the core, human-guided orchestration at the edge. In that world, what you know matters less than how you curate, how you contextualize, and how you distribute.
Big Firms Will Be Slow, and That’s the Opportunity
One of Clark’s less-discussed but highly consequential points is that large, incumbent corporations may adopt transformative AI more slowly than smaller actors or even government agencies. This is consistent with what I’ve said for months: enterprise AI adoption will be far slower, messier, and more politically constrained than most forecasts assume.
Clark’s reasoning is revealing: the inertia of existing workflows, internal politics, and fear of liability all breed resistance. Meanwhile, small actors (startups, power users, rogue departments) move first, driven by pain, not policy. This delay in adoption is not just friction; it is arbitrage. The leverage lies with those who can build lightweight wrappers that plug into enterprise processes without requiring an overhaul. First movers who understand both AI and org politics will capture the value large firms are too slow to access.
Conclusion: Where to Build, Where to Invest
For those building in this space or allocating capital, Clark’s remarks reaffirm the framework:
Bet on infrastructure, not just algorithms.
Bet on taste and trust, not just throughput.
Bet on orchestration layers that abstract over AI, not AI itself.
Bet against large corporations adopting AI quickly or cleanly.
The model is the commodity. The interface is the moat.
The AI age rewards not just intelligence, but judgment. Not just scale, but taste. Not just agents, but orchestrators.
If everyone has a god, the only thing left to monetize is the ritual.
[1] For this post, I pasted the transcript of their conversation into ChatGPT and extracted the points discussed here from its parsing. This is how I consume virtually all podcast content now; I do not have the brain required to listen to hours-long conversations.