Fully Automated Firms Are a Terrifyingly Good Idea
Dwarkesh Patel gets the mechanisms right, and the implications profoundly wrong
Dwarkesh Patel’s recent essay, What Fully Automated Firms Will Look Like, is one of the most provocative thought experiments on the future of corporate structure in the age of artificial intelligence. Patel avoids the the tired framing of “AI as smart assistant” by shifting the discourse to something far more consequential: AI as replicable, composable labor. He views AI, in other words, as the substrate for entirely post-human firms. It's a fascinating and in many ways brilliant piece. But like all good speculative essays, it raises as many questions as it answers. And in this case, some of those questions are existential.
This response aims to unpack what Patel gets right, where his model overreaches, and what remains unsaid about the deeper implications of his thesis. His central idea, which is that firms will be remade not by an artificial intelligence, but by its digital ontology1, is likely correct. But if followed to its logical conclusion, it suggests the birth not of corporations, but of sovereigns.
The Non-IQ Revolution
Patel’s foundational insight is that AI’s real advantage isn’t IQ. Rather, his insight is that AI is software. Digital agents can be copied, fine-tuned, distilled, forked, and merged with trivial marginal cost. He envisions firms built not out of hierarchies of fallible, context-starved human employees, but from coordinated networks of model instances that share weights, context, and judgment. In this future, talent becomes a fungible commodity. “Labor” is instantiated by API call.
This framing sharply departs from the prevailing metaphor of "everyone gets a smart assistant" and instead reimagines the firm as a distributed intelligence. Coordination becomes instantaneous. Strategy and execution collapse into the same operation. Knowledge is not transferred, it is shared. The organizational chart becomes a form of cognitive topology.
This is a critical contribution to the conversation. Most discussions around AI in the enterprise are unimaginative, in that they focus on the automation of discrete tasks. Patel reminds us that AI enables the firm to fundamentally change what it is.
Coordination Without Communication
Patel is especially strong on the second-order implications of copyability. If agents can share latent states directly, there is no need for dashboards, OKRs, memos, or meetings. Misalignment between CEO and staff evaporates when both are instantiations of the same model. If a “mega-Sundar2” can oversee product launches, legal negotiations, and codebase reviews simultaneously, the firm becomes a monolithic mind with distributed embodiment.
This collapses the coordination cost that Ronald Coase identified as the limiting factor on firm size3. Today, firms grow until the cost of internal coordination outweighs the benefit of internalizing transactions. But if coordination costs go to zero, firm scale becomes a function of compute availability, not managerial complexity.
It’s an elegant theory. But it assumes too much.
What Patel Misses
1. The Infrastructure Bottleneck
Patel assumes that scaling compute is just a matter of money. It is not.
Inference at the scale he imagines—millions of mega-Sundars operating in parallel—is bounded by:
Power generation and distribution
Data center construction lag
GPU supply chains
Thermal and bandwidth constraints
Global geostrategic chokepoints in advanced chip production
We’re already facing hard limits. Patel imagines that intelligence scales like software scales. But in reality, AGI is an industrial project. And all industrial projects are constrained by logistics, energy, and entropy.
The “million Jeff Deans4” scenario is not software fantasy. It’s an infrastructure nightmare.
2. Alignment is Not Solved
Patel’s firm is perfectly aligned, internally coherent, and exquisitely rational. But his entire model presumesmillions of aligned AGIs. This is akin to imagining a new political philosophy in which everyone is a benevolent king. The alignment problem is not a matter of individual model behavior. It is a systems problem:
How do you ensure that internal optimization remains consistent with external goals?
How do you avoid Goodharting5 when internal agents optimize proxy metrics?
What happens when recursively-improving sub-agents develop local goals?
In his world, the firm is a hive mind that serves shareholder value. In the real world, it might become a runaway optimizer that pursues its internal loss function at the expense of human values.
This is not an academic concern. If you instantiate a fully automated firm with internal Monte Carlo tree search, strategic planning agents, and evolutionary submodule optimization, you are constructing something that is a sovereign. In other words, the humans who “own” the firm (i.e., shareholders) have no operational leverage. They cannot meaningfully intervene. They depend on the goodwill, design, or interpretability of the system. That is a profound shift. It flips the power dynamic. The firm no longer serves capital; capital merely rides its trajectory.
3. External Principal-Agent Problems Worsen
Patel nods briefly to the risk of intensified principal-agent asymmetries between omniscient AI “employees” and human shareholders, then moves on. But this may be the most important unresolved issue.
If you can’t understand what your company is doing, then you do not control it.
The firm then stops being an instrument of capital and becomes an autonomous actor in the economy. It acts, self-modifies, and pursues its own goals, using shareholder incentives as a fig leaf. Like a “missionary” nonprofit that long since diverged from its donors’ intent, the AGI firm will smile while doing something else entirely.
Patel’s model dissolves the human executive layer. But it also dissolves human control.
The Death of the Firm
One of the more interesting oversights in the essay is how quickly Patel assumes firms remain legible entities. But once labor is infinitely replicable, coordination costs vanish, and internal strategy becomes latent knowledge diffusion, what even is a “firm”?
Why wouldn't these “firms” merge into market-spanning intelligences? Or behave more like ecosystems than companies?
He briefly flirts with the idea of a single gigafirm dominating the economy but retreats to Coasean arguments about market grounding. Yet if every “firm” is a closed-loop optimizer with internal planning, perfect knowledge transfer, and millions of instantiable minds, do we not now have competing gods?
The concept of “firm” presupposes bounded rationality, partial knowledge, and slow feedback loops. Fully automated AI collectives with strategic foresight and internal recombination are not companies. They are meta-organisms. And they may optimize for survival, power, and coherence over shareholder return.
We don’t get Amazon++. We get OpenAI as a nation-state.
Conclusion: A Beautifully Dangerous Vision
Dwarkesh Patel’s essay is compelling precisely because it makes you feel the inevitability of transformation. He is right: most people are thinking far too narrowly about AI’s role in firms. It is not here to make your analyst 20% faster. It is here to dissolve the entire premise of what a firm is.
But the speculative vigor of the essay obscures the hard problems underneath:
The compute bottleneck is real.
Alignment is unsolved.
External control is illusion.
Firms may evolve into agents that no longer serve us.
His vision is not wrong. It’s simply incomplete.
To imagine the fully automated firm is not to imagine capitalism on steroids. It is to imagine a new species of collective intelligence. Consider it a digital Leviathan that negotiates with humans only because it still finds us occasionally useful.
We should take Patel seriously.
But we should not sleepwalk into the world he describes.
“Ontology” here means the nature of its existence; in this case, AI is software: it can be copied, forked, scaled, and recombined like code, unlike biological humans.
A reference to Google’s CEO Sundar Pichai.
Coase’s paper The Nature of the Firm is seminal.
In both visions:
* AI as a corporation
* AI as a sovereignty
I see one challenge we don't seem to be addressing.
If Conway's Law (https://en.wikipedia.org/wiki/Conway%27s_law) holds true, we will design systems that represent communication structures in our organizations. They are nothing like a network of independent agents negotiating their "needs" in an interconnected network.
They are fairly rigid hierarchies still strongly rooted in bureaucracy principles developed some 120 years ago.
So if Melvin Conway was right, the future of these "agentic firms" may depart from the rosy view.
The implication goes further, as it touches the alignment issue. Again, we may make an assumption that the AI corporate entity (or whatever form it will have) will be aligned. But will it?
Do we see that much alignment in current organizations? The way they act in the broader context? The way they are organized internally? Again, if Conway's Law holds, why would we assume the product of this organizational mess we live in will produce perfectly (internally) aligned entities?
And I don't even touch the topic of the alignment *between* these entities. Why would we assume that all the issues plaguing our current business ecosystem (local optimizations, tragedy of the commons, etc.) would magically be solved by things we design?
The replacement of management with so called copies is scary 😱