If AGI Is Imminent, Why Are There 10,000 AI Startups?
If AGI is imminent, it will subsume all the AI startups, yet VCs keep throwing money at them
If AGI is coming, why are there thousands of AI startups?
This question, which arose from a brief Twitter exchange, cuts to a deep contradiction in the current AI investment landscape. VCs claim to believe that AGI is imminent, yet they continue to fund an ecosystem of startups whose business models presume it is not. This is not a trivial inconsistency. It is a revealing one, exposing the limits of our collective understanding of what AGI actually means and what its arrival would logically imply.
The core insight is simple: if AGI truly arrived, it would render the vast majority of existing AI startups obsolete. Most of these companies are built on the premise of narrow AI: specialized systems tuned for specific domains like legal tech, financial modeling, image generation, or code completion. But a true AGI, by definition, would subsume these capabilities and more. The only plausible survivors in such a world would be the foundation model builders themselves: entities like OpenAI, Anthropic, the hyperscalers, and perhaps a few others. Everyone else would either pivot to become an application layer on top of AGI or vanish.
So why are so few people behaving as though AGI is imminent? One answer is that they don’t really believe it. Another is that they believe it in the abstract, but haven’t reasoned through its implications. Either way, this belief-behavior gap is not just a quirk of investor psychology. It is a window into a deeper epistemological problem about how AGI is discussed and understood.
Here, we can adopt David Deutsch’s theory of explanation, which builds on Popper. A good explanation, Deutsch argues, is one that is hard to vary while still accounting for what it purports to explain. In contrast, bad explanations are protean: they can be easily reshaped to fit any outcome, and thus explain nothing. Viewed through that lens, most AGI discourse fails to qualify as explanatory. AGI is variously described as an inevitable outcome of scale, a mysterious emergent phenomenon, or a quasi-religious singularity event. But these descriptions are not theories in the scientific sense. They do not rule out specific outcomes. They are not structured in a way that makes them vulnerable to criticism or error correction. And crucially, they do not cohere with the observable behavior of the very people who claim to believe in them.
This is where the contradiction becomes epistemically interesting. Investors are, in many ways, the ultimate pragmatists. They may use optimistic language to attract capital, but their allocation decisions are grounded in expected value calculations. The fact that they continue to fund startups that would be annihilated by AGI suggests that, at some level, they do not expect AGI to arrive soon. Alternatively, they may believe it will arrive, but only in some constrained form that doesn’t threaten existing business models. But this is a tacit admission that what they call AGI is not what the term originally meant.
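To make the expected-value point concrete, here is a minimal sketch in Python. The numbers are purely illustrative assumptions, not figures from the essay: a hypothetical $10M stake, a $60M exit if AGI does not arrive within the fund's horizon, and a payoff of roughly zero if it does. The point is only that the price a VC is willing to pay implies a ceiling on how probable they can really think imminent AGI is.

```python
# Minimal sketch of the revealed-preference argument. All numbers are
# illustrative assumptions, not figures from the essay.

def expected_value(p_agi_soon: float, payoff_if_agi: float, payoff_if_no_agi: float) -> float:
    """Expected payoff of a narrow-AI startup under a stated belief about imminent AGI."""
    return p_agi_soon * payoff_if_agi + (1.0 - p_agi_soon) * payoff_if_no_agi

price = 10.0             # hypothetical cost of the stake, in $M
payoff_if_agi = 0.0      # the essay's premise: true AGI wipes out the narrow-AI business
payoff_if_no_agi = 60.0  # hypothetical exit value if AGI does not arrive in time

for p in (0.9, 0.5, 0.1):
    ev = expected_value(p, payoff_if_agi, payoff_if_no_agi)
    verdict = "invest" if ev > price else "pass"
    print(f"P(AGI soon) = {p:.0%}: expected value ${ev:.0f}M vs price ${price:.0f}M -> {verdict}")

# A VC who writes the check anyway is revealing either a low P(AGI soon)
# or a belief that whatever arrives will leave the startup's payoff intact.
```

Under these assumed numbers, the investment only clears its price when the probability of imminent AGI is well below the levels implied by the public rhetoric, which is the revealed-preference gap the essay describes.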
In the Twitter thread referenced earlier in this essay, Gary Rivlin raised an important point: does AGI necessarily lead to a single “God model,” or will it manifest as a multitude of domain-specific intelligences tuned to different tasks? This distinction is often elided in public discourse. But it’s not a trivial technicality; it goes to the heart of the definition. If AGI is just a collection of narrow AIs, then it is not general. If it is truly general, then it implies a kind of cognitive universality that, by definition, outcompetes and obsoletes task-specific systems.
The ambiguity of benchmarks only worsens the problem. What does it mean to say that “AGI is here”? Is it a system that passes some cognitive test? One that can get a job, raise a child, invent a theory? The definition keeps shifting, which makes it immune to criticism. This is precisely what Deutsch warns against: theories that are easy to vary are not explanatory. They are rhetorical placeholders, vessels for confusion masquerading as insight.
Deutsch emphasizes that explanation, not prediction, is the heart of science. A theory that makes vague predictions and resists falsification is not a theory at all. It is not that such AGI theories are wrong; it is that they do not yet rise to the level of being wrong. They are simply untestable, unrefined, and unmoored from criticism.
So what would it mean to believe in AGI in a Deutschean sense? It would mean adopting a theory that explains what AGI is, what it isn’t, how we would know when it arrives, and what the consequences would be. It would mean making predictions that are bold, precise, and risky: predictions that could be wrong, and whose failure would tell us something important. It would also mean behaving as though those predictions are true. In other words, it would mean not funding 10,000 startups whose core assumption is that AGI won’t render them obsolete.
Until that happens, the AGI discourse will remain a mixture of hype, confusion, and quasi-religious fervor. The real work of understanding AGI has yet to begin. It must start with defining our terms, clarifying our assumptions, and subjecting our beliefs to rigorous criticism. Otherwise, we are not doing science. We are just telling stories.
Well reasoned. I believe relatively few VCs think AGI is imminent. The VCs touting AGI as imminent own stock in LLM shops like OpenAI and Anthropic, and the founder-CEOs of those shops are currently in fundraising mode and prone to hype.
To sort of sidestep your main philosophical thrust: if an investor thinks that the odds of an all-out thermonuclear war have *increased*, it would not make sense to begin investing more for the short term. You'd want to make longer-term investments. If there's a nuclear holocaust in six months, who cares how your investments do? But if you invest for the long term and there isn't a nuclear war, you're in a better position. Go long. I believe there was some evidence of this behavior in the real world during the Cold War.
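A minimal sketch of that logic, in Python with purely hypothetical numbers: if the payoff conditional on catastrophe is the same (and irrelevant) whichever portfolio you hold, then raising the probability of war never changes which portfolio wins in expectation; the comparison is decided entirely by the branch in which there is still a world to invest in.

```python
# Sketch of the "go long" logic (hypothetical numbers). The payoff conditional
# on catastrophe is assumed to be the same zero regardless of portfolio, so only
# the no-catastrophe branch affects the comparison.

def expected_payoff(p_war: float, payoff_if_no_war: float, payoff_if_war: float = 0.0) -> float:
    return p_war * payoff_if_war + (1.0 - p_war) * payoff_if_no_war

short_term_if_no_war = 1.2  # hypothetical modest near-term return multiple
long_term_if_no_war = 2.0   # hypothetical larger long-horizon return multiple

for p_war in (0.05, 0.50):
    short = expected_payoff(p_war, short_term_if_no_war)
    long_ = expected_payoff(p_war, long_term_if_no_war)
    print(f"P(war) = {p_war:.0%}: short-term EV {short:.2f}x, long-term EV {long_:.2f}x -> go long")
```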
Of course, in the case of AI startups, it's not unreasonable to hope your company gets bought by NVIDIA or Amazon during the two- or three-year slow-takeoff period. Then you have that capital to spend, which is fun. Or it may offer some advantage during the coming period of rapid change and growth.