Large Language Models Are Not Search Engines
Stochastic systems, vanishing guarantees, and why the old SEO mindset won’t survive the AI transition
Welcome to the latest edition of Buy the Rumor; Sell the News. In today’s post, I take on a16z’s claim that Generative Engine Optimization (GEO) is the new SEO.
If you like what you read here, consider subscribing, and if you’re already subscribed, consider upgrading to a paid subscription. If you want to connect with me directly, my contact information is at the end of this post.
It was inevitable that once large language models started rewriting the interface to information, venture capitalists and marketing gurus would trot out the same tired metaphor they’ve relied on for decades: a new channel to “optimize.” A16z’s recent celebration of Generative Engine Optimization is a near-perfect specimen. It captures the full suite of illusions that infect this thinking: the idea that stochastic systems can be gamed like deterministic search indices; that marketers, of all people, are prepared to reason in distributions; and that economic incentives will magically align around them.
In short, it’s a narrative perfectly calibrated to raise money, sell slide decks, and convince brand CMOs to panic about missing the next wave, while being completely disconnected from how either LLMs or marketing actually function.
A shallow analogy to SEO, with none of the actual structure
The entire GEO hype rests on an alluring but profoundly flawed analogy: that LLMs are just a new kind of search engine, so naturally a new kind of SEO must emerge. But this is a category error.

SEO worked because Google built a deterministic index. It crawled billions of pages, scored them on transparent, if complex, features like backlinks and keywords, and returned stable rankings. You could run a content audit, change your site architecture, earn new links, and predictably watch your ranking improve. The ecosystem had direct causal wiring: your levers fed Google’s algorithm, and the algorithm returned your rankings. Marketers loved it because it was deterministic and largely explainable.
LLMs are not indexes. They are statistical models of language, trained on enormous corpora to predict token sequences. There is no top 10 list inside GPT-4 or Claude. There is only a tangled web of parameter weights encoding the probability that, given a prompt, certain tokens will follow. Trying to optimize your brand’s presence in that is like trying to guarantee your reflection in a kaleidoscope.
The a16z piece talks earnestly about “getting into the model’s mind,” as though it were a new kind of SERP. In reality, it’s a high-dimensional stochastic field. Your brand mention is not a ranked slot you can buy or earn. It’s a shifting cloud of probabilities that may or may not collapse your way when the model generates a response.
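To make that concrete, here is a minimal sketch of what “brand presence” actually is inside a language model: a probability the model assigns to the brand’s tokens given a prompt, not a slot in a ranking. GPT-2 is used purely as a small, openly available stand-in for whatever hosted model you actually care about, and the prompt and brand names are illustrative, not anything a16z or the labs publish.

```python
# Minimal sketch: a brand "mention" is a probability mass over next tokens,
# not a ranked position. GPT-2, the prompt, and the brand names are stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The best winter jacket brand is"
candidates = [" Canada", " Patagonia", " Columbia"]  # leading space matters for GPT-2's BPE

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # logits for the next token position
probs = torch.softmax(logits, dim=-1)        # full distribution over the vocabulary

for name in candidates:
    first_id = tokenizer.encode(name)[0]     # first BPE token of the name
    print(f"P(first token of {name!r} | prompt) = {probs[first_id].item():.4f}")
```

There is no slot to win here: the numbers shift with the prompt, the sampling settings, and the next training run.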
GEO demands a stochastic mindset, which marketers simply do not have
The second glaring flaw is epistemic. GEO only makes sense if marketing teams are ready to abandon deterministic thinking and start reasoning like probabilistic engineers. That means asking questions like the following; a rough sketch of that kind of measurement follows the list:
What is the expected mention frequency of our brand over 10,000 sampled completions of diverse prompts?
How might subtle corpus changes shift token likelihoods in future retrains?
How do temperature, top-p sampling, and prompt framing alter our probabilistic surface area across different LLMs?
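For concreteness, here is a rough sketch of that first measurement under stated assumptions: sample a batch of completions for a few prompt phrasings and sampling temperatures, then count how often the brand surfaces at all. GPT-2 again stands in for whichever hosted model you would actually measure, and the prompts, brand, and sample sizes are made up for illustration.

```python
# Sketch of a Monte Carlo estimate of brand mention frequency across sampled
# completions. Model, prompts, brand, and sample counts are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

brand = "Canada Goose"
prompt_variants = [
    "What is a good brand for a warm winter jacket?",
    "Recommend a parka for extreme cold.",
    "Which winter coat should I buy?",
]

def mention_rate(prompt: str, n: int = 50, temperature: float = 0.8, top_p: float = 0.95) -> float:
    """Fraction of n sampled completions that mention the brand at least once."""
    outputs = generator(
        prompt,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        max_new_tokens=60,
        num_return_sequences=n,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    return sum(brand.lower() in o["generated_text"].lower() for o in outputs) / n

for prompt in prompt_variants:
    for temp in (0.5, 1.0):
        print(f"{prompt!r} @ T={temp}: mention rate = {mention_rate(prompt, temperature=temp):.2%}")
```

Even this toy version makes the point: what you get back is an estimate with variance, tied to a specific prompt distribution and sampling configuration, not a ranking you can hold onto.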
Of course, marketers don’t think this way. They have spent decades conditioning themselves to think about deterministic KPIs: impressions, click-through rates, conversion funnels, lift from A/B splits. Their entire budgeting logic is built on predictability and direct causality. Even SEO, with its complex black boxes, delivered a comforting stability: rankings updated every few weeks, and you could draw neat attribution lines from spend to outcome.
Ask a CMO to spend $500K on a stochastic intervention that might slightly raise the latent probability of your brand appearing in random samples across a black-box LLM, with no guarantee of persistence after the next model retrain, and see how fast they run for the door.
The economic incentives don’t line up either
Then there’s the economic dimension. Google wanted to drive clicks out to websites because it monetized the open web through search ads on its own results pages and AdSense display ads on publishers’ sites. Keeping the ecosystem open fed its ad business. That’s why SEO existed in the first place: Google had an incentive to crawl and rank the entire web.
By contrast, most foundation model providers are building closed subscription ecosystems. They monetize by keeping users engaged inside the chat or completion window. The best user experience, from their perspective, is a fully contained synthetic answer: no external links, no outbound traffic, no leakage of engagement.
The idea that these platforms will voluntarily turn themselves into pipelines driving traffic to third-party brands is wishful at best. If any paid opportunity emerges, it won’t look like SEO. It will look like a private negotiation for premium placements, a new form of native advertising hardwired directly into the model’s RAG or preference layers. That is an ad buy, not an organic optimization. The economic landscape that made SEO worth gaming simply does not exist here.
A probabilistic mess with no stable ground to stand on
What’s more, the entire underlying substrate is profoundly unstable. Even minor prompt rephrasings can dramatically alter which brands get mentioned. Change the context window by 10 tokens, or adjust the system prompt’s tone, and the generation may collapse onto entirely different parts of the model’s probability distribution.
Worse, when foundation models do their next major training run, typically on an enormous shuffled corpus with new filtering heuristics, all your painstaking work to “embed your brand in the model’s mind” can vanish overnight. This is not like waiting for a Google algorithm update and tweaking your backlinks. It’s more like playing dice with 175 billion weighted faces every time.
The likely actual future: not GEO, but direct partnerships and owned channels
None of this means brands should ignore the LLM transition. But it means the logical paths forward look very different from the GEO mirage.
Brands will build their own RAG assistants. If you’re Canada Goose, it’s safer to deploy your own chatbot that fetches from your validated corpus, so you control every mention (a minimal sketch of that pattern follows this list).
Foundation model providers will cut direct content deals. Expect Anthropic, OpenAI, and others to negotiate brand placements and certified content streams: essentially sponsored snippets at the model level.
Traditional brand equity still matters. If people talk about your brand in organic, culturally relevant ways, those patterns enter the textual corpus naturally. The best optimization is still making something people want to talk about, not hacking token probabilities.
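To ground the first item above, here is a minimal sketch of the owned-assistant pattern, assuming a toy corpus and plain TF-IDF retrieval rather than anything a real brand would ship: retrieve only from the brand’s own validated documents, then hand those passages to whichever model you deploy for the final answer. The documents, query, and retrieval choices are all illustrative.

```python
# Sketch of an owned RAG assistant: retrieval from a brand-controlled corpus,
# with the grounded prompt handed off to whatever LLM the brand deploys.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# The brand's validated corpus: product pages, care guides, policies, etc. (toy examples)
documents = [
    "Our expedition parka is rated for temperatures down to -30C.",
    "All down fill is responsibly sourced and certified.",
    "Returns are accepted within 30 days with proof of purchase.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query (TF-IDF cosine)."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; generation is left to whichever model you use."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using only the passages below.\n"
        f"Passages:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

print(build_prompt("How cold can I wear the parka in?"))
```

The point of the design is control: every passage the assistant can cite comes from the brand’s own corpus, so there is no latent probability to coax and no retrain to fear.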
The upshot: a VC daydream built on broken mental models
When you pull all this together, GEO is a largely fanciful construct. It tries to map deterministic mental models onto stochastic systems. It imagines marketers, the most causality-obsessed operators in the commercial world, suddenly embracing expected value over token distributions. It assumes economic incentives will emerge that mirror Google’s early openness, despite structural forces that lean toward closed-loop monetization.
In other words, it’s a classic case of the venture ecosystem trying to fit a new phenomenon into old frameworks so they can fund familiar-looking tooling startups, complete with dashboards and retainer-friendly jargon. But language models are not search engines. And stochastic systems do not submit to deterministic playbooks.
A wiser strategy would be to accept these new systems on their own probabilistic terms. That means investing in owned assistants, cultivating genuine cultural relevance, and preparing for direct negotiated integrations, not pouring capital into a world that no longer exists. In the end, the real lesson of GEO may be how quickly investors reach for comforting analogies, even when the ground beneath them has shifted beyond recognition.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: