The Meeker Report: Capitalism's AI Gospel and Its Blind Spots
Mary Meeker's 2025 AI Trends Report charts a future of exponential adoption and investor euphoria. But what isn't she saying?
Mary Meeker just dropped her 2025 AI Trends report, a 300+ slide compendium of hockey-stick charts, CapEx surges, and breathless adoption curves. If you wanted a single document that encapsulates Silicon Valley's consensus AI worldview—growth is good, more is better, and China is a threat—you've found your catechism.
But like most techno-optimist narratives, it's what the report doesn't say that matters most. The problem isn’t Meeker’s exuberance about growth. It’s her silence about the material preconditions of that growth. AI adoption curves don’t ascend in a vacuum. They run on electrons, water, and land. AI doesn’t scale on vibes. It scales on grid interconnects, substation backlogs, and megawatts-per-megachip.
What the Report Gets Right
AI Is a Compounder on Internet Rails
Meeker's sharpest insight is structural: AI isn't a new internet. It's a layer on top of the existing digital substrate. Adoption is fast not because AI is magical, but because the infrastructure for instant scale already exists. ChatGPT's global user growth outpaces the internet itself because it rides the rails of smartphones, APIs, and cloud distribution.
CapEx and Developer Growth Are Real
The developer ecosystems around NVIDIA and Google are exploding—6x and 5x YoY, respectively. CapEx among the "Big Six" (Apple, Microsoft, Meta, Alphabet, Amazon, NVIDIA) surged 63% to $212B. That money isn't imaginary. Neither are the chips or the hiring sprees.
Inference Is Getting Cheaper, Training Is Not
One of the more balanced takeaways: per-token inference costs are dropping, while training costs are skyrocketing. This leads to greater developer experimentation even as model development becomes more exclusive. It’s a tale of diffusion at the edge and concentration at the core.
AI's Physical World Infiltration
Real-world use cases, including robotics in China, autonomous taxis in SF, and ambient AI scribes in healthcare, are starting to move from pilot to production. The "AI meets atoms" story is maturing.
What the Report Misses
The Energy and Infrastructure Cliff
The report glamorizes Jensen Huang's metaphor of “AI factories” but completely ignores the energy input side of that equation. Where is the power coming from? How does this scale without melting grids or overloading transformers? No mention of cooling, latency, or land constraints. It’s CapEx euphoria without physical context.
Open Source as Existential Threat
Meeker treats open-source models as a monetization headwind, not a foundational threat to API business models. She ignores the licensing arms race (e.g., Mistral's Apache 2.0 releases), edge inference, and the commoditization of model weights. Open source isn't just a competitor. It's a strategic regime shift.
Conflation of Adoption and Value Capture
Charts showing job postings, dev activity, and user growth are seductive. But adoption is not synonymous with economic transformation. Enterprise AI is still a value mirage for many, with latency between experimentation and ROI spanning years.
Superficial Geopolitics
China is framed as a competitive input-output machine (robots installed, LLM user share), but there’s no discussion of divergent institutional goals. No mention of epistemic control vs. capability. No examination of how authoritarian governance might suppress emergent AGI behavior.
What It Overhypes
ChatGPT vs Google Search
The claim that ChatGPT hit 365B annual searches faster than Google is a false equivalence. ChatGPT prompts are not the same as high-intent, monetizable queries. This is bandwidth inflation, not value density.
Turing Test Victory Laps
“73% of users mistook GPT-4.5 for human.” Yes, when talking about feelings and day-to-day trivialities. No one asked it to prove a theorem or interpret regulatory filings. Passing the Turing test in this setup is a parlor trick, not a benchmark.
Image and Audio Realism
Midjourney's progression and ElevenLabs' voice cloning are impressive. But photorealism does not equal defensibility. The leap from impressive to indispensable remains unproven.
Strategic Takeaways
Infrastructure and inference are diverging. The edge is democratizing, the core is concentrating. Power will accrue to those who control tokens, not just weights.
Sovereign bifurcation is coming. Nation-states will split the stack. Sovereign models, air-gapped inference, and data localization will fragment the universal model dream.
Enterprise adoption is slow and political. AI isn't just a technology. It's a transformation of internal governance and workflows. The friction is underpriced.
CapEx signals intent, not inevitability. Massive infrastructure spend is a trailing indicator. Watch where token flow and downstream developer attention actually go.
The Contrarian View
Meeker's report is brilliant inside the system of techno-capitalist acceleration. But step outside the flywheel and things look different:
Inference cost compression = margin erosion
Power and cooling constraints = hard ceilings
OSS and sovereign models = moat destruction
Adoption != transformation
AI is real. The adoption curves are steep. But the report's most compelling charts raise a better question than "how fast?" That question is: What breaks first?
And that’s where the next story begins.
Excellent recap and analysis.
The value to big tech seems to be reducing the number of developers needed by increasing the amount of code written by AI.
I agree that enterprise adoption is going to have major headwinds.
Unlike past enterprise tech (CRM, ERP, data warehouses, BI, etc.), AI is bubbling up throughout the company like stray Google Sheets and Dropbox links. So OpenAI has users at these companies, but there is little structure or direction around how it fits with the strategy.
Dave, another good post. You're right to question constraints at the infrastructure layer (energy, data centers, etc.). Keep pushing on that thinking. It might also be worth comparing how China and the US are approaching this layer. I've also been thinking a lot about layers and system complexity: why massive societal-level changes may not happen nearly as quickly (or be as positive) as the techno-optimist view suggests, and where the key decision points or leverage points are for nudging the future in ways that turn out better for us (I'm team human).
Based on your openness to my last feedback note, I wanted to share this as another post.
https://open.substack.com/pub/tomaustin1/p/ai-layers-the-nested-layers-problem?r=2ehpz&utm_medium=ios&utm_source=post-publish
I'm sorry it has subscribe buttons; I couldn't figure out how to turn them off in the mobile app where I wrote it.
I'm going to write several other posts fairly soon on hidden complexity in several of these layers and would love to share them with you (either via email or as links in comments) if you're open.
I’m finding your posts are encouraging deeper thinking on my part and unlocking new ideas — so thanks!