Enterprise AI Will Be a Trench War, not a Blitzkrieg
The macro inertia, micro entropy, and probabilistic chaos of deploying AI inside real companies
Silicon Valley is intoxicated by AI. Venture capitalists and founders alike are betting that large enterprises will adopt LLMs en masse, restructure workflows, and unlock billions in productivity gains. It’s an appealing thesis: plug in a model, automate a process, print money.
But this view is a hallucination of its own.
Enterprise AI adoption will be slow, fragmented, and politically fraught. It won’t resemble a blitzkrieg of innovation. It’ll resemble a trench war: grinding, unpredictable, and full of false starts.
To understand why, you need to consider two intertwined truths:
Enterprise transformation is bottlenecked by people, politics, and legacy systems.
LLMs are not like software. They are probabilistic, weird, and anthropomorphic, and thus fundamentally alien to enterprise IT norms.
Let’s unpack the layers of resistance.
I. The Organizational Terrain: Political, Bureaucratic, and Anti-Iterative
Enterprise IT is optimized for compliance, not creativity. The Valley builds for environments that assume speed and permissionlessness. Enterprises operate in environments that assume risk, regulation, and retrenchment. That gap is not just cultural, but structural.
LLMs are inherently probabilistic. They don’t guarantee correct answers. They’re powerful, yes, but also unpredictable. This offends the enterprise immune system:
CFOs don’t want “probably accurate” financial models.
Legal doesn’t want “usually compliant” redlines.
Risk officers don’t want “likely non-discriminatory” loan approvals.
As Krishnan writes, “Perfect verifiability doesn’t exist.” Enterprises are not set up to tolerate this. They expect deterministic guarantees, not Bayesian vibes.
II. Enterprises Don’t Have AI-Shaped Holes
Matt Clifford’s line—“there are no AI-shaped holes lying around”1—is clarifying. You can’t just drop an LLM into an existing org chart and expect transformation. You must re-architect the workflow, retrain the humans, rebuild the evaluation loop, and often reinvent the product.
This requires three things enterprises hate:
Ambiguity: What exactly does the AI do?
Iteration: How do we make it better?
Ownership: Who’s responsible when it fails?
That’s why most AI pilots die in the corporate innovation lab or get trapped in perpetual proof-of-concept purgatory. It’s not a technology problem. It’s a willingness-to-change problem.
III. Trust Is Asymmetric, and Error Is Fractal
Enterprises can’t afford asymmetric failure modes. An LLM that works 95% of the time still fails 1 in 20 times. And that one failure could violate GDPR, leak IP, or generate a racially biased output.
As Krishnan notes, this creates a Pareto frontier of failure. You can layer in tool use, RAG, and verification steps, but each added component introduces its own failure surface. LLMs passing messages down a chain tend to distort them at each step, akin to a game of telephone with probabilistic minds.
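The compounding effect is easy to see with back-of-envelope arithmetic. A rough sketch, assuming each step in a chained pipeline fails independently (a simplification; real errors often correlate, which makes things worse, not better):

```python
# Back-of-envelope: end-to-end reliability of a chained LLM pipeline.
# Assumes each step succeeds or fails independently -- an illustrative
# simplification, not a model of any real system.

def pipeline_reliability(step_success_rate: float, num_steps: int) -> float:
    """Probability that every step in the chain succeeds."""
    return step_success_rate ** num_steps

# A single step that is right 95% of the time fails 1 call in 20.
single = pipeline_reliability(0.95, 1)   # 0.95

# Chain five such steps (say: retrieval, generation, tool call,
# verification, summarization) and reliability drops below 80%.
chained = pipeline_reliability(0.95, 5)  # ~0.774

print(f"single step: {single:.3f}, five-step chain: {chained:.3f}")
```

Five components that each look production-grade in isolation yield a pipeline that fails roughly one request in four.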
This dynamic kills the central dream of AI: composability. In reality, every added component increases systemic entropy. The result is often a local optimum that works well in demo environments and collapses under production stress.
IV. Enterprise AI Economics Are Inverted
Traditional software succeeded because marginal costs collapsed. AI doesn’t offer that luxury.
Inference costs are real, variable, and often opaque. Hallucinations increase support costs. Verification requires human oversight. And model churn, with new weights, new APIs, and new scaffolding, creates unplanned technical debt.
Krishnan nails it: “Cognition eating software brings back a metered bill.” Enterprises will discover that their beautiful AI-powered app suddenly has a unit economics problem when the monthly bill from OpenAI, Anthropic, or their fine-tuned in-house model starts eating margin.
The illusion of scale vanishes once your LLM starts costing more than your mid-level analysts.
V. Talent, Not Technology, Is the Real Bottleneck
Enterprises don’t have the internal fluency to use LLMs strategically. AI-native thinking, including prompt engineering, workflow scaffolding, and system orchestration, is rare outside the Valley. And unlike cloud or SaaS, AI changes the nature of cognition, not just infrastructure.
Most enterprises are stuck between two extremes:
Junior enthusiasts who lack political capital
Senior execs who lack technical understanding
The result: cargo-cult AI initiatives, hallucinated roadmaps, and decks full of nothingburgers. You can’t transform an org until the org understands what’s being built, and why it’s worth the risk.
VI. AI Isn’t the Product. It’s the Mutation Layer
The companies that succeed with AI won’t treat it like a product feature. They’ll treat it like a mutation layer. They’ll redesign workflows, reprice products, retrain teams, and rebuild decision loops to co-evolve with LLMs, not just deploy them.
This demands something rare: institutional adaptability. Most enterprises don’t have it. They’ve spent decades ossifying, not mutating.
And so, AI will not sweep through the enterprise like a viral SaaS tool. It will crawl, bleed, mutate, and be resisted, until it finds an internal champion with the guts and capital to do something politically unpopular and structurally inconvenient.
Then it will work. But not before.
Conclusion: AI’s Enterprise Future Will Look More Like SAP Than Slack
The Valley imagines LLMs as the next Slack: fast, viral, cross-functional. But the more apt analogy might be SAP: expensive, slow, high-stakes, and difficult to reverse once embedded.
That’s not a bug. It’s the nature of deep infrastructure change. And LLMs are deep infrastructure, epistemically and operationally.
Enterprise AI is inevitable. But it will be built through trench warfare, not a blitzkrieg. The firms that win will not be the fastest—they’ll be the ones that survive the entropy.
Credit again goes to Rohit Krishnan for surfacing this great quote.
I spent 20 years slinging enterprise tech (including at Slack) and think this is 90% accurate. My only caveat: I do think there will be *some* use cases with SaaS-style tool outbreaks, for very bounded functions, especially in data-to-action workflows.
For example, CRM databases are still clunky and, honestly, pretty useless. But Gong and other tools that automate away the data-entry side, married to chatbot outputs that trigger specific actions, could flip the traditional CRM model and quickly render it obsolete.