Welcome to the hundreds of new subscribers who have joined over the past few weeks. In today’s free post, I take a look at enterprise adoption of AI.
And if you like what you read here and you’re not yet subscribed, consider subscribing. Most of my posts are free though deeper dives tend to be paid.
I recently wrote about how enterprises will be slower to adopt AI than many people in Silicon Valley expect. That post focused on the organizational difficulties that enterprises face when considering whether to adopt AI. This post is a follow-up which focuses on a more technical aspect of enterprise hesitancy: the stochastic nature of large language models.
What is Stochastic Behavior?
Large language models are stochastic, meaning that their output is sampled from a probability distribution.[1] In practice, this means that the output LLMs generate is non-deterministic. And enterprises are generally averse to non-deterministic things!
Compare non-deterministic output with a conventional, deterministic computing environment that most people are familiar with. Take, for example, a spreadsheet:

Every time you open this spreadsheet, the calculations it returns are the same (assuming you don’t edit the formulas). Yes, certain spreadsheet functions, like RAND(), generate pseudorandom values each time a spreadsheet is opened or recalculated, but the algorithms underlying those pseudorandom numbers are themselves deterministic. The toy sketch below illustrates the contrast.
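To make the contrast concrete, here is a toy Python sketch (the vocabulary and logits are invented for illustration, not taken from any real model): a deterministic function returns the same answer every time, while sampling from a probability distribution, the way an LLM does at non-zero temperature, can return something different on each call.

```python
import math
import random

# Deterministic: same inputs always yield the same output,
# like a spreadsheet formula.
def spreadsheet_sum(values):
    return sum(values)

# Stochastic: sample the next token from a probability distribution
# over a vocabulary, as an LLM does at non-zero temperature.
def sample_next_token(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

vocab = ["profit", "loss", "revenue", "expense"]  # invented for illustration
logits = [2.0, 1.0, 0.5, 0.1]

print(spreadsheet_sum([1, 2, 3]))             # always prints 6
for _ in range(3):
    print(vocab[sample_next_token(logits)])   # may differ on every run
```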
What Does This Have to do With Enterprise Adoption of AI?
Enterprises are not monoliths; they’re a collection of domains, each with different tolerances for stochasticity, risk, and explainability. So while enterprise AI adoption sounds monolithic in VC decks, in practice it’s highly localized to departments where the failure modes of stochasticity are non-fatal.
High-tolerance domains (early AI adoption)
Marketing: Stochasticity is fine; variability in copy, customer segmentation, or audience targeting is not catastrophic.
Sales Enablement: Generating slide decks, summaries, and call prep; errors are annoying, not fatal.
Customer Support: AI can handle Tier 1 issues; escalation handles edge cases.
Internal Knowledge Search: Imperfect but often better than the existing SharePoint graveyard.
Low-tolerance domains (slow or no AI adoption)
Finance: Requires auditability, deterministic output, and compliance. “I don’t know how the model got this number” is not acceptable to a CFO.
Legal/Compliance: Hallucinated precedents mean lawsuits. Every word must be traceable.
Operations/Supply Chain: Errors can propagate through just-in-time inventory management systems and cause real-world failures.
HR/People Ops: Stochastic evaluations or feedback summaries invite bias accusations and lawsuits.
Fundamental Technical Barriers
Unreliability of Outputs: LLMs are not yet robustly calibrated to express confidence or flag uncertainty in a way that downstream processes can reliably act on.
Lack of Determinism: Even with temperature set to zero, LLM outputs are path-dependent (prompt rephrasing affects output), which violates the core principles of repeatable systems design. A toy illustration follows this list.
Non-Auditability: There’s no persistent internal state or traceable chain of logic, so outputs can’t be reconstructed or verified post hoc.
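Here is a toy illustration of that determinism point. This is not a real model: it derives fake “logits” from a hash of the prompt, purely to show the mechanism. Temperature-zero (greedy) decoding is repeatable for an identical prompt, yet a semantically equivalent rephrasing shifts the underlying distribution and can change the output.

```python
import hashlib

# Invented for illustration: a tiny vocabulary of possible "decisions".
VOCAB = ["approve", "deny", "escalate"]

def toy_logits(prompt: str):
    # Toy stand-in for a model's forward pass: derive "logits" from a hash
    # of the prompt. Real LLM internals are far richer, but the key property
    # carries over: the distribution depends on the exact wording.
    digest = hashlib.sha256(prompt.encode()).digest()
    return [digest[i] for i in range(len(VOCAB))]

def greedy_decode(prompt: str) -> str:
    # Temperature 0 is effectively argmax: repeatable for the *same* prompt.
    logits = toy_logits(prompt)
    return VOCAB[logits.index(max(logits))]

p1 = "Should we approve invoice #1234?"
p2 = "Invoice #1234: approve it?"  # same meaning, different wording

print(greedy_decode(p1), greedy_decode(p1))  # identical: same prompt, same output
print(greedy_decode(p1), greedy_decode(p2))  # may differ: rephrasing shifts the distribution
```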
Why Most Enterprise AI Demos are Misleading
Nearly all current enterprise AI demos mislead because:
They cherry-pick use cases from high-tolerance domains.
They rely on human-in-the-loop systems.
They assume post-processing or filtering layers.
They sidestep regulatory auditability altogether.
This gives the illusion of readiness for core enterprise workloads. But without robust predictability, reliability, and traceability, AI remains a peripheral tool.
What Would Need to Change for Core Functions Like Finance to Adopt AI
Formal Verification for Models: Mechanisms akin to theorem provers or constraint solvers that guarantee compliance with financial rules or logic.
Auditable LLMs: Some combination of embedded logging, explainability layers, and reproducibility guarantees (a minimal sketch of the logging piece follows this list).
Hybrid Models: Symbolic + neural hybrids, in which deterministic rule engines constrain or verify what the neural model emits.
Regulatory Sandboxes: Environments where AI can be used in regulated domains without fear of punitive blowback, encouraging experimentation.
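On the “Auditable LLMs” bullet, here is a minimal sketch of what embedded logging might look like. The generate() function is a hypothetical stand-in for whatever model API an enterprise actually uses; the point is the wrapper, which records the model version, decoding parameters, and hashes of the prompt and output for every call.

```python
import hashlib
import json
import time

AUDIT_LOG = []  # in practice this would be an append-only store

def generate(prompt: str, temperature: float) -> str:
    # Hypothetical model call; replace with a real client in practice.
    return f"stub completion for: {prompt}"

def audited_generate(prompt: str, *, model_version: str, temperature: float = 0.0) -> str:
    output = generate(prompt, temperature)
    # Record enough metadata that an auditor can verify, post hoc, that this
    # output came from this input under this configuration.
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "model_version": model_version,
        "temperature": temperature,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output

answer = audited_generate("Summarize Q3 variance.", model_version="model-v1")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

Hashing rather than storing raw text is a deliberate choice in this sketch: it preserves verifiability without retaining sensitive data in the log. It doesn’t solve explainability, but it makes post-hoc reconstruction checks possible.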
AI Will Be Pervasive but Selectively Trusted
Enterprise adoption of AI will be patchy and domain-specific until there is a deep transformation of how these systems handle logic, provenance, and auditability. The smart money isn’t on AI everywhere but on AI where it can tolerate being wrong. And that excludes the CFO’s office for the foreseeable future.
Dwarkesh Patel wrote a useful and speculative essay about what a fully autonomous enterprise might look like. (I wrote a response to it here.) It’s worth unpacking what stochasticity implies about his arguments. Stochasticity undermines Patel’s notion of a fully autonomous company, at least as envisioned under the current LLM paradigm. It’s a seductive vision: a company as a stack of agents, with an AI CEO, AI marketers, AI engineers, and AI salespeople all working in concert with little or no human involvement.
But once you consider the three constraints of logic, provenance, and auditability, you quickly realize that Patel’s theory is unworkable given current LLM technology. That doesn’t mean it’s impossible over the longer term. But it does mean that no firm will be fully autonomous in the way that he and other AI accelerationists envision any time soon.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:
follow me on Twitter,
connect with me on LinkedIn, or
send an email to dave [at] davefriedman dot co. (Not .com!)
[1] This observation gave rise to the quip that large language models are but stochastic parrots.