Enterprise AI won't happen any time soon
Enterprises require repeatable, auditable processes
Welcome to the latest edition of Buy the Rumor; Sell the News. In today’s post, I take on the claim that enterprises will rapidly adopt AI to make their operations more efficient and autonomous.
If you like what you read here, consider subscribing, and if you’re already subscribed, consider upgrading to a paid subscription. If you want to connect with me directly, my contact information is at the end of this post.
Tech accelerationists love to chant that “AGI is around the corner.” Many go further: once AGI arrives, industry will transform instantaneously. Companies will automate entire functions, abandon human oversight, and rewire operations around stochastic reasoning.
But there’s a deeper truth hiding in plain sight: Even if we have AGI by 2030, enterprises will still take a decade or two to fully adapt. Not because they’re dumb or bureaucratic (though they often are), but because they’re structurally, legally, and culturally wired to resist exactly this kind of discontinuous shift.
The enterprise immune system is deterministic by design
Corporations didn’t evolve to be nimble Bayesian learning agents. They evolved to be deterministic machines.
Given the same inputs, they’re supposed to produce the same outputs.
They maintain audit trails, enforce standard operating procedures, comply with rule-based frameworks designed to minimize variance.
Their core systems, including ERP, accounting, procurement, and compliance reporting, are explicitly built on the idea that process is repeatable and that outcomes are causally explainable. Action A generated Outcome B.
This is not incidental. It’s the bedrock of how modern legal and economic systems allocate blame and enforce contracts. You can’t just show up to a courtroom or regulator and say, “Oh, well, our stochastic decision oracle generated this bizarre output, sorry.”
You need a deterministic chain of causality that humans can inspect.
Even today’s AI runs into this wall
Large language models are stochastic by construction. They predict the next token from a probability distribution learned on vast corpora. That means:
Same prompt → slightly different completions.
Even setting the model temperature to 0 doesn’t guarantee identical output. Floating-point arithmetic is not associative, and parallel hardware doesn’t always execute operations in the same order, so tiny numerical differences accumulate across the billions of matrix calculations inside the neural network.
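To make the floating-point point concrete, here’s a minimal Python sketch (not tied to any particular model or library): even plain summation depends on the order of operations. The same effect, magnified across billions of matrix operations on parallel hardware, is why “temperature 0” is not a reproducibility guarantee.

```python
import random

# Floating-point addition is not associative: adding the same numbers in a
# different order can give a slightly different result. On a GPU, the order of
# parallel reductions isn't fixed, so repeated runs can diverge at the margins.
random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

forward = sum(values)
backward = sum(reversed(values))

print(forward == backward)      # frequently False
print(abs(forward - backward))  # tiny, but nonzero; enough to flip an argmax at the margin
```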
To enterprises, this is epistemic poison. They can’t guarantee auditability, so they wrap LLMs in human approvals or relegate them to low-risk functions (drafting marketing copy, summarizing meeting notes). They keep AI outside the core deterministic machinery.
This is why most of today’s enterprise adoption of AI amounts to toy copilots. Not because the models aren’t powerful, but because enterprise systems are designed to reject untraceable variance.
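As a rough illustration of that wrapping pattern (purely hypothetical names and functions, not any vendor’s actual API): the model may draft, but nothing reaches the system of record without a named human signing off.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved_by: str | None = None  # must be a named human before commit

def generate_draft(prompt: str) -> Draft:
    # Stand-in for a real model call; the output is treated as untrusted.
    return Draft(content=f"[model output for: {prompt}]")

def commit_to_record(draft: Draft) -> None:
    # The deterministic core only accepts output that carries a human signature.
    if draft.approved_by is None:
        raise PermissionError("unapproved model output cannot enter the system of record")
    print(f"Committed (approved by {draft.approved_by}): {draft.content}")

draft = generate_draft("Summarize Q3 vendor spend")
draft.approved_by = "j.doe"  # a human reviewer, not the model, grants approval
commit_to_record(draft)
```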
But what if we get AGI?
Accelerationists imagine this will change everything. They envision an AGI that can:
Reason across complex domains, integrate disparate data sources, generate better operational plans than any human team.
Execute flawlessly at machine speed, tirelessly optimizing outcomes.
And they’re right, in a narrow sense. A sufficiently advanced AGI could, in principle, outperform humans at nearly all knowledge work. But that’s not the only constraint. Enterprises exist inside regulatory, legal, and insurance ecosystems that simply won’t allow them to drop deterministic accountability overnight. They’ll demand:
New audit schemas, new compliance standards, new insurance underwriting norms.
Years of parallel testing to prove that AGI-generated outputs don’t produce catastrophic tail risks.
Slow, staged migration of responsibility from human managers to machine agents.
That’s not a quick software upgrade. That’s a multi-decade institutional restructuring.
History says so
Every major technology that altered core enterprise operations took decades to mature and diffuse:
Electricity didn’t require rewriting risk frameworks; it just replaced steam, and factories still took decades to reorganize around it.
ERP made processes more deterministic, not less.
Even machine learning has largely operated in narrow bands (fraud detection, forecasting), embedded inside larger deterministic wrappers.
And all of these were less epistemically radical than AGI.
An AGI that actually makes enterprise-scale operational decisions would be far more of a discontinuity. That’s why it would take even longer for institutions to digest.
Palantir is the canary
Look at Palantir, arguably the most advanced real-world enterprise AI company of the last two decades.
Founded in 2003, it spent a decade trying to get enterprises (and governments) to adopt probabilistic graph models.
It wasn’t until ~2018–2020 that it started to achieve sustained scaling in commercial markets.
Why? Because even Palantir, which mostly offered decision support (not fully automated decision-making), had to spend 15+ years building the cultural trust, the validation regimes, and the legal frameworks that would allow its probabilistic outputs to influence core operations.

And Palantir didn’t claim AGI. It offered narrowly scoped intelligence platforms.
Imagine how much longer it would take for enterprises to trust a truly opaque AGI.
The compliance chokehold
Even with perfect AGI, regulated sectors like banking, insurance, pharma, aviation, and defense wouldn’t simply plug it in.
Basel III and Solvency II don’t just accept “the AI said so” as capital adequacy rationale.
FDA 21 CFR Part 11 requires secure, computer-generated, time-stamped audit trails for the electronic records behind pharmaceutical processes.
Sarbanes-Oxley mandates internal controls that are explicitly testable by external auditors.
A superintelligent system that outputs unexplainable guidance doesn’t fit these schemas. The regulators would force multi-year shadow deployments, where AGI decisions run alongside human ones, constantly reconciled and justified, before any real autonomy is granted.
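A sketch of what such a shadow deployment might record (hypothetical field names; real validation regimes would be far more elaborate): the model’s recommendation is logged next to the human decision, divergences are flagged for review, and only the human decision is ever executed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ShadowRecord:
    case_id: str
    human_decision: str
    model_recommendation: str
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def diverged(self) -> bool:
        return self.human_decision != self.model_recommendation

audit_log: list[ShadowRecord] = []

def record(case_id: str, human_decision: str, model_recommendation: str) -> None:
    entry = ShadowRecord(case_id, human_decision, model_recommendation)
    audit_log.append(entry)
    # Only the human decision is executed; divergences feed the validation
    # case for (eventually) granting the model more autonomy.
    if entry.diverged:
        print(f"{entry.case_id}: model disagreed with human; flagged for review")

record("CLAIM-1042", human_decision="approve", model_recommendation="approve")
record("CLAIM-1043", human_decision="deny", model_recommendation="approve")
```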
The human bottleneck
Then there’s the organizational immune system.
Middle managers will resist tools that threaten their power.
Compensation structures will lag: companies won’t know how to reward human overseers of AGI-driven processes.
Boards will hesitate to sign off on risk they can’t fully explain to shareholders or courts.
None of this changes overnight, no matter how good the tech gets. The bottleneck is human and institutional, not technological.
So what does it mean?
It means the real story isn’t that AGI will instantly transform enterprises. It’s that AGI would trigger a decades-long, painful, fascinating reconfiguration of how human organizations model risk, assign accountability, and codify trust.
And that means most of today’s “enterprise AI” startups (and their VCs) still won’t capture the upside. They’re operating on 5–7 year fund cycles. The real shift would unfold on a 15–25 year horizon, likely benefitting a completely different generation of companies and institutional architectures.
The contrarian takeaway
Even if we get AGI by 2030, enterprises won’t truly restructure around it until the 2040s.
Enterprises have exquisitely attuned immune systems. They exist to manage liability, enforce repeatability, and ensure compliance in a world that punishes variance.
So yes, AGI might arrive fast. But the absorption of AGI by the enterprise? That will be slow, contentious, and one of the defining institutional battles of the 21st century.
And that’s what makes it so interesting to watch.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: