I know middle management will resist this in all sorts of subtle ways, usually as concern trolling, not unlike what happens now with self-driving cars that are demonstrably better than human drivers but get nitpicked for every error.
The difference is that an AI-native enterprise can operate at a massive efficiency differential compared to an older one. So expect domains with actual market competition to be the bleeding edge of adoption, likely through new companies that have no "frozen middle" to resist.
I'm from the naive tribe where AGI would be the Machine becoming Itself.
So on the adoption side, the time frame would be shortened just from two hypotheticals:
1) Company Bio can be full of middle management while Company Artificial can produce, sell, and deliver at much faster speed.
a) The owners and upper management at Bio will quickly eliminate middle management. Profit trumps all else.
2) Seeing the extremes taking place in today's world, remember that when the AGI protects itself, it will be learning from what takes place. The RL applied to the shopper (the training human), matched against a logistical attack on the legal/political front, will blow well-meaning humans away... and the other humans will reap the windfall.
Case in point: Steven Adler published a good look at the misinformation behind the current claim of 1,000 proposed state-level AI regulations, busting the myth. Today the House begins debate on the Senate bill, so it will be interesting to see whether the myth has an effect...
I guess what strikes me is that companies love to blather about making ‘data-driven decisions’, as if we have finally emerged from the dark ages of the dreaded ‘gut feeling’, with all its attendant instincts and biases…
But in my experience over the last decade at a Fortune 100 company, precious few decisions actually get made that way. It’s more like trusted people making judgment calls… then seeking the data to justify them retroactively. (Just like human brains do.)
So what I’m wondering is, in terms of accountability or auditability, what’s the difference between a human sticking their finger in the air and saying ‘let’s do this, not that’ versus an AI doing the same?
What are your thoughts about government adoption? That will be even slower. Even today, with society on apps, many banks and payments still run on checks, and many government systems still run on paper. I believe society would have stronger resistance to government functions and legal systems being based on any degree of AI input/output.
Government adoption is tricky... I bet it will happen more quickly in some places than others. I haven't given it much thought; maybe I will set aside some time soon and write about it. It's an interesting question. Government faces its own set of challenges, and of course different governments (across countries and within countries) will take different approaches.
Strong, well-reasoned arguments as usual. I would add that in some cases the business case for change in current organisations (though resisted, as you suggest) could be large enough that start-up ventures can be successful. Examples abound of established businesses that could not adapt to and capitalise on new technology despite being the very organisations you would expect to do so (Kodak, BlackBerry, Nokia, GM, Apple). As an example of an organisation that found a way to adapt, I like News International in the UK in the 1980s. Murdoch knew he had the organisational issues you describe, in this case the change from hot type set by union workers to digital type set by journalists. So he didn't try to reform the existing operation. Instead he built a shadow production facility in Wapping, and when it was ready in January 1986 he didn't move the Fleet Street operations, he closed them. I'd also highlight Tesla as a startup built in the space left when established companies fail to adapt to new technology (hence GM in my list above).
PS: Apple is in my list of failed adopters of new tech where you would bet they'd be the leaders, and that's their failure on AI.
I'm saying that if the opportunity for AGI to disrupt existing business practice is there, don't look to incumbents to grasp it (for the reasons you gave); look to new entrants.
That doesn't answer the big issues of compliance and audit of course.
Yes, I agree that AI-native firms will disrupt many incumbents, especially in less regulated sectors. But even there I suspect that these disruptors will face a ceiling on the amount of disruption they can do until things like compliance, audit, etc., are reconciled with stochastic systems.
Generally, I agree with this chart. I do have one quibble, though: I would strongly encourage splitting up ERP and the Internet. They were two massively different waves and time periods. I can say that because I was there. The 90s (beginning in the very late 80s and extending into the early 00s) was the decade of ERP. It was the decade where SAP (and, to a lesser extent, PeopleSoft, Oracle, Siebel, etc.) and the massive consulting firms (Andersen Consulting, PwC, EY, IBM, and others) all made their bones. The Internet came later… late 00s and 10s.
Yup, good point. ERP & Internet were really two different things.
You make some good points - having worked in a regulated environment, I can see where the process and inertia will slow adoption rates.
Where I'd push back is on the timing, given the response to incentives faced by executives.
We can already see the rush to announce or fund any initiative with "AI" in the title, the mentions of AI on earnings calls, and the "received wisdom" the global corporate class is starting to align on.
More enterprise AI adoption will be driven by the fear of appearing "unsophisticated" at a dinner party than by a realistic business case or return-on-investment calculation.
A big corporation is like an oil tanker most days, but it can act like a plane when it has to under stakeholder pressure.
There’s definitely a two-speed reality here:
At the surface level, we’re already seeing a scramble for AI press releases, earnings-call soundbites, board decks, and executive retreats, all driven by the fear of appearing “unsophisticated.” That’s happening at breakneck pace, independent of real operational ROI.
But deeper down, where core deterministic machinery actually runs the enterprise (financial controls, compliance, insurance risk models, certified processes, etc.), we still face all the structural, regulatory, and epistemic inertia I laid out.
So I suspect what we’ll get is a huge wave of adoption theater over the next 3–5 years, long before the underlying institutional guts adapt.
In some ways, that might even delay serious integration because once boards can point to “AI initiatives,” they may feel they’ve already checked the box, without confronting the hard problem of embedding stochastic reasoning into environments that historically punish variance.