Every company will be an AI company
How do you get companies to adopt AI technology? At some point they will have to adopt it, but it's a tough row to hoe.
Introduction
There’s a lot of discussion now about AI in the enterprise: traditional, non-tech corporations will at some point have to adopt AI technology in order to stay competitive. Lost in these discussions is any consideration of how they will adopt this technology. AI promises to upend a lot of business operations, and companies will find that they have to re-orient their workflows to take advantage of AI. But the state of AI is constantly in flux. The field is rapidly developing, in pursuit of elusive AGI. Enterprises don’t buy technology on the promise of the arc of development, though. They buy on the basis of the problems that today’s tech can solve. We see a lot of stories about how senior business executives at Davos are talking about AI:
Mistral becomes the talk of Davos as business leaders seek AI gains
The future of work in the age of generative AI: insights from Davos
From Davos to dominance: how AI is rewriting our planet and business
And, this makes sense. I don’t think there is any senior executive—call them a CxO for short—who thinks that AI won’t affect their business. And yet, I think that many traditional, non-tech businesses will have much more trouble integrating AI into their workflows than is commonly expected. The executives quoted in the news articles linked above speak in general, high-level terms about the notion of AI. But they don’t engage with its substance. Conferences like Davos are where people speak in vague, high-level platitudes about the Important Topics of the day. But the details of those Important Topics are left to underlings.
And this is where I think there is a disconnect between the senior executives who say, essentially, “Sure, yes, AI is going to be very important to the future of [large, traditional, non-tech corp] because of [reasons]” and the people who develop AI technology. I see a lot of technologists seizing upon executives’ comments at conferences like Davos and assuming that the companies for which those executives work are fully integrating AI into their business. And to a very large extent, this just isn’t true. At least not yet.
I relate all of this because Ethan Mollick has a great post about recent developments in AI, which touches on how companies can take advantage of AI tech. His suggestions are good. And yet, I don’t think many companies will follow them. I will quote his suggestions at length here, and then provide the comment I made on his post, so that you can see where I am taking this argument. His suggestions:
What useful thing you do is no longer valuable? AI doesn’t do everything well, but it does some things very well. For many organizations, AI is fully capable of automating a task that used to be an important part of your organizational identity or strategy. AI comes up with more creative ideas than most people, so your company’s special brainstorming techniques may no longer be a big benefit. AI can provide great user journeys and personas, so your old product management approach is no longer a differentiator. Getting a sense of what AI can do now, and where it is heading, will allow you to have a realistic view of what might soon be delegated to an LLM.
What impossible thing can you do now? The flip side of the first question is that you now can do things that were impossible before. What does having an infinite number of interns for every employee get you? How does giving everyone a data analyst, marketer, and advisor change what is possible? You can look at some of the GPTs my students created as inspiration.
What can you move to a wider market or democratize? Prior to AI, companies were often advised to put their effort into servicing their most profitable customer, but AI has greatly changed the equation. Services and approaches that were once expensive to customize have become cheap. Prior to AI, strategy consulting firms would only work for giant clients for large fees, but now they may be able to offer effective advising to a much wider range of businesses at lower costs. Custom tutoring and mentoring, once available only to the rich, may be widely democratized.
What can you move upmarket or personalize? At the same time, your organization’s capabilities have increased. If you were once a small marketing firm, you can use AI to punch above your weight and offer services to elite clients that were once only available from much larger firms. With giant context windows and fast answers, every customer may be able to have a personal AI agent who knows their preferences and previous interactions with the company and communicates with them according to their preferences. Figure out the most exciting thing you can do, and see if you can make it happen.
This is all reasonable advice, and the companies that follow it will likely find AI useful. However, here’s what I commented on his post:
The suggestions that you provide for leaders are reasonable, but I am skeptical that a CxO of a generic trad corp will have the wherewithal or desire to do what you suggest. I predict that most trad corps will use AI to do seemingly simple, but actually complex, stuff like chatbots, etc. And I expect those initiatives to fail spectacularly. It will take trad corps a while yet to figure out how to integrate AI into their workflows. This presents an interesting problem for a company like OpenAI, which is trying to get trad corps to sign enterprise contracts with it. Less so for Google/MSFT/etc., which have the cash to ride out the learning cycle.
I want to unpack this comment a bit, because it gets to the heart of why I think that large, traditional, non-tech companies are going to have such trouble with AI. And, to the extent that large, traditional, non-tech companies have trouble with AI, that spells trouble for AI companies like OpenAI which are trying to build enterprise services businesses.
Let’s take Ethan’s suggestions one by one.
What useful thing you do is no longer valuable? Ethan notes that AI does some things well, but it doesn’t do everything well. But the suggestions he offers, which are good and accurate ones, are way too granular for a large company’s leadership to know anything about. What Ethan is essentially saying here is that a large company’s leadership should buy its staff access to AI technology, and let their underlings figure it all out. And, while this makes sense, this kind of bottom-up discovery process is foreign to many hierarchical corporate environments. In order to re-orient your company around AI, you need to figure out, at a granular level, what AI can do for you, and what it can’t. And that’s not something that can be foisted upon the company from the top. And yet—the top has to support these initiatives, due to their cost, complexity, and so on. There are complex internal tensions to resolve.
What impossible thing can you do now? This question, essentially an inversion of the first, seems more amenable to senior leadership. But, again, details matter here. Senior leadership is usually aware of ‘things we’re not doing well because we don’t have the skills or knowledge to do them’. But getting from ‘X is impossible for us to do’ to realizing that AI can help you do X requires an intimate knowledge of what AI can and can’t do! So we’re back to square one. Much of implementing AI at a large, traditional, non-tech corporation entails developing a good sense of what it is good for, and what it is not. And this kind of granular information is just not senior executives’ comparative advantage. Recall the point I made about executives speaking at Davos, at the beginning of this post. Senior corporate executives’ remit is the abstract and vague. Again we’re left with this quandary: in order to know what formerly impossible thing is now doable, given AI, we need to know what AI is good at and what it is not. This kind of bottom-up learning is all too rare at many large corporations.
What can you move to a wider market or democratize? This seems like a fairly straightforward strategy exercise. It’s pretty evident to a company’s leadership who their main customers are. And, if they can use AI to profitably serve more customers, they’d certainly be interested. But we’re left with the same issue that I’ve commented on above: the company needs to understand what AI can do and what it can’t. This is granular knowledge that corporate leadership often does not have.
Roadblocks to AI at non-tech companies
Common roadblocks include:
Lack of expertise. Traditional corporations don’t necessarily have the recruiting expertise to find people with AI skills. Nor do they have the operational skillset required to integrate these technologies into their workflows.
Costs. Though it’s true that the cost of intelligence computation, commonly quoted in terms of tokens, has declined to fractions of a penny per token, there are other costs, including staffing, services contracts, opportunity costs, etc. Corporations in general view technology as a cost center, and it is often harder than an outsider would expect to get a corporation to adopt a new technology. This is especially true for an enterprise-wide technology such as AI.
Technical challenges. The great thing about ChatGPT is that it ‘just works’ out of the box. You sign up for an account, provide payment information, and you can start using it immediately. This lulls people into thinking that all AI is as painless to set up and use. And that’s simply not true. Corporations will find that vast swathes of their organization need to be restructured and reskilled to use AI effectively. Just consider how Google has been going through rounds of layoffs to restructure its operations around AI, and then think about how a less technically savvy traditional corporation could do the same thing.
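On the cost point above: the direct model bill is the easy part to reason about. Here is a back-of-the-envelope sketch in Python; every number in it (query volume, tokens per query, per-token price) is a hypothetical assumption for illustration, not a quote from any vendor.

```python
def estimate_monthly_model_cost(queries_per_day: int,
                                tokens_per_query: int,
                                usd_per_million_tokens: float,
                                days: int = 30) -> float:
    """Rough monthly spend on model usage alone.

    Deliberately excludes the costs the text flags as larger in
    practice: staffing, services contracts, and opportunity costs.
    """
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical internal-chatbot pilot: 10,000 queries/day,
# ~1,500 tokens each (prompt + response), at an assumed
# $10 per million tokens.
cost = estimate_monthly_model_cost(10_000, 1_500, 10.0)
print(f"${cost:,.2f}/month")  # $4,500.00/month
```

The point of the sketch is that the compute line item is small and calculable, while the organizational costs around it are neither.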
Here’s an interesting chart from a recent McKinsey report:
It’s about what you would expect: workers in the technology industry have the most experience with AI, but relatively few workers, across all industries, use AI in their work. For the most part, AI is and remains a consumer phenomenon. (Yes, there are obviously extant commercial use cases for AI, but surely you’ll agree with me that the ChatGPT phenomenon with which we’re all familiar has been mostly a consumer one.)
However, in spite of the apparent interest in AI, McKinsey also reports:
few companies seem fully prepared for the widespread use of gen AI—or the business risk these tools may bring. Just 21 percent of respondents reporting AI adoption say their organizations have established policies governing employees’ use of gen AI technologies in their work. And when we asked specifically about the risks of adopting gen AI, few respondents say their companies are mitigating the most commonly cited risk with gen AI: inaccuracy. Respondents cite inaccuracy more frequently than both cybersecurity and regulatory compliance, which were the most common risks from AI overall in previous surveys.
(“Inaccuracy” here appears to mean the tendency of generative AI tech like ChatGPT to hallucinate.)
It’s clear that companies have their work cut out for them when it comes to implementing AI technology. I suspect that a lot of technologists, especially those who work on AI tech at companies like OpenAI, Google, etc., underestimate the amount of institutional inertia at the large traditional non-tech companies to whom they want to sell AI compute. This isn’t to say that selling these services is an impossible task: Silicon Valley, after all, has been largely successful in getting corporate America to migrate to the cloud. But sales cycles in enterprise are long, and sellers looking to get large companies to adopt AI tech will need to figure out how to sell a very complex product to an organization reluctant to spend yet more money on technology whose payoff may seem, to the buyer, illusory.
Suggestions for companies looking to adopt AI tech
Given all of this, following are some suggestions for how traditional, non-tech corporations can adopt AI tech.
Invest in AI literacy at all levels: Corporations should prioritize AI education and literacy, from the boardroom to the operational level. Understanding AI’s potential and limitations is crucial for strategic implementation.
Adopt a phased integration approach: Start with pilot projects and small-scale implementations to build expertise and understand the impact on workflows before scaling up.
Focus on governance and risk management: Establish clear policies and guidelines for AI use, addressing potential risks and ensuring regulatory compliance.
Leverage external expertise: Collaborate with tech companies, consultancies, and academic institutions to bridge knowledge gaps and accelerate the adoption process.
Prioritize flexibility and adaptability: Encourage a culture of innovation and flexibility, enabling the organization to pivot as AI technologies and business needs evolve.
Conclusion
Integrating AI into traditional, non-tech corporations is a complex, multifaceted challenge that requires strategic foresight, operational flexibility, and a commitment to continuous learning and adaptation. The potential benefits of AI are substantial, but realizing these benefits requires overcoming significant barriers, both internal and external. By addressing these challenges proactively, traditional corporations can successfully use AI.