AI won't steal our jobs. It will change how we work.
Everyone thinks that AI will steal our jobs, but the reality is more prosaic.
AI will not eliminate most jobs. AI will change most jobs. But there will still be work for humans to do. This is an important distinction, and it frequently gets lost in all of the news we read about AI “stealing” people’s jobs. Google recently announced that it is planning to lay off some of its ad sales employees, since it has created AI tools to sell ads in those employees’ stead. This, we are told, is robust evidence for the claim that AI will take all of our jobs. I don’t think the world is that simple. It is rather lazy to conclude, on the basis of one company—a company which, it must be pointed out, is an outlier when it comes to AI—that all companies are going to replace all employees with AI. Google has passels of PhDs it can use to build AI tools to replace jobs en masse. Most companies don’t; they lack the technical resources that a trillion-dollar Silicon Valley technology company has at its disposal.
Sure, AI technology will become easier to use over time, as knowledge diffuses throughout the world. But it is a mistake to assume that every company will be able to copy Google and instantiate in software the jobs formerly done by people. Airlines still use dot matrix printers, and some industries have yet to fully adopt cloud computing. Institutional inertia is real. Just because a newer technology is superior to an older one does not mean that all companies will adopt it. Cost considerations count for something, but so do organizational considerations.
Further complicating this issue is that some industries may find themselves constrained by regulations or laws which militate against the adoption of AI, or which simply require that a human do a given job. Many highly unionized companies will find that their ability to implement AI technology—even technology which merely helps employees be more productive—is severely constrained by union intransigence and inflexible contracts. And many of these highly unionized industries will find themselves unable to compete with non-unionized organizations which take advantage of AI tech. Consider what will happen to the heavily unionized movie industry once generative AI is able to create a fully formed Hollywood-style movie from a series of prompts.
Consider this from Azeem:
There is a pincer movement going on. Top execs are excited by GenAI and front-line workers are using it; coupled with the comparative ease of integration (certainly compared to cloud migration, for example), this pincer could squeeze some of the internal resistance that stops large firms acting quickly.
Let’s not pretend this doesn’t create tensions. Older workers might find it hard to adapt, and middle managers could face a mid-career crisis, realising that their younger colleagues armed with AI make them redundant. The likely outcome is downward wage pressure: junior workers with AI competing with more expensive older workers, for one, and the automation (and thus decrease in costs) of many cognitive tasks.
This sounds plausible to me, but it doesn’t amount to “AI will take our jobs.” (And, to be clear, I am not claiming that Azeem is making this argument.) What it does suggest to me, though, is that, as I indicated earlier in this piece, workers who are adaptable and agile will fare well, and those who are not will not.
Further, we don’t even know how to evaluate large language models (LLMs) well enough to figure out what sorts of jobs they’d be good at. What potential is there for displacement if we can’t even figure out what LLMs excel at? Here is one writer, in a long post cheekily titled Evaluations are all we need¹:
We built these “reasoning” engines, but we don’t know how they work. They are the modern instantiation of “smart middleware,” which we are soon going to be using to fill jobs all over, to the point that there is a literal panic in policy circles about unemployment caused by AI coming very soon. Elon Musk thinks “no jobs will be needed” at some point soon.
And yet, when I have spoken to friends in various fields, from finance to biotech drug discovery to advanced manufacturing, across the board there is a combination of uncertainty about whether we can use them, what we can use them for, and even for those well prepared, how to actively start using them.
It makes sense, because they’re not just choosing a model to have a chat with. They are evaluating not just the LLM, but the system of LLM + Data + Prompt + [Insert other software as needed] + as many iterations as needed, to get a particular job done.
The confusion is rampant, including among people who have used LLMs and are actively experimenting with them. They think “we have to train it with our data” without wondering which part of that data is the most useful bit. If it’s simple extraction, RAG is sufficient. Or is it? Are we teaching facts or reasoning? Or are we assuming the facts and reasoning, once incorporated, will be more than the sum of their parts, as has been the case for all large general LLMs so far? Is “it gave me the wrong response” a problem with the question or the response? Or just a bug?
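To make the quoted point concrete, here is a minimal sketch of what evaluating the system—rather than the model alone—might look like. Everything in it is an assumption for illustration: call_llm stands in for whatever model endpoint you actually use, retrieve for whatever RAG step you bolt on, and keyword matching is a crude stand-in for a real grading rubric.

```python
# A minimal sketch: you don't evaluate "the model" in isolation, you evaluate
# the whole system -- model + prompt + retrieved data + glue code. All names
# here are hypothetical placeholders, not any real library's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str                  # what a user would actually ask
    expected_keywords: list[str]   # facts the answer must mention to "pass"

def build_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Combine retrieved context and the question into a single prompt."""
    context = "\n".join(retrieved_docs)
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def evaluate_system(
    call_llm: Callable[[str], str],        # hypothetical model endpoint
    retrieve: Callable[[str], list[str]],  # hypothetical retrieval (RAG) step
    cases: list[EvalCase],
) -> float:
    """Return the fraction of cases whose answers contain the expected facts."""
    passed = 0
    for case in cases:
        prompt = build_prompt(case.question, retrieve(case.question))
        answer = call_llm(prompt).lower()
        if all(kw.lower() in answer for kw in case.expected_keywords):
            passed += 1
    return passed / len(cases)

# The score moves when you change the model, the prompt template, or the
# retrieval step -- which is why "evaluating the LLM" on its own tells you
# little about whether the overall system can do a given job.
```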
In short, there is a lot we currently don’t know about LLMs. Just because an LLM may, or will, exhibit intelligence greater than most or all humans does not mean that it can be set upon a particular company’s problems and do the jobs that humans previously did. We need to unpack this claim a bit in order to get to the heart of the argument I’m presenting here. Assume you are tasked with building an AI bot to replace a bunch of jobs at your company. We’ll call the company Disruptor. You sit down with your bosses at Disruptor, and they say, “We’re replacing our marketing analysts with [an advanced form of] ChatGPT. Figure it out!”
You dutifully head off to your office, scratching your head. I know superficially what a ‘marketing analyst’ is, but how do I instantiate one in software? I guess I’ll go talk to our marketing analysts and figure out what it is they do all day. So you go find Disruptor’s marketing analysts, and you talk to them. And they give you a description of their job: their responsibilities, who they report to, who reports to them, etc. And—you think you’ve figured it out! Of course this can be instantiated in code! You go back to your bosses and deliver the good news. You provide them with a time frame for writing the code to interact with Advanced ChatGPT, and you tell your bosses to give the marketing analysts notice that they’ll be out of a job at the end of the quarter.
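To make the thought experiment concrete, here is a hypothetical sketch of what that “good news” implementation might amount to. Disruptor and Advanced ChatGPT come from the scenario above; the role description, the advanced_chatgpt callable, and every other detail are invented for illustration.

```python
# A hypothetical sketch of the "instantiate the job in code" plan described
# above. `advanced_chatgpt` stands in for whatever model API Disruptor has
# licensed; the role description is the sort of summary the analysts
# themselves might give you in an interview.

ROLE_DESCRIPTION = """
You are a marketing analyst at Disruptor.
Responsibilities: track campaign performance, prepare the weekly metrics
report for the VP of Marketing, and recommend budget shifts across channels.
"""

def marketing_analyst_bot(task: str, advanced_chatgpt) -> str:
    """Wrap the written job description around an incoming task and ask the model."""
    prompt = f"{ROLE_DESCRIPTION}\nTask: {task}\n\nProduce the deliverable."
    return advanced_chatgpt(prompt)

# Note what the "replacement" consiststs of: a written description plus a model
# call. Anything the description leaves out -- which stakeholder really
# decides, which spreadsheet is the actual source of truth, which numbers the
# VP quietly ignores -- is nowhere in this code.
```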
And then—disaster. It turns out that a lot of what a marketing analyst, or indeed virtually any white-collar knowledge worker, does isn’t neatly captured by a description of their role, their responsibilities, or whom they report to. Wrapped up in all of these jobs are all kinds of tacit knowledge and unstated processes, which neither you nor your interlocutors can possibly capture. Why, then, should we expect that an AI—even one smarter than all people—could capture these ineffable qualities that make work, well, work?
We are talking here about automation. And companies are notoriously slow to adopt automating technologies, even when the technology promises to reduce operating costs or improve efficiency. Here’s one writer on this topic:
But while it’s accurate to say “automation is hard and complicated, and therefore slow,” it’s also deeply misleading. Stopping there obscures what social scientists call mechanisms—the specific interaction patterns that actually cause outcomes to happen or not to happen. If we don’t try to understand those, we’re ignoring huge swathes of psychological and sociological research on how individuals, groups, and organizations change as new forms of automation emerge. If we do understand mechanisms, we can make better predictions about when automating something will be quick, when it will be slow, when it will be easy, when it will be hard.
He continues later in his piece:
But retaining archaic automating technology turns out to be a side-effect of the way organizations automate in general. Every organization, everywhere, has the moral equivalent of dot matrix printers in service—I bet you are already thinking of one or two examples in your context. And the same mechanisms that drive the dot matrix phenomenon also impede automation that relies on the latest technology.
Anyone who’s yelling loudly about the need to put LLMs to use right away doesn’t understand this reality. And if you’re a pragmatist—someone who wants to get great results with this new general-purpose technology without sacrificing your organization or team to do it—you need to understand this reality.
At root, the confusion arises, I think, from the following observations: AI technology is rapidly increasing in capability, and timelines for artificial general intelligence (AGI) have been radically compressed in recent years. But there has not been a concomitant increase in the speed with which legacy institutions (corporations) operate. While the technology has been rapidly improving, organizational processes have remained static. And there isn’t any reason to think that this will suddenly change once AGI is here. Just because the world has AGI does not mean, for example, that Coca-Cola would be able to flip a switch, fire all of its employees, and turn over the keys to the (a?) AGI.
The reality is this: all of our jobs will change, often radically, due to AI. But AI will not “steal” or “take” our jobs or “displace” us. Rather, AI will force us, and the companies for which we work, to change how we do what we do. Those who are adaptable and agile will fare well, and those whose training or deportment makes them more rigid will fare poorly. The onus is on each individual person to figure out on which side of the adaptable/rigid line they fall, and to adjust accordingly.
1. The title of his post is, of course, an allusion to the Google paper that introduced the transformer architecture underlying large language models, Attention Is All You Need.