The AI revolution will come at a later date
AI will change the world, but it will take time, just like every other revolutionary technology
An interesting observation about AI is that two seemingly contradictory things can both be true: (1) there is an investment bubble in AI; and (2) AI is revolutionary technology that will ultimately prove its worth. These claims seem contradictory because the first appears to rule out the second. If there is an investment bubble in AI, after all, then a lot of investors will lose their money, and revolutionary technology ought to generate wealth! But consider other revolutionary technologies. Many people who invested early in electricity, railroads, and the internet lost money on their investments. Over the longer term, however, these technologies created far more wealth than was invested in them.
Part of the problem is that it simply takes time to figure out how to make money from a new technology. The dot-com boom of the late ‘90s was famous for companies that incinerated vast amounts of investor capital. One canonical example would be Webvan, yet today grocery delivery is a fairly routine, if not universally adopted, business. Webvan failed for a variety of reasons, but perhaps the most important was that the technology and infrastructure required to support last-mile delivery of groceries simply did not exist in the late ‘90s. It’s hard to remember, sitting here in 2024, but mobile devices with real-time mapping were not a thing in the late ‘90s. Nor were widespread broadband internet connections. Nor were any of a dozen other technologies we take for granted today. We can scale up a last-mile grocery delivery service today because much of the technology required to support such a business has been commoditized and instantiated on mobile devices.
So who is right? Are the optimists right that AGI is imminent, and that our world will look radically different by 2030? Or are the bears right, and AI is just another example of runaway Silicon Valley hype?
Noah Smith offers two reasons why the bears will ultimately be proven wrong:
First, every new technology tends to create a speculative boom that ends in tears for a lot of investors. It happened with the railroads in the 1800s, and it happened with the telecoms in the 1990s. Yet nowadays, few people would deny that the railroads and the internet were well worth building, even if a bunch of people lost money at the time.
Second, new technologies tend not to improve productivity much at first, because people don’t know how to use them. It takes businesspeople a while to figure out how to restructure their business models to take advantage of the new capabilities. At first they tend to simply try to slot the new technology into their old models — basically what people are doing right now when they use AI to provide customer service or take orders at fast food restaurants. But the gains are typically marginal. Only later, once they find whole new tasks for the technology to do, do they really start making money and boosting productivity.
This strikes me as basically right. The AI bulls who see a radically different world in five years don’t seem to understand that we have to figure out how to use AI productively in order for it to change the world in the ways they foresee. They ascribe far too much power to the notion of autonomous AIs recursively self-improving, and pay far too little attention to the inertia inherent in the world. Just because one can conceive of a recursively self-improving AI, and just because a recursively self-improving AI is not something which violates the laws of physics, does not mean either that it is inevitable, or that it will appear within the next several years.
The pessimists have an equally inaccurate picture of the world, originating, perhaps, in their overwrought concern that AI will steal everyone’s jobs. While it is true that a more powerful AI can probably do many of the jobs we presently do, it doesn’t follow that a more powerful AI can do every job we can conceive of: labor markets are not zero-sum. When tasks are automated, people tend to find other work to do.
Further, adoption of new technology is never as rapid as the bulls expect. Nor is it as slow as the bears expect. Twenty-five years ago, Marc Benioff founded Salesforce, and many skeptics scoffed at the notion that he would convince large companies to do business via web browsers. And yet here we are today, and every business in the world regularly conducts business via web browsers. Here’s Ben Evans:
People forget this now, but the iPhone took time as well. Apple sold just 5.4m units in the first 12 months, and it took until 2010 for sales to really work (the iPod took even longer). The same, of course, applies to the enterprise. If you work in tech, cloud is old and boring and done, but it’s still only a third or so of enterprise workflows 25 years after Marc Benioff tried to persuade people to do software in the browser.
I suppose that none of this will convince either the rabid bulls or the perennial bears of their error: people generally don’t like to reconsider their views in light of contradictory information. Nonetheless, I suspect that the arc of history will prove Noah Smith’s view to have been the correct one.
Agree on all points.
The “how to monetize” question obviously also depends on the ROI, which depends on the cost of the investment. Right now AI requires a lot of energy, which is still expensive. Thesis: cheap energy is to AI as GPS and 3G were to last-mile delivery.