AGI is not imminent
Experts and insiders suggest aggressively compressed timelines for AGI. I don't think this is reasonable.
The more I think about the timeline for artificial general intelligence (AGI), the more skeptical I am that it is coming any time soon. Significantly influencing my thoughts here is this recent essay. I note three things:
Incentives of employees of leading AI companies: the people employed at OpenAI, and similar companies, have significant incentives to assert short timelines for AGI, since if they achieve AGI, they will become very wealthy;
Inaccuracy of experts’ forecasts: experts’ forecasts are generally inaccurate (see Philip Tetlock’s work on so-called superforecasters); and
Market valuation relative to world GDP: the valuations of leading AI companies (OpenAI, etc.) don’t suggest imminent AGI. For example, OpenAI is “only” valued at approximately $100 billion, but world GDP is about $100 trillion. If OpenAI or others were really as close to AGI as some speculate, their valuations would be greater by at least an order of magnitude.
Let’s think a bit more about each of these claims.
Incentives
Incentives drive outcomes, and if you know that you will become very wealthy if your company achieves something, you have every incentive to (1) believe that the company will achieve that thing, and (2) widely publicize that certainty. But merely asserting that something will happen doesn’t mean it will happen. So it is hard for me to take the forecasts of insiders at OpenAI or similar companies seriously, given their incentives.
Inaccuracy of experts’ forecasts
A lot of AI experts, both inside leading AI companies like OpenAI, and elsewhere, assert impressively short timelines for AGI. AGI within a decade is not an uncommon refrain.
On the other hand, superforecasters excel at predictions. The term was coined by Philip Tetlock and Dan Gardner in their book Superforecasting: The Art and Science of Prediction. Superforecasters are not necessarily subject matter experts, but they are adept at gathering evidence, thinking probabilistically, and updating their beliefs in response to new information. Further, their forecasts are often more accurate than those of traditional experts in specific fields.
Superforecasters challenge the conventional wisdom about expert predictions in several ways:
Aggregation of diverse perspectives: Superforecasters use a broad spectrum of information sources, including contrarian and non-consensus viewpoints.
Probabilistic thinking: Superforecasters excel in probabilistic thinking, allowing for a nuanced understanding of uncertainty.
Continuous updating: Superforecasters continuously update their predictions based on new evidence.
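The "continuous updating" habit is essentially Bayesian: a new piece of evidence shifts the probability estimate in proportion to how much more likely that evidence is under one hypothesis than the other. A toy sketch (all numbers hypothetical, purely for illustration):

```python
# Toy illustration of how a forecaster might update a probability
# estimate via Bayes' rule. All numbers below are hypothetical.

def bayes_update(prior: float,
                 likelihood_if_true: float,
                 likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Prior: say, a 10% chance of AGI by some date.
prior = 0.10
# Suppose a new benchmark result is twice as likely in a world where AGI is near.
posterior = bayes_update(prior, likelihood_if_true=0.6, likelihood_if_false=0.3)
print(round(posterior, 3))  # a modest shift upward, not a leap to certainty
```

Note how even evidence that is twice as likely under the "AGI is near" hypothesis moves a 10% prior to only about 18%, which is part of why disciplined updaters tend to sound less dramatic than pundits.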
From the essay I link to above:
When do the superforecasters think AGI will arrive? As part of a forecasting tournament run between June and October 2022 by the Forecasting Research Institute, 31 superforecasters were asked when they thought Nick Bostrom—the controversial philosopher and author of the seminal AI existential risk treatise Superintelligence—would affirm the existence of AGI. The median superforecaster thought there was a 1% chance that this would happen by 2030, a 21% chance by 2050, and a 75% chance by 2100.
These forecasts are considerably less bullish, and more skeptical, about the prospects for AGI within the next few decades. To the extent that superforecasters' track records have proven more accurate than those of domain experts, I am inclined to agree with their view.
Market valuations
As I noted above, OpenAI was most recently valued by its investors at approximately $100 billion, and world GDP is approximately $100 trillion. If OpenAI were as close to developing AGI as is commonly surmised by both its insiders and the expert community, I would expect a much higher valuation for the company, because, were AGI developed, we would expect a large share of economic activity to be mediated by it.
This isn’t to say that one should expect OpenAI (or whichever company invents AGI) to capture most of the world’s $100 trillion of economic activity per year. However, we should expect OpenAI’s revenues to increase significantly as companies buy compute from it to power their business activities. And at an annual revenue run rate of approximately $1 billion, OpenAI’s revenues are simply too small to suggest anything close to a $1 trillion or greater market valuation. (One could even reasonably argue that its $100 billion valuation is hard to justify.)
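The order-of-magnitude gap is easy to make concrete. Using the round figures quoted above (approximations from this essay, not audited financials):

```python
# Back-of-the-envelope ratios using the round numbers cited in the text.
openai_valuation = 100e9   # ~$100 billion
world_gdp = 100e12         # ~$100 trillion
openai_revenue = 1e9       # ~$1 billion annual run rate

# The valuation is about one-thousandth of a single year of world GDP...
valuation_share_of_gdp = openai_valuation / world_gdp

# ...yet already implies a ~100x multiple on current revenue.
revenue_multiple = openai_valuation / openai_revenue

print(f"{valuation_share_of_gdp:.1%} of world GDP")
print(f"{revenue_multiple:.0f}x revenue multiple")
```

The point of the arithmetic: a market that genuinely expected imminent AGI would be pricing in revenues thousands of times larger than today's, yet even the current valuation already stretches a 100x revenue multiple.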