Introduction
Tyler Cowen recently wrote a post about Claude 3 and AGI, which is here. My initial reaction to it was that it was inscrutable. This post is my effort to parse what, exactly, he's arguing. I had some assistance from ChatGPT (using GPT-4), but what follows is my synthesis of Tyler's blog post, ChatGPT's interpretation of it, and my own understanding of it. In other words, what follows are my words, but they are words I arrived at using ChatGPT as an assistant.
His post explores the complexities and ambiguities surrounding the concept of Artificial General Intelligence (AGI). He engages with current discussions and perceptions about AI capabilities and their future potential.
The definition problem: what do we mean by “AGI”?
He highlights the lack of a well-defined understanding of AGI. If we analogize a theoretical AGI to human intelligence, we quickly see that a general intelligence, whether artificial or natural, does not by itself confer the ability to do everything to which intelligence can be applied. To make this concrete: a person is generally intelligent, but no person can do everything that everyone else can do. If this observation holds for natural intelligence, why wouldn't it also hold for an artificial intelligence? We are left to wonder: is "general intelligence" a meaningful concept in AI? Is the term misleading or inadequate? Does it capture the essence of what intelligence encompasses?
The role of AGI in current debates
Tyler then shifts to how the concept of AGI is used in debates, noting how counterproductive the term can be. He points out that predictions about AGI often reflect the biases or temperaments of the people making them, rather than the actual capabilities or limitations of AI technologies. This observation underscores the problematic framing of AGI in discussions about AI's impact on society and its technological boundaries. Our interpretation of AGI and AI is informed by our biases about these terms.
Historical perspective on AGI
Tyler notes that if Claude 3 Opus had been available five years ago, it might have been considered AGI by the standards of that time. Five years ago, after all, only AI researchers had an inkling of the potential of large language models. To the rest of us back then, Claude 3 Opus would have seemed like magic. It would have seemed like our conception, at the time, of AGI. Perceptions of AGI are a function of the technological context and capabilities of the era in which they are formed. AGI, in other words, is a moving target, devoid of a concrete definition. The AI of five years from now (call it GPT-6 or Claude 5) will seem like AGI to us here in 2024.
Concluding thoughts
He concludes with a provocative dismissal of the pursuit of “true AGI,” arguing that the expectations for AI in 2019 were based on a lack of imagination and critical thinking. This final statement serves as both a critique of past attitudes towards AI and a skeptical view on the feasibility or significance of achieving AGI as it is commonly conceptualized.
His post raises important considerations about the way we define, discuss, and anticipate AI's development and capabilities. By challenging conventional wisdom and inviting a reevaluation of our expectations, he questions not only the technological aspects of AI but also the cultural and philosophical frameworks that shape our understanding of intelligence. His analysis offers what might be called non-consensus, or contrarian, insights. It also underscores the importance of rethinking concepts and terminologies that may not adequately capture the complexities of emerging technologies and their societal implications.
One might wonder why I haven't used Claude 3 to analyze a blog post about Claude 3. The only explanation I have is that I have not yet bought a subscription to Claude 3, though I do have one for ChatGPT.
Thanks for this analysis. I have been listening to the 2021 Reith Lecture series, which discussed just this issue. For those who are interested, ChatGPT gives this summary: "The 2021 Reith Lecture series, delivered by AI expert Stuart Russell, centered on the theme 'Living With Artificial Intelligence,' offering a deep dive into AI's societal impacts and future implications. Russell, a prominent figure in the field from the University of California, Berkeley, tackled key issues across four lectures: the evolution and perception of AI, its use in military applications, its economic implications, and a proposed new model for AI development focused on ensuring beneficial outcomes for humanity. Through these discussions, Russell aimed to enlighten the public on AI's transformative potential while addressing the ethical, economic, and existential questions it poses for the future of human society." Given the current state of play of LLMs, 2021 seems a long time ago. Still, they are a very interesting set of lectures.