5 interesting links for May 16th 2023
Distribution matters for AI; AI & explosive economic growth; appropriate responses to AI risk; AutoGPT, an autonomous AI agent on your laptop; AI is people, not software
Following are five interesting links which I’ve collected recently. These links are about AI, but as AI seems to be touching approximately everything, these links are really about the world.
Distribution matters
It’s quickly becoming evident that the companies with pre-existing distribution networks will fare best during the AI ramp-up we find ourselves in. This is in part because companies with pre-existing distribution networks are larger than upstarts, and so have more cash with which to train AI models. But it’s also true that we don’t yet have a great understanding of what we can do with AI that is economically productive. Sure, we have an inkling of certain things: it can speed up coding, it can speed up copywriting, it can do legal scut work, etc. And while those are all important things, none of them amounts to an AI-driven business.
So we have a bifurcated AI market: on one side are the large tech incumbents racing to integrate AI tech into their products, and on the other are pure-play AI startups building AI-driven business models from scratch. The jury is out on which path will win over time. Thus far, the revenues seem to be accruing mainly1 to the incumbents. And, make no mistake: this is all very cool, very important, and will be a game changer for a lot of people.
Anyway, along those lines, Tanay Jaipuria surveys the current scene, and notes that a lot has changed since he did his last survey towards the end of 2022. Of particular note are the companies he chooses to highlight:
Microsoft
Meta
Amazon
Google
Apple
Save for a few other tech companies like Oracle, SAP, and Nvidia, these are the tech companies with the greatest distribution networks. That they’re also among the largest tech companies in the world is no accident. While smaller tech companies are in the mix too, like Notion, which is implementing AI features, or Quora, with its PoeBot2, most of the attention is being paid to the large incumbents.
AI and explosive economic growth
One of the things that AI-philes like to fantasize about is that as AI becomes more advanced, economic growth will pick up. And, we’re not talking about the normal 2%-4% growth that occurs in bull markets. We’re talking about 10%, 25%, or even 100% (or greater) growth over fairly short time periods. If—and it’s a big if—AI advances sufficiently to drive this kind of growth rate in our economy, the implications will be, to use a hackneyed word, profound.
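To put those numbers in perspective, a quick doubling-time calculation (my arithmetic, not Open Philanthropy’s) shows just how different these regimes are:

```python
# Doubling time for an economy growing at a constant annual rate g:
# t = ln(2) / ln(1 + g). Illustrative arithmetic only, not from the report.
import math

for g in (0.02, 0.04, 0.10, 0.25, 1.00):
    years = math.log(2) / math.log(1 + g)
    print(f"{g:>4.0%} annual growth -> GDP doubles every {years:4.1f} years")
```

At 2% growth the economy doubles roughly every 35 years; at 25% it doubles about every 3 years; at 100% it doubles annually. That is the gulf between “bull market” and “explosive.”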
Open Philanthropy published a report on this topic. It’s hard to extract a good quote from this 30-page paper due to the scope of the question it examines, but its concluding paragraphs give some indication of its underlying findings:
The standard story points to the constant exponential growth of frontier GDP/capita over the last 150 years. Theoretical considerations suggest 21st century growth is more likely to be sub-exponential than exponential, as slowing population growth leads to slowing technological progress. I find this version of the standard story highly plausible.
The explosive growth story points to the significant increases in GWP growth over the last 10,000 years. It identifies an important mechanism explaining super-exponential growth before 1900: increasing returns to accumulable inputs. If AI allows capital to substitute much more effectively for human labor, a wide variety of models predict that increasing returns to accumulable inputs will again drive super-exponential growth. On this basis, I think that ‘advanced AI drives explosive growth’ is a plausible scenario from the perspective of economics.
The paper lacks the breathless hype emanating from certain venture capitalists, and in that regard, it is a measured breath of fresh air. AI technology holds a lot of promise, but whether it will generate super-exponential economic growth is as yet unproven.
What’s the appropriate response to AI risk?
Some pundits, like Tyler Cowen, seem to think that AI-related risks are overblown, and that we (meaning the United States and the west more broadly) ought not squander our lead in AI to the Chinese by implementing any kind of AI research slowdown or pause. Other pundits, most famously Eliezer Yudkowsky, advocate bombing data centers as a way to slow the progress of AI.
But what if there is a third option? A path between these two extreme views? Well, here’s Leopold Aschenbrenner, writing about what he considers to be a more moderate path forward:
Sam Altman and Ilya Sutskever (father of deep learning, Chief Scientist at OpenAI) say that scalable alignment is a technical problem whose difficulty we really shouldn’t underrate and we don’t have good answers for; they take AI x-risk very seriously. The main people who built GPT-3, and authored the seminal scaling laws paper—Dario Amodei, Jared Kaplan, Tom Brown, Sam McCandlish, etc.—are literally the people who left OpenAI to found Anthropic (another AI lab) because they didn’t think OpenAI was doing enough about safety/alignment at the time. There are many more.
These people correctly recognized the state of AI progress in the past, and now they’re correctly recognizing the state of AI alignment progress. Right now, we don’t know how to reliably control AGI systems, and we’re not currently on track to solve this technical problem. That is my core proposition to you, Tyler, not some specific scifi scenario or nine-part argument. If AGI is going to be a weapon more powerful than nukes, I think this is a problem worth taking very, very seriously. (But it’s also a solvable problem, if we get our act together.)
A vivid memory I have of covid is how quickly people went from denial to fatalism. On ~March 8, 2020, the German health minister said, “We don’t need to do anything—covid isn’t something to worry about.” On ~March 10, 2020, the German health minister’s new position was, “We don’t need to do anything—covid is inevitable and there’s nothing we can do about it.” Yes, covid was in some sense inevitable, but there were plenty of smart things to do about it.
The choice on AI isn’t “ongoing stasis” or “take the plunge.” We’re taking the plunge alright. If you thought covid was wild, this will be much wilder yet. The question is what are the smart things to do about it. I’ve made some proposals—what are yours, Tyler? There’s a lot of idle talk all around; I'd like much more “rolling up our sleeves to solve this problem.”
Tutorial: How to install AutoGPT on your local machine
AutoGPT refers to a set of procedures that allow a large language model (LLM) to operate somewhat autonomously. Compared to the stunted ChatGPT, it holds a lot of promise and peril. Anyway, here’s a tutorial for setting it up on your local machine.
The introduction explains why this is a big deal:
After the launch of ChatGPT, AI has brought a monumental change in how we perceive computing. You can now train your AI chatbot with your own data and develop apps with natural language. Developers are now working on the next big thing—Autonomous AI Agent—a peek into the beginning of AGI (Artificial General Intelligence). Auto-GPT is one such tool that lets you achieve your goals by allowing LLMs to think, plan, and execute actions autonomously. You no longer need to add any input as the AI can think and take decisions rationally.
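To make “think, plan, and execute” less abstract, here’s a minimal sketch of the agent loop that tools like Auto-GPT implement. This is illustrative only, not Auto-GPT’s actual code: it assumes the 2023-era openai Python package, and the TOOLS registry and JSON action format are hypothetical stand-ins.

```python
import json
import openai

# Hypothetical tool registry; real Auto-GPT ships its own commands
# (web search, file I/O, etc.). These stubs just make the loop runnable.
TOOLS = {
    "search_web": lambda q: f"(stub) search results for {q!r}",
    "write_file": lambda text: "(stub) wrote file",
}

SYSTEM = (
    "You are an autonomous agent. Given a goal, respond ONLY with JSON: "
    '{"thought": "...", "tool": "search_web"|"write_file"|"finish", "arg": "..."}'
)

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Goal: {goal}"},
    ]
    for _ in range(max_steps):
        # Think/plan: ask the model for its next action.
        reply = openai.ChatCompletion.create(
            model="gpt-4", messages=history
        )["choices"][0]["message"]["content"]
        action = json.loads(reply)
        if action["tool"] == "finish":  # the model decides the goal is met
            print("Done:", action["thought"])
            return
        # Execute: run the chosen tool and feed the result back in.
        result = TOOLS[action["tool"]](action["arg"])
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": f"Result: {result}"})
    print("Stopped: step limit reached.")

run_agent("Summarize today's AI news into a file.")
```

The essential design choice is the loop itself: instead of a single prompt-and-reply, the model’s output is parsed into an action, the action’s result is fed back in, and the cycle repeats until the model declares the goal met (or a step limit trips).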
AI is not software. AI is people.
Ethan Mollick makes the somewhat controversial point that AI software operates less like traditional software, and more like people. But his insight is important: traditional software is, ideally, deterministic. Your bank software works the same way every time you use it, for reasons which ought to be self-evident. Software engineers don’t want to deal with uncertainty: if a user clicks a button, a certain action ought to always happen. (There’s a toy sketch of this contrast after the excerpt below.)
But LLMs operate differently. They’re stochastic. Every time you use an LLM you get a somewhat different result. And, at the risk of making a superficial observation: people are similarly unpredictable. Here’s what Ethan means in more detail:
What tasks are AI best at? Intensely human ones. They do a good job with writing, with analysis, with coding, and with chatting. They make impressive marketers and consultants. They can improve productivity on writing tasks by over 30% and programming tasks by over 50%, by acting as partners to which we outsource the worst work. But they are bad at typical machine tasks like repeating a process consistently and doing math without a calculator (the plugins of OpenAI allow AI to do math by using external tools, acting like a calculator of sorts). So give it “human” work and it may be able to succeed, give it machine work and you will be frustrated.
What sort of work you should trust it with is tricky, because, like a human, the AI has idiosyncratic strengths and weaknesses. And, since there is no manual, the only way to learn what the AI is good at is to work with it until you learn. I used to say consider it like a high school intern, albeit one that is incredibly fast and wants to please you so much that it lies sometimes; but that implies a lower ability level than the current GPT-4 models have. Instead, its abilities range from middle school to PhD level, depending on the task. As you can see from the chart, the capabilities of AI are increasing rapidly, but not always in the areas you most expect. So, even though these machines are improving amazingly fast, I have seen acclaimed authors and scholars dismiss AI because it is much worse than them. I think our expectations of AI need to be realistic - for now, at least (thank goodness!) they are no substitute for humans, especially for humans operating in the areas of their greatest strengths.
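To make the determinism contrast from above concrete, here’s a toy sketch (mine, not Mollick’s): a conventional function that always returns the same answer, next to an LLM call that, sampled at temperature > 0, generally won’t. It assumes the 2023-era openai package.

```python
import openai

def account_balance(deposits: list[float], withdrawals: list[float]) -> float:
    # Traditional software: identical inputs always yield identical outputs.
    return sum(deposits) - sum(withdrawals)

def llm_reply(prompt: str) -> str:
    # Stochastic software: the same prompt can yield a different reply each call.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling randomness; lower values are more repeatable
    )
    return response["choices"][0]["message"]["content"]

print(account_balance([100.0, 50.0], [30.0]))  # always 120.0
for _ in range(3):
    print(llm_reply("Describe a bank in one sentence."))  # likely three variants
```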
I say “mainly” here because obviously some AI upstarts, like OpenAI, seem to be generating appreciable revenues through pure AI offerings. But a lot of the pure-play AI startups seem to be offering merely shiny wrappers around AI tech, wrappers that are easily copied by the incumbents.
I admit here that I don’t really understand what PoeBot is supposed to be. Whatever it is, as a long-time user of Quora’s platform, I hope that PoeBot works for Quora, because whatever they have been doing prior to the release of PoeBot has not been working: Quora’s experience has degraded significantly over the years, as it has tried to appeal to a mass-market audience.