Interesting links for Jan 13, 2023
Using AI to improve AI writing; do markets anticipate AGI any time soon; AI augurs the return of the Socratic method; books about AI
Everyone’s talking about AI these days. You may have noticed that I’ve become besotted. AI looks to be one of those transformative technologies whose ultimate impact may exceed its hype. The jury is still out on decentralization and blockchain, but AI is something real and tangible. One can intuit real-world use cases for ambient AI. One can easily envision a period a few years hence in which a future version of ChatGPT is on your mobile phone, and you speak to it as you do your intern or assistant now: “Build me an Excel formula which calculates a sliding fee schedule based on the following inputs….” Or any of an infinite number of other commands.
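For concreteness, here is the kind of logic such a prompt would be asking for, sketched in Python rather than Excel; the income tiers and discount fractions are invented purely for illustration.

```python
# A hypothetical sliding fee schedule of the sort the prompt describes.
# The income tiers and discount fractions below are made up for illustration.
def sliding_fee(income: float, base_fee: float) -> float:
    """Return the fee owed after applying an income-based discount."""
    tiers = [
        (25_000, 0.25),  # income up to $25k pays 25% of the base fee
        (50_000, 0.50),  # up to $50k pays 50%
        (75_000, 0.75),  # up to $75k pays 75%
    ]
    for ceiling, fraction in tiers:
        if income <= ceiling:
            return base_fee * fraction
    return base_fee  # above the top tier pays the full fee

print(sliding_fee(40_000, 200.0))  # -> 100.0
```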
The computing will be ambient, it will be cheap, and it will be everywhere. And this will drastically reshape society in ways that we can’t yet anticipate.
It does not follow, of course, that it will be easy to make money investing in it. I expect VCs to pour tens of billions into all manner of AI tech startups over the next few years. I further expect that much of generative AI tech will be commoditized and open-sourced over time, which means that prices, and so margins, will collapse. Barriers to entry will be low, and competitors will sprout like fungus after a spring rain. It will be no fun if you’re an AI entrepreneur looking to build a latter-day Google. But for consumers, it will be among the most amazing times. The best of times, the worst of times. The expectations are great.1
That preamble out of the way, let’s take a look at some of the links I’ve collected recently.
Using AI to improve AI-generated content
I wrote yesterday about my attempt to train ChatGPT to give me something other than dull and insipid prose in response to my prompt, an attempt that largely failed. This technology is improving rapidly, of course, and I suspect that GPT-4, supposedly coming out in a few months, will be even better at writing text than ChatGPT is. I tried to hack the system by iterating upon my prompt, instructing the machine to do certain things that I thought would make its writing more fluid and evocative. More human, in a word.
At some point generative AI will become so good that it may well be impossible to distinguish human-generated from computer-generated text. We’re not there yet, but the author of this piece notes:
As generative AI technologies improve, it becomes more important to be able to detect AI-generated content. This is necessary for a myriad of reasons, such as preventing academic dishonesty (e.g., writing essays), detecting fake product reviews, identifying toxic messages, and combating the spread of disinformation and fake news.
Detecting AI-generated text, however, is not straightforward. One method of doing so is using machine learning algorithms to identify patterns common in AI-generated text…. As MIT Tech Review notes, “detection models just can’t keep up” with the improving capabilities of AI-generated text. To reduce the risks posed by machine-generated text, it is imperative that researchers and practitioners continue to improve and refine techniques for detecting AI-generated content.
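To make “machine learning algorithms to identify patterns” concrete, here is a minimal sketch of one such detector: a bag-of-words classifier built with scikit-learn. The tiny labeled corpus is invented for illustration; a real detector would train on a large corpus and, per the quote, would still struggle to keep up.

```python
# Minimal sketch of a pattern-based AI-text detector: a bag-of-words
# classifier trained on labeled examples. The toy corpus is invented;
# real detectors need far more data and still lag the generators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "In conclusion, it is important to note that AI has many benefits.",
    "Overall, this product offers a wide range of useful features.",
    "honestly the battery died twice on the flight, never again",
    "My cat knocked the router off the shelf mid-essay. Twice.",
]
labels = [1, 1, 0, 0]  # 1 = AI-generated, 0 = human-written

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Estimated probability that a new sentence is AI-generated.
print(detector.predict_proba(
    ["It is worth noting that there are several key considerations."]
)[0][1])
```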
The near-term future will be very weird.
Do markets expect unaligned AGI risk?
The efficient market hypothesis (EMH) asserts that financial markets factor in all available information when pricing assets. There are a variety of interpretations of it: strong, semi-strong, and weak. In the context of artificial general intelligence (AGI), any of the three forms of the EMH would seem to suggest that financial markets ought to react to AGI expectations. And if the market intuited, via its collective intelligence, that AGI is imminent, certain things ought to happen: most obviously, real interest rates should rise. We have not seen that happen. Therefore, if we believe that the EMH is true (in any of its three forms), AGI is not imminent.
The link I give above is to Marginal Revolution. The essay in question is available here. Quoting part of what Tyler Cowen quotes on his blog:
In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:
1. Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.
2. Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.
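The mechanism connecting timelines to rates is worth spelling out. The standard way to link growth expectations to interest rates is the Ramsey rule, r = ρ + θg: the real rate equals the pure rate of time preference plus expected consumption growth, scaled by how much people dislike uneven consumption. Here is a back-of-the-envelope sketch; every parameter value is an illustrative assumption, not an estimate.

```python
# Back-of-the-envelope Ramsey rule: r = rho + theta * g.
# Every parameter value below is an illustrative assumption, not an estimate.
def real_rate(rho: float, theta: float, g: float) -> float:
    """Real interest rate implied by time preference (rho), aversion to
    uneven consumption (theta), and expected consumption growth (g)."""
    return rho + theta * g

baseline = real_rate(rho=0.01, theta=1.0, g=0.02)  # ordinary ~2%-growth world
agi_soon = real_rate(rho=0.01, theta=1.0, g=0.30)  # explosive-growth world

print(f"baseline real rate: {baseline:.1%}")          # 3.0%
print(f"short-timelines real rate: {agi_soon:.1%}")   # 31.0%
```

Observed 30- to 50-year real rates look like the first number, not the second. That gap is the entire argument.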
I don’t pretend to know which of these two possibilities is more likely to be correct, but it seems like the kind of thing we should figure out. Fifty years from now is 2073, which would still leave 27 years for the most important century to assert itself.
The return of the Socratic method
Scott Belsky argues that the rise of generative AI tools like ChatGPT augurs a return to the Socratic method:
While traditional, memorization-driven, arithmetic-heavy (industrial era) education is already widely criticized, the prime elements of education (textbooks, linear learning, essay writing, etc.) are all on the brink of being disrupted….ChatGPT has done to writing what the calculator did to arithmetic. But what other implications can we expect here?
[One implication is t]he return of the Socratic method, at scale and on-demand. The Socratic Method, named after the Greek philosopher Socrates, is anchored on dialogue between teacher and students, fueled by a continuous probing stream of questions. The method is designed to explore the underlying perspectives that inform a student’s perspective and natural interests. I experienced a couple years of this during business school…, and loved the student-directed nature of learning rather than being lectured at. The framework felt optimized for surfacing relevance and stoking organic intrigue. Imagine history “taught” through a chat interface that allows students to interview historical figures. Imagine a philosophy major dueling with past philosophers—or even a group of philosophers with opposing viewpoints.
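None of this requires waiting on new technology, either. ChatGPT itself has no public API yet, but here is a hypothetical sketch of a Socrates-persona tutor built on OpenAI’s existing text-davinci-003 completion endpoint; the persona prompt and the dialogue loop are my own invention, not Belsky’s.

```python
# Hypothetical Socratic tutor: a persona prompt plus a dialogue loop on top
# of OpenAI's completion API. The persona text is invented for illustration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PERSONA = (
    "You are Socrates. Never lecture. Respond to the student with one "
    "short probing question that exposes an assumption in what they said.\n"
)

dialogue = PERSONA
while True:
    student = input("Student: ")
    if not student:  # empty line ends the session
        break
    dialogue += f"Student: {student}\nSocrates:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=dialogue,
        max_tokens=100,
        temperature=0.7,
        stop=["Student:"],  # keep the model from writing both sides
    )
    reply = response["choices"][0]["text"].strip()
    print("Socrates:", reply)
    dialogue += f" {reply}\n"
```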
If this all sounds fanciful to you, you may be someone who puts great stock in credentials, and who thinks that the way people ought to learn is to be lectured to by credentialed experts. I previously wrote about how ChatGPT allows one to rapidly scale learning curves and integrate new knowledge. This will be hard for conventional people to understand and accept. But the rise of generative AI tools augurs an era in which the credentialed have much less control over what students learn and how they learn it. One of the interesting conflicts arising from the commoditization of generative AI will be the one between educational traditionalists and AI natives.
The best books on artificial intelligence
If you’re looking for some books to read about artificial intelligence, this list seems pretty good. It turns out I’ve already read two of them:
Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom
Thinking, Fast and Slow, by Daniel Kahneman
I picked up Pedro Domingos’ The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World.
Kudos to you if you get the joke/allusion.