AI Link dump: AI & creativity; AI regulation & political economy; AI & power generation
Will AI steal jobs from creative people? Will AI regulations be subverted by bad political actors? Where will the power required for advanced AI computing clusters come from?
The world does not owe you a living
The world does not owe you a living. I mention this not because it’s obvious, but rather because, for many people, it does not seem to be obvious. The latest contretemps to arise from OpenAI is its CTO, Mira Murati, saying “some creative jobs maybe will go away [due to AI], but maybe they shouldn’t have been there in the first place.” She said this at a talk she gave at Dartmouth, her alma mater. You can see the whole video here:
If you want to see the specific clip of her saying the quote I give above, you can see this tweet. A lot of people objected to this claim. Their ire seems aimed at AI more than anything else: AI, they claim, is not creative in the way that people are creative. That may be true, at least for now. AI may suck at creativity now and it may get better at creativity in the future, but all of that is irrelevant: most people are not sufficiently creative to warrant earning a living from creativity. The world doesn’t owe you a living. I suppose the optimistic spin here would be that AI will help more people be more creative. Whether or not that optimistic spin turns out to be true, it does not refute my central claim, which is that most people are not sufficiently creative to warrant earning a living from creativity. AI has little to do with the fact that most people can’t earn a living from creative pursuits: the world does not owe you a living.
AI regulation and political economy
California introduced a bill, dubbed SB 1047, meant to regulate AI. One take on the bill is from Zvi Mowshowitz, which you can read here. His analysis seems reasonable enough. Dean Ball disagrees. He doesn’t necessarily disagree with Zvi’s analysis, which he doesn’t discuss. Rather, he argues that the ultimate effects of a bill such as SB 1047 are not necessarily benign:

During training, you will need to a) ensure no one can hack it (while it remains in your possession), b) make sure it can be shut down, c) implement covered guidance (here meaning guidance issued by the National Institute of Standards and Technology and by the newly created Frontier Model Division, as well as “industry best practices”), and d) implement a written and separate safety and security protocol which can provide reasonable assurance that the model lacks hazardous capabilities, or — if it has them — that it can’t use them. You will also need to include the testing procedure you will use to identify the hazardous capabilities — and say what you will do if you find them. Notably, the bill doesn’t specify what any of this looks like. Developers create and implement the plans; the government does not dictate what they are.
Hazardous capabilities are set at an extremely high threshold. We are not talking about hallucinations, bias, or Gemini generating images of diverse senators from the 1800s — or even phishing attacks, scams, or other serious felonies. The bill specifies hazardous capabilities as the ability to directly enable a) the creation or use of weapons of mass destruction; b) at least $500 million of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents; c) the same amount of damage, performed autonomously, in conduct that would violate the Penal Code; or d) other threats to public safety of comparable severity.
One of my central concerns with California’s SB 1047—and all regulation of AI models rather than people’s conduct with AI—is that over time, any model-based regulation will be abused by the political system. No matter how well-written or well-intentioned model-based regulation is, I worry about this kind of policy not necessarily because of the policy per se but because of how I expect that policy to interact with our existing political and economic structures. In other words, it’s not so much the policy itself, but the political economy of the policy.
This is rather abstract and theoretical. Ball reifies his argument:
Let’s say that many parents start choosing to homeschool their children using AI, or send their kids to private schools that use AI to reduce the cost of education. Already, in some states, public school enrollment is declining, and some schools are even being closed. Some employees of the public school system will inevitably be let go. In most states, California included, public teachers’ unions are among the most powerful political actors, so we can reasonably assume that even the threat of this would be considered a five-alarm fire by many within the state’s political class.
As an employee of the Frontier Model Division, this is not so much your problem. Except for the fact that you regulate the same models being used to supplant the public school system. The Bitter Lesson suggests that over time, the largest, generalist AI models will beat models aimed at specific tasks—in other words, if educational services are to be provided by AI, it is quite likely to be the same frontier models that you, as an employee of the Frontier Model Division, were hired to regulate.
So perhaps you have an incentive, guided by legislators, the teachers’ unions, and other political actors, to take a look at this issue. They have many questions: are the models being used to educate children biased in some way? Do they comply with state curricular standards? What if a child asks the model how to make a bomb, or how to find adult content online? You, as the Frontier Model Division, don’t have the statutory authority to investigate these questions per se (at least not yet), but conceivably, you may be involved in these discussions. After all, you’re the agency with expertise in frontier models.
This is admittedly a speculative argument, and a lot of people are averse to speculative arguments. Nonetheless, it’s true that a lot of regulation that originally arose from the best of intentions has been abused by politically connected bad actors seeking to protect their turf. And, as Ball notes, few groups are more politically connected than teachers’ unions. Elsewhere, Ball suggests that lawyers and doctors, too, will increasingly agitate against more powerful AI, as those more powerful AIs encroach upon their turf. And guess what law these politically connected groups will use to advance their interests?
AI & power generation
One of the dangers of being exceptionally intelligent is that you can fail to understand a lot of quotidian facts about how the world operates. Leopold Aschenbrenner is obviously an extremely smart guy. He graduated valedictorian from Columbia at 19 years old, and recently published a 165-page collection of essays about AI, collectively called Situational Awareness. The main thrust of his argument is that much more powerful AI than what we have today is coming, and that it is coming relatively soon. He predicts that artificial general intelligence will arrive around 2027-28.
Whether you agree with his forecast or not, his essays are worth reading. I remain somewhat skeptical of his conclusions because it appears that we are running out of data to train models on. He acknowledges this objection in his essays and provides a coherent response to it, but I am not yet convinced. I wrote a bit about my skepticism here.
In any event, here’s German physicist Sabine Hossenfelder reacting to fellow German Leopold Aschenbrenner’s magnum opus:
Let us first look at what he says about energy limitations. The training of AI models in terms of computing operations takes up an enormous amount of energy. According to Aschenbrenner, by 2028 the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030, they’ll run at 100 gigawatts at a cost of a trillion dollars.
For context, a typical power plant delivers something in the range of 1 gigawatt or so. So that means building 10 power plants in addition to the supercomputer cluster by 2028. What would all those power stations run on? According to Aschenbrenner, on natural gas. “Even the 100 [gigawatt] cluster is surprisingly doable,” he writes, because that would take only about 1,200 or so new natural gas wells. And if that doesn’t work, I guess they can just go the Sam Altman-way and switch to nuclear fusion power.
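To put the scale Hossenfelder describes in concrete terms, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted above (10 GW in 2028, 100 GW in 2030, roughly 1 GW per typical power plant, and the ~1,200 natural gas wells attributed to Aschenbrenner); the per-well figure it derives is just an implication of those quoted numbers, not a claim about actual well productivity.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
# Assumption (from the quote, not verified here): a typical power plant
# delivers roughly 1 GW.

cluster_gw_2028 = 10     # Aschenbrenner's 2028 estimate, per the quote
cluster_gw_2030 = 100    # Aschenbrenner's 2030 estimate, per the quote
gw_per_plant = 1.0       # rough output of a typical power plant

plants_2028 = cluster_gw_2028 / gw_per_plant
plants_2030 = cluster_gw_2030 / gw_per_plant

# Implied generation per well if ~1,200 new natural gas wells are to
# supply the 100 GW cluster, as the quote attributes to Aschenbrenner.
wells_2030 = 1200
mw_per_well = cluster_gw_2030 * 1000 / wells_2030

print(f"~{plants_2028:.0f} one-gigawatt plants for the 2028 cluster")
print(f"~{plants_2030:.0f} one-gigawatt plants for the 2030 cluster")
print(f"implied ~{mw_per_well:.0f} MW of generation per gas well")
```

The implied ~83 MW of generation per well is exactly the sort of number one would want to sanity-check against real-world well output and plant siting timelines, which is where Hossenfelder's skepticism comes in.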
Her point here is that Aschenbrenner conveniently handwaves away the very real question of how advanced AI computing clusters will access the power they need. New power plants require a decade or more of planning and construction, with a full complement of regulatory requirements and impediments, from state and federal laws and regulations to mollifying the local communities on whose land the infrastructure will be built. This isn’t something that can just be magicked into being by invoking the AGI spirits. Aschenbrenner, like many equally brilliant people, confidently sees the future while ignoring the very quotidian obstacles that stand in its way.