Why the effective altruists lost the AI war
When you fail to use simple language to communicate complex ideas, people ignore you
I’m going to explain why the effective altruists lost control of OpenAI by framing it in terms of Trump’s political ascendancy. The comparison will seem obscure or possibly irrelevant at first, but bear with me. Do not infer from what follows that I personally support Trump; this post is not about my political views. Rather, it shows that understanding why Trump has been politically successful can tell us why the effective altruists failed in their arguments against AI accelerationism.
Understanding Trump’s Rise
Of the many explanations offered for Trump’s surprising political ascendancy since 2015, the most convincing is the simplest: his base of supporters understands the language he uses. Consider the following:
Simplicity and Directness: Trump uses simple, straightforward language. He favors simple messaging over complex political jargon that can alienate or confuse some voters. His direct manner of speech is seen as a form of honesty or authenticity, which resonates with those who are skeptical of traditional political discourse.
Emotional Appeal: Trump uses emotional language to stir up strong feelings. By tapping into emotions like frustration, hope, or anger, he creates a sense of shared experience and understanding with his audience. Emotional connections rally support more effectively than fact-based arguments.
Repetition: Trump repeats key phrases and slogans (like “Make America Great Again!”). This repetition aids message retention and creates catchy, memorable hooks that supporters rally around. It’s a classic marketing technique that fosters a sense of familiarity and solidarity.
Us vs Them Rhetoric: By clearly delineating between ‘us’ (his supporters) and ‘them’ (opponents), Trump fosters a strong group identity among his base. This approach can strengthen in-group loyalty, and it creates a common cause against a shared adversary.
Populist Themes: Trump frequently speaks on themes that resonate with populist sentiments, like anti-establishment rhetoric, criticism of elites, and advocating for the ‘forgotten’ people. This aligns well with voters who feel neglected or betrayed by traditional politicians and institutions.
Narrative Building: Trump skillfully constructs narratives that resonate with his base. He frames situations in ways that align with the worldview of his supporters, often simplifying complex issues into clear-cut stories of right and wrong.
Confidence and Assertiveness: He speaks confidently and assertively, which his supporters see as a sign of strength and decisiveness. This appeals to voters who prioritize strong leadership and clear, firm decision-making.
Personal Branding: Trump’s manner of speaking reinforces his personal brand as a straightforward, no-nonsense outsider. This differentiates him from typical politicians and appeals to voters tired of conventional political approaches.
Trump’s communication style effectively engages his base by being simple, direct, emotionally charged, and consistent with populist themes. His approach fosters a strong emotional connection, creates a clear group identity, and appeals to those who value straightforward, decisive leadership.
How this relates to Effective Altruists
Now, if we compare all of these aspects of Trump’s communication style to those of the effective altruists who called for OpenAI to curtail its development of artificial intelligence, we quickly see the problem. Here’s Noah Smith explaining this in detail:
I think there’s a deep, fundamental reason that AI-focused EA blew its big chance. AI risk thinkers were always able to come up with lots of scary sci-fi scenarios about how generative AI could cause a global calamity. Those scenarios weren’t obviously impossible; it’s clear that they’re worth worrying about.
But when it came to recommendations for policy to diminish the risk of these scenarios becoming reality, the AI risk people were always short on actionable ideas…. The people who are scared of AI doomsday risk tend to believe in a “fast takeoff” in which AI goes very very rapidly from the GPT-style chatbots we know today to something more like Skynet or the Matrix. It’s basically a singularity argument.
…
And “shut it all down” is what the OpenAI board seems to have had in mind when it pushed the panic button and kicked Altman out. But the effort collapsed when OpenAI’s workers and financial backers all insisted on Altman’s return. Because they all realized that “shut it all down” has no exit strategy. Even if you tell yourself you’re only temporarily pausing AI research, there will never be any change — no philosophical insight or interpretability breakthrough — that will even slightly mitigate the catastrophic risks that the EA folks worry about. Those risks are ineffable by construction. So an AI “pause” will always turn into a permanent halt, simply because it won’t alleviate the perceived need to pause.
And a permanent halt to AI development simply isn’t something AI researchers, engineers, entrepreneurs, or policymakers are prepared to do. No one is going to establish a global totalitarian regime like the Turing Police in Neuromancer who go around killing anyone who tries to make a sufficiently advanced AI. And if no one is going to create the Turing Police, then AI-focused EA simply has little to offer anyone.
I see this as another case of a modern intellectual movement that is far better at identifying problems than it is at suggesting solutions. My prediction is that basically all of these movements will attract a lot of initial attention, but then gradually be ignored over time. The AI scenarios that EA folks suggest certainly are scary. But until EA comes up with some solution other than “shut it all down”, the people developing AI are simply going to pray for the serenity to accept the things they cannot change.
The crux of Noah’s argument is that intellectuals are great at identifying problems but rarely able to propose solutions to them. And, for most people, that’s a non-starter. It’s great to tart up your rhetoric in elegant intellectual arguments, but for most people to pay attention to you, you have to offer them something concrete in exchange for their time. There is a reason that Bill Clinton’s political advisor James Carville repeatedly invoked the campaign theme “It’s the economy, stupid.” Bill Clinton and his coterie of policy advisors were all incredibly smart and accomplished people, but they wouldn’t have won a presidential election if they couldn’t concretely tie their high-minded ideals to quotidian concerns. Few things are more concrete and quotidian for the average voter than the economy. And in 1992, when Bill Clinton was running against the incumbent, George H.W. Bush, the country was in the middle of a recession. “It’s the economy, stupid” had an emotional resonance akin to what “Build the wall!” had in 2016.
The effective altruists have none of this. They offer cerebral, obscure arguments. Several years ago, I spent a lot of time going to blockchain and cryptocurrency events in New York City. I was curious about the industry. I would read a lot of high-minded claims about how decentralization was going to change the world. I’d attend talks full of impenetrable jargon like “Byzantine fault tolerance,” “cryptographic hash function,” “Merkle tree,” etc. It was rapture for nerds.
The problem that both the crypto-cognoscenti and the effective altruists have, and which is true of intellectuals in general, is that they have no way to make their arguments salient to a popular audience. Theirs is not a populist movement. Theirs is an intellectual movement. And the one thing that all intellectuals struggle with is understanding that the manner in which they think and speak is not legible to most people.
It’s considered gauche to comment on your intelligence, especially in relation to others’ intelligence. And yet, in order for intellectuals to make headway with their arguments, they have to at least tacitly acknowledge that the arguments they present aren’t legible to most people. If effective altruists want to be, well, effective, they have to make arguments that are less abstract and more concrete. In Noah Smith’s terms, they have to offer concrete solutions for the problems they’ve identified. They have failed to do so, and they lost control of OpenAI as a result.
Or, perhaps, they are unpersuasive because they’re simply wrong.