Why don't AI doomers short the market?
A lot of effective altruists believe that advanced AI augurs doom, yet they don't short the financial markets
Here's a puzzle: many people in the Effective Altruism (EA) movement are AI doomers, but they don’t make financial bets in support of their bearish stance. In other words, if you think that AI will destroy the world, then why not be short the financial markets? Tyler Cowen recently asked this question. He admires the EA movement’s eagerness to learn about a wide range of fields—animal welfare, global health, AI safety, and more. Yet, he finds it puzzling that their engagement with finance is almost nonexistent. This gap, he argues, is a missed opportunity to align their predictions with their actions in a meaningful way.
Cowen’s main argument is simple but provocative: the EA community seems to overlook finance as a tool for aligning their beliefs with their actions. Imagine if you genuinely believed in a significant chance of an impending AI disaster or some other catastrophic event. Wouldn’t it make sense to put your money where your mouth is? Cowen suggests shorting the market as one such strategy. But when he brings this up, the responses he gets are anything but encouraging. As Cowen says, “If you pose the ‘have you thought through being short the market?’ question, one hears a variety of answers that are what I call ‘first-order wrong.’”
To provide some context, shorting the market means betting that the value of certain assets—such as stocks or indexes—will go down. Essentially, you borrow shares, sell them at the current price, and then hope to buy them back later at a lower price, pocketing the difference.1 It’s a way to profit when things go wrong, and it’s often seen as a contrarian or even pessimistic stance. For those who believe that advanced AI poses an existential risk to humanity, shorting the market would be a logical financial expression of this belief. It’s betting that catastrophic events will cause significant market downturns.
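To make the arithmetic concrete, here is a minimal sketch in Python, using made-up prices, of how the profit and loss on a short sale works (ignoring margin requirements and taxes, and treating the borrow fee as a flat charge):

```python
# Minimal sketch of short-sale P&L, using made-up numbers for illustration.
# You borrow shares, sell them now, and buy them back later.

def short_sale_pnl(shares: int, sell_price: float, buyback_price: float,
                   borrow_fee: float = 0.0) -> float:
    """Profit (or loss) on a short position, ignoring margin and taxes."""
    proceeds = shares * sell_price          # cash received when selling the borrowed shares
    cost_to_cover = shares * buyback_price  # cash paid to buy the shares back
    return proceeds - cost_to_cover - borrow_fee

# Short 100 shares at $50; the price falls to $35, and you pocket the difference.
print(short_sale_pnl(100, 50.0, 35.0))   # 1500.0
# If the price instead rises to $80, the position loses money; losses are unbounded in principle.
print(short_sale_pnl(100, 50.0, 80.0))   # -3000.0
```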
This context matters because Cowen argues that if the EA community truly believes in the likelihood of AI-induced doom, they should be willing to explore financial strategies that reflect this conviction. Market prices, in a way, serve as collective expressions of belief about the future. If a large enough group of people with genuine knowledge and conviction about an AI apocalypse were to act on these beliefs by shorting the market, we might expect to see this reflected in market behavior. The fact that such market movements aren’t happening implies either that the broader investment community doesn’t believe in these risks or that EAs aren’t effectively translating their beliefs into tangible financial actions.
As Cowen puts it, market prices could serve as a kind of “testing referendum” on these predictions. After all, markets aggregate a wide range of information. If markets aren’t pricing in an impending catastrophe, it might be time to reassess those predictions. Cowen argues:
Once shorting the market even enters serious contemplation (never mind actually doing it), you also start seeing current market prices as a kind of testing referendum on various doomster predictions. And suffice to say, market prices basically offer zero support for all of those predictions.
But why haven’t EAs gone deeper into finance? One reason is simply that shorting the market is risky. It’s not just about understanding finance intellectually but also about being willing to bear potentially huge losses. There’s also an ethical dimension here: profiting from a predicted catastrophe can feel a bit like rooting for it, which could make those dedicated to minimizing suffering uncomfortable. That discomfort with benefiting from doom may itself be a deterrent.
Moreover, many EAs might see other kinds of interventions, such as funding AI safety research or pushing for regulatory changes, as more effective uses of their time and resources. (They do, after all, call themselves effective altruists!) EAs might consider financial maneuvering to be a sideshow rather than a real contribution to their cause.
Cowen also notes that AI will soon be sophisticated enough to tell us how to short the market. This raises an interesting question: if AI is so powerful and capable, why aren’t EAs using it to help make smarter financial decisions? If they’re concerned about AI outthinking humanity, why not leverage that very intelligence for something concrete, like optimizing market positions? This inconsistency suggests that EAs may need to reconcile their views about AI’s transformative power with practical, actionable steps beyond theoretical debates.
It’s also important to recognize the psychological factors at play. Cognitive biases, fear, and discomfort can inhibit even the most rational people from acting on their beliefs—especially when the action involves complex and risky financial strategies. EAs excel at abstract, probabilistic thinking, but there’s a big difference between understanding risk and actually putting money on the line. It’s much easier to debate AI alignment issues than to navigate the high-stakes world of financial markets.
Cowen’s argument also touches on market efficiency. His assumption is that markets should reflect all known risks, but that claim is questionable. Markets are not always efficient, especially when dealing with rare, hard-to-quantify events. The 2008 financial crisis is a good example of markets failing to price in massive risk.2 So perhaps the lack of market movement related to AI risks isn’t because these risks aren’t real, but because they’re hard for traditional financial systems to understand and price correctly.
Ultimately, Cowen’s critique isn’t just about finance. It’s about how deeply people are willing to live their beliefs. He urges EAs to learn more basic finance, arguing that it could help them align their actions with their worldview. As Cowen puts it:
I nonetheless would urge many EA, rationality, and AI doomster types to learn more basic finance. It can liberate you from various mental chains, and it will be useful for the rest of your life, no matter how long or short that may be.
In the end, Tyler Cowen is calling for consistency. If EAs are willing to dive deeply into AI risk, global health, and animal welfare, why stop short of finance? Financial strategies like shorting the market might not be for everyone, but understanding them could open up new avenues for impact. Even if the only benefit is a better understanding of how beliefs about risk align with market expectations, it would be worthwhile. Cowen’s piece ultimately challenges EAs to expand their toolkit—to make sure it’s as versatile and robust as possible for tackling the world’s most pressing challenges.
There are other ways to short the markets, including: (a) synthetic shorts, in which you buy a put option and sell a call option with the same strike price and expiration date; (b) inverse ETFs and leveraged inverse ETFs; (c) bear spreads, in which you buy a put option at a higher strike price and sell a put option at a lower strike price, both with the same expiration date; (d) reverse convertible notes, structured products sold by banks that offer high yields along with downside exposure if the price of the underlying asset falls; (e) futures, in which you sell futures contracts on the index or asset you want to short; (f) contracts for difference (CFDs), which are illegal in the United States but legal elsewhere, in which you enter into a contract that pays the difference between the opening and closing prices of an asset; and (g) a pairs trade, in which you go long one stock and short a related stock or index.
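To illustrate two of these structures, here is a minimal sketch in Python of the expiration payoffs for the synthetic short in (a) and the bear put spread in (c), using hypothetical strikes and premiums and ignoring fees, margin, and early exercise:

```python
# Rough sketch of expiration payoffs for two of the structures above,
# with hypothetical strikes and premiums; ignores fees, margin, and early exercise.

def synthetic_short_payoff(spot_at_expiry: float, strike: float,
                           put_premium: float, call_premium: float) -> float:
    """Long put + short call at the same strike: payoff mirrors a short position."""
    long_put = max(strike - spot_at_expiry, 0.0) - put_premium
    short_call = call_premium - max(spot_at_expiry - strike, 0.0)
    return long_put + short_call

def bear_put_spread_payoff(spot_at_expiry: float, high_strike: float,
                           low_strike: float, net_premium: float) -> float:
    """Buy a put at the higher strike, sell a put at the lower strike: gain and loss are both capped."""
    long_put = max(high_strike - spot_at_expiry, 0.0)
    short_put = -max(low_strike - spot_at_expiry, 0.0)
    return long_put + short_put - net_premium

# With the index at 100 today, suppose a 100 strike for the synthetic short
# and 100/90 strikes for the bear put spread:
for spot in (70, 90, 100, 110, 130):
    print(spot,
          round(synthetic_short_payoff(spot, 100, put_premium=5, call_premium=5), 2),
          round(bear_put_spread_payoff(spot, 100, 90, net_premium=3), 2))
```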
Obviously, once the extent of the 2008 Global Financial Crisis became evident, markets fell. But relatively few people correctly bet that the market would fall. The movie The Big Short, of course, told the stories of some of the prescient financial-market players who did foresee the problems. But, again, they were a minority of participants. The markets could have just as easily kicked the can down the road for a later reckoning.
Many AI doomers are more scared by the "Skynet scenario" - from the Terminator movies - a point-event where a human-hating AI wrecks civilization in a short time, as opposed to a "slow doom" where humans are reduced to keeping the datacenters where our AI overlords live in good repair...
A "slow doom" probably can be shorted, while a point-event can't be, as counterparty risk is obviously enormous if the evil AI decides to launch everyone's nukes or something similar. And until the Skynet scenario happens, the market may well be booming as the AI goodness gooses the markets.
And yes, long-term shorts where the bad scenario develops slowly are highly expensive and you may be wiped out of your position the day before the market responds to Slow Judgment Day...
Very thought-provoking. Thanks for bringing this up, as it deserves more conversation. I think there are a few issues with Tyler’s argument.
While I don't expect this personally, many of those who believe AI doom is nearly certain expect that it will come suddenly, unexpectedly, and in a way that renders any market shorts effectively moot. In effect, they see it as an extinction-level event that invalidates most financial strategies.
For others, these risks may seem decades away, and long-term puts would likely be cost-prohibitive, with premiums eroding over time.
On the other hand, the median AI-concerned person might foresee a future with enormous positive gains from AI while still considering catastrophic risks serious enough to address. In that case, shorting the market isn’t quite the right fit; it’s more like nuclear risk: a significant concern that justifies action even if the likelihood remains relatively low.
As to your point, in these scenarios it seems rational for those concerned to fund research and policy efforts rather than make financial bets, as those efforts seem better aligned with the possible time horizons and forms that existential risks might take.