2 Comments

Many AI doomers are more scared of the "Skynet scenario" from the Terminator movies - a point event in which a human-hating AI wrecks civilization in a short span - than of a "slow doom" in which humans are reduced to keeping the datacenters that house our AI overlords in good repair...

A "slow doom" probably can be shorted, while a point-event can't be, as counterparty risk is obviously enormous if the evil AI decides to launch everyone's nukes or something similar. And until the Skynet scenario happens, the market may well be booming as the AI goodness gooses the markets.

And yes, long-term shorts against a slowly developing bad scenario are very expensive, and you may be wiped out of your position the day before the market finally responds to Slow Judgment Day...
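
To make that carrying cost concrete, here is a minimal sketch with entirely hypothetical numbers (a $100k index short, 50% posted collateral, a 25% maintenance-margin floor, and an assumed 20%/yr boom while the crash fails to arrive). It is a simplified model, not any broker's actual margin math, but it shows why the short can be forced closed before doom ever pays off:

```python
# Sketch: a $100k short against an index that keeps booming before the crash arrives.
# All numbers are hypothetical; the point is that the short bleeds every year the
# AI boom continues, and a margin call can end the trade before any payoff.

initial_short = 100_000      # dollars of index sold short
collateral = 50_000          # cash posted alongside the short (50% of the position)
annual_boom = 0.20           # assumed 20%/yr index gain while AI gooses markets
maintenance_margin = 0.25    # equity must stay above 25% of the position's value

position_value = initial_short
equity = collateral
for year in range(1, 6):
    gain = position_value * annual_boom   # how much the index rose this year
    position_value += gain                # the short grows in dollar terms...
    equity -= gain                        # ...and that rise is your realized loss
    ratio = equity / position_value
    print(f"year {year}: position ${position_value:,.0f}, "
          f"equity ${equity:,.0f}, margin ratio {ratio:.1%}")
    if ratio < maintenance_margin:
        print("  -> margin call: position liquidated before any crash pays off")
        break
```

The exact thresholds vary by broker, but the asymmetry is the point: every boom year is a realized loss, while the payoff only arrives if you are still solvent when doom does.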


Very thought-provoking. Thanks for bringing this up, as it deserves more conversation. I think there are a few issues with Tyler’s argument.

While I don't expect this personally, many of those who believe AI doom is nearly certain expect it to arrive suddenly, unexpectedly, and in a way that renders any market shorts effectively moot. In short, they see it as an extinction-level event that invalidates most financial strategies.

For others, these risks may be decades away, and long-term puts would likely be cost-prohibitive, with premiums eroding over time.
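
As a rough illustration of that erosion, here is a sketch that prices a deep out-of-the-money put with the textbook Black-Scholes formula. The spot, strike, volatility, and rate are assumed values chosen only for the example; the takeaway is that longer-dated protection costs more up front and melts away if the crash doesn't come on schedule:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot: float, strike: float, vol: float, rate: float, years: float) -> float:
    """Black-Scholes price of a European put option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return strike * exp(-rate * years) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# Hypothetical inputs: index at 100, protection kicking in at 70 (a 30% drawdown),
# 20% implied volatility, 4% risk-free rate.
spot, strike, vol, rate = 100.0, 70.0, 0.20, 0.04
for years in (1, 2, 5, 10):
    premium = bs_put(spot, strike, vol, rate, years)
    print(f"{years:>2}-year 30%-OTM put: premium {premium:.2f} "
          f"({premium / spot:.1%} of the hedged notional)")

# Premium erosion: the same 2-year put one year later, if the index drifted up 10%.
remaining = bs_put(110.0, strike, vol, rate, 1.0)
print(f"2-year put bought for {bs_put(spot, strike, vol, rate, 2.0):.2f}, "
      f"worth {remaining:.2f} a year later if no crash has come")
```

Rolling that kind of protection for a decade or more means paying the premium again and again, which is what makes the strategy feel cost-prohibitive to people whose timelines are long.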

On the other hand, the median AI-concerned person might foresee a future with enormous positive gains from AI while still considering catastrophic risks serious enough to address. In that case, shorting the market isn't quite the right fit; it's more like nuclear risk - a significant concern that justifies action, even if the likelihood remains relatively low.

As to your point, in these scenarios it seems rational for those concerned to fund research and policy efforts rather than make financial bets, since those efforts are better aligned with the time horizons and forms that existential risks might take.
