Jevons’ Revenge: The Coming AI Supercycle
How cheaper inference expands demand and drives both local and hyperscale growth
Over the next five years we’ll see a lot of headlines about the explosion of on-device inference: iPhones running quantized LLaMA models, Snapdragon PCs touting AI copilots, Android handsets churning out summaries and translations without ever touching the cloud. To a casual observer, this looks like bad news for datacenters. If AI runs…
