What if we're building the wrong kind of AI?
Ilya Sutskever's case that AGI won't come from a bigger LLM, and why that possibility destabilizes every GPU forecast
TL;DR
Ilya isn’t warning about alignment. He’s arguing that trillions in GPU capex may be aimed at the wrong paradigm.
His premise: transformers interpolate; humans generalize. The gap shows up in sample efficiency, robustness, and continual learning.
Bigger models now show diminishing returns: scaling improves benchmarks but not true generalization; failures get s…

