Is scale all you need?
Are larger models all you really need to get to AGI? Or is another approach required?
With the recent release of Google’s Gemini 1.5 and OpenAI’s Sora, I’ve been thinking a bit about “scale is all you need”: the hypothesis that, roughly speaking, the more data you throw at an AI model during training, the better its output. I think that, to the extent that “scale is all you need” holds true…

