Where is the profit in AI?
Token costs for large language models are fractions of a penny and declining. How will OpenAI and its competitors make any money?
If “scale is all you need” for AGI, then training and inference costs for AI only increase, because more scale requires ever more powerful GPUs. But the output of that training, large language models like OpenAI’s GPT-4, is being commoditized. Further, tokens for these models now cost fractions of a penny, and OpenAI and its competitors have repeatedly cut those prices. (A token is the smallest unit of data input to or output from a large language model, and usage is billed per token.) There’s not much room left for token costs to fall: we are rapidly heading toward a world in which computational intelligence is effectively free.
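To make the “fractions of a penny” point concrete, here is a minimal sketch of how per-token billing works. The prices below are illustrative assumptions, not any provider’s actual rates:

```python
# Back-of-the-envelope cost of a single LLM API call, billed per token.
# The per-million-token prices here are hypothetical placeholders;
# real prices vary by provider and model.
PRICE_PER_INPUT_TOKEN = 0.50 / 1_000_000   # assumed $0.50 per million input tokens
PRICE_PER_OUTPUT_TOKEN = 1.50 / 1_000_000  # assumed $1.50 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call under per-token billing."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

# A 1,000-token prompt with a 500-token reply costs $0.00125,
# i.e. about an eighth of a cent.
print(f"${request_cost(1_000, 500):.5f}")
```

At these kinds of rates, even millions of requests per day add up to modest revenue per user, which is the pricing-power problem the rest of this post is about.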
So how do you finance the expense of training ever-larger language models if token revenues trend toward zero? You need to build value-added services on top of the commoditized LLMs, and those services need sustainable pricing power. Microsoft and Google have entire user bases of billions to build for, plus wide communities of developers who want to build on top of LLMs. OpenAI has its ChatGPT product, and it is trying to cultivate a developer community around ChatGPT with its GPT Store. Further, all of these companies offer an API product that lets developers build directly on top of their LLMs. Meta’s Llama is open source, and Meta has billions of users, so its offering is attractive to developers as well. It’s unclear what Anthropic and Mistral bring to the table. There are many other companies and projects building different flavors of LLMs, but the ones mentioned here are the major players.