How will OpenAI grow its revenues?
Its competitors can afford to lose money on customer acquisition. That creates interesting market dynamics.
I’m pretty skeptical that OpenAI will be able to grow its revenues enough to justify its present valuation. There are just too many competing models out there with more or less equivalent functionality, and it’s very hard to compete against Google’s or Microsoft’s balance sheet. On the balance sheet point: both Google and Microsoft can fight a war of attrition by reducing token costs to near zero for a nearly indefinite period, bleeding other providers dry. Stated more plainly, they can subsidize customer acquisition costs with their large balance sheets in a way that OpenAI can’t.
This argument raises several pertinent points. Read on for an analysis of the dynamics at play, some potential strategic maneuvers OpenAI could try, and the broader implications for AI service providers.
Competitive landscape
Functional parity: The AI market is crowded, with numerous foundational models boasting similar functionalities and capabilities. This commoditization risk means differentiation becomes crucial, not just in capability, but also in areas like ease of integration, developer ecosystems, and specific use cases. OpenAI is competing against Microsoft’s and Google’s vast user numbers and well-oiled distribution machines.
Subsidized token costs: Google and Microsoft can use their large balance sheets to subsidize AI access costs and aggressively gain market share at OpenAI’s expense. This strategy, akin to Amazon’s approach in retail and cloud services, pressures smaller players’ margins. Though it is true that Microsoft has invested in OpenAI, it is also true that Microsoft competes with OpenAI. Incentives drive outcomes, of course, but it’s unclear which set of incentives obtains here: is Microsoft first an investor in OpenAI, or is it first a competitor to OpenAI? If it’s the latter, that spells trouble for OpenAI.
OpenAI’s potential strategic responses
Niche specialization: Focus on specific industries or applications where OpenAI’s models offer unparalleled value. This could involve deep integration with vertical-specific workflows, offering bespoke solutions that larger, more generalized platforms may not effectively address. The problem with this strategy is that it forces OpenAI to burrow even further into the enterprise software space. That seems at odds with its apparent research-based focus on developing AGI. Enterprise product development and software research are not necessarily complementary activities, and effectively managing both under one corporate umbrella could prove tricky.
Innovation and rapid iteration: Leverage its ability to out-innovate its competitors. While large corporations have significant resources, they often face bureaucratic hurdles. OpenAI could focus on cutting-edge research and deploy updates faster than its behemoth competitors. The problem here is that while ‘cutting-edge research’ sounds sexy, it’s not clear that cutting-edge research is actually saleable. In other words, most enterprises buy technology that solves specific problems. These products aren’t necessarily state of the art, and state of the art doesn’t necessarily move the revenue needle, no matter how sexy AI accelerationists may think AGI is.
Developer and community ecosystem: Build a robust ecosystem around its technology. This includes fostering a developer community that creates applications or services based on OpenAI’s platform, driving adoption and locking in users through network effects. The problem here is that OpenAI has already tried this, with its GPT store, and it doesn’t seem to be gaining much traction. And its Developer Relations head recently left, for unclear reasons.
Strategic partnerships and collaboration: Align with other companies that can benefit from AI advancements but do not wish to develop their own AI technologies. These partnerships could offer mutual benefits, including shared revenues or co-developed products. This kind of thing sounds nice in theory, but it is hard to execute in practice. Each party has its own interests and incentives, and it’s ultimately hard to judge whether a given partnership addresses the overarching issue of scaling OpenAI’s revenues.
Unique business models: Explore alternative revenue models, such as premium support services, bespoke model training, or even a tiered model that offers basic services for free while charging for advanced features or capacities. Of course, OpenAI already does some of this with its ChatGPT subscription: GPT-4 is paid while GPT-3.5 is free. But it’s hard to scale revenues sufficiently on the back of $20/month subscriptions.
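To make the scaling problem concrete, here is a back-of-the-envelope sketch. The revenue targets are illustrative assumptions, not reported OpenAI figures:

```python
# Back-of-the-envelope: how many $20/month subscribers does a given
# annual revenue target require? All targets below are hypothetical.

def subscribers_needed(annual_revenue_target: float, monthly_price: float = 20.0) -> float:
    """Number of paying subscribers needed to hit an annual revenue target."""
    return annual_revenue_target / (monthly_price * 12)

# A hypothetical $10B/year in subscription revenue at $20/month:
print(f"{subscribers_needed(10e9):,.0f} subscribers")  # → 41,666,667 subscribers
```

Tens of millions of paying subscribers just to reach the revenue scale of a mid-sized enterprise software company; hence the skepticism that consumer subscriptions alone can carry the valuation.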
Broader implications
Regulatory and ethical considerations: As AI becomes more integrated into business and daily life, regulatory scrutiny will increase. Companies that proactively address these concerns, perhaps through transparent practices and ethical AI use, may find favor with both regulators and the public. Yes, regulatory capture is a real issue, but (1) most people aren’t aware of regulatory capture, much less why it is an issue, and (2) customers don’t necessarily care whether the companies they do business with have captured regulators. Concerns about regulatory capture move the needle for a certain type of intellectual, but those people are outliers.
Open source and community models: There’s a potential shift towards more open, collaborative models of AI development and deployment. This could democratize AI technology, reducing the dominance of any single player and potentially altering the competitive landscape in unforeseen ways. Of course, OpenAI, in spite of the ‘open’ in its name, doesn’t seem much interested in open source development. And the point about regulatory capture above still obtains.
Technological breakthroughs: Unpredictable advances in AI could significantly alter market dynamics, offering new opportunities for companies to leapfrog competitors. Until the recent release of Claude 3 Opus, it was conventional wisdom that OpenAI’s GPT-4 model was state of the art. Now, at best, the two models appear to be at parity, with Google’s Gemini 1.5 not far behind.
We use ChatGPT Turbo, with Claude 3 as a redundancy and a third redundancy that will go unnamed for now. In our testing, Claude is 99% as good as ChatGPT Turbo.