OpenAI converts to a B Corp; Gov Newsom vetoes AI legislation
OpenAI realizes that incentives drive outcomes and Gov Newsom seeks more regulatory authority over AI models
OpenAI converts to a B Corp
OpenAI plans to convert itself from a non-profit with a capped-profit unit bolted onto it to a more straightforward for-profit model. Specifically, it will be a B Corp. Its competitor Anthropic is also a B Corp. This seems driven both by investor preferences and OpenAI’s long-term goals.
There’s a lot of Sturm und Drang out there about this move, which I hope to avoid with this post. What you will find here is a simple examination of the reasons why OpenAI would want to convert itself from an overly complicated hybrid non-profit with a capped for-profit subsidiary to a simpler for-profit B Corp. If you’re looking for commentary about Sam Altman’s empire-building motivations, or important questions about AI risk, etc.—this is not the post for you.
What investors want
OpenAI’s current structure is a confusing one. A nonprofit oversees a for-profit subsidiary, and the for-profit subsidiary is a capped-profit one, which means that investors can earn no more than 100x their investment (a $10 million investment, for example, could return at most $1 billion). This complexity makes OpenAI less attractive as an investment to large investors who want, and who see the potential for, greater returns. Transitioning to a for-profit model removes these limitations. Further, investors prefer a simpler structure like a B Corp because it aligns with industry norms and offers clearer governance and financial incentives.
Why OpenAI made this decision
On OpenAI’s side, there are a couple of reasons why it has acceded to investor demands.
Funding for AGI. OpenAI is in a race to develop AGI with hyperscalers like Google and Meta. Google and Meta have hundred-billion-dollar balance sheets with which to finance their pursuit of AGI. OpenAI does not have these resources at its disposal. A for-profit model allows OpenAI to attract more capital by offering the potential for significant investor returns. Incentives drive outcomes, and investors’ incentive is the potential to make a lot of money. The large-scale computational infrastructure and research required to advance to AGI entail raising significant amounts of investor capital.
Simplified Governance. The current nonprofit structure, which includes various subsidiaries, including the capped-profit structure, is, at best, confusing. Incentives are unclear, conflicts are rife, and investors don’t know where they stand. Converting the organization to a for-profit B Corp reduces governance confusion and aligns it more closely with traditional business practices. This is crucial for efficiently scaling the business and pursuing commercial AI applications like ChatGPT and related products.
Critics worry that transitioning to a profit-driven model will steer OpenAI away from its original mission of ensuring that AGI benefits all of humanity. I suppose that these concerns are legitimate. However, achieving AGI requires vast amounts of investor capital, and OpenAI simply can’t access that capital by maintaining its current, overly complicated structure. Either OpenAI retains its complicated structure, and forgoes the opportunity to achieve AGI before its well-financed competitors do, or it accedes to investor expectations and simplifies its structure.
The balancing act
Given all of this, how can OpenAI balance its profit motive with its goal of ensuring that AGI benefits all of humanity? OpenAI seems to be pursuing a few different strategies:
Adopt the B Corp Model. By structuring as a B Corp, OpenAI can legally commit to balancing profit with purpose. B Corps are required to consider the impact of their decisions on all stakeholders—not just shareholders—including employees, customers, and society at large. This model could help OpenAI formalize its commitment to ethical AI development while still providing attractive returns to investors. The public benefit structure also allows them to pursue long-term objectives that might not align strictly with short-term profit maximization.
Maintain Nonprofit Oversight. OpenAI is expected to retain its nonprofit arm, and that nonprofit is expected to retain some control over the company’s governance. The idea here is that the nonprofit unit ensures that the for-profit entity’s actions remain aligned with the overarching goal of AGI benefiting all of humanity. While the nonprofit’s power will diminish, it can still serve as a check against any significant deviation from OpenAI’s original vision of creating AGI for the benefit of all.
Are these strategies sufficient? Or are they mere window dressing? The cynic would say that they’re mere window dressing, and that Sam Altman and his minions can manipulate the company behind the scenes. They can convey the illusion of conforming to the ideals expressed above, while in reality pursuing their own profit-driven ends. And, sure, that’s a risk. On the other hand—what alternative does OpenAI have? It needs vast amounts of investor capital, and investors seek returns. Investors generally are not interested in putting a ceiling on their potential returns, and when it comes to AGI, investors, rightly or wrongly, see multi-trillion-dollar markets arising. That tends to focus the mind on the bottom line, and it tends to bring the profit motive to the fore.
Governor Newsom vetoes a bill
California’s Governor Newsom vetoed Senate Bill 1047, aka the “AI bill”. He wrote, in his letter to the California State Senate:
By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047—at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.
What Newsom is referring to here are the compute thresholds of the bill. Specifically, the bill established reporting requirements for AI models trained using more than 10^26 integer or floating-point operations (FLOPs)1. A plain reading of Newsom’s objection to the bill suggests that he thinks all models should be subject to regulatory reporting, not just very large ones. (How one squares that with “the potential expense of curtailing the very innovation…” is unknown.)
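For a rough sense of where that 10^26 threshold sits, here’s a minimal back-of-the-envelope sketch. It assumes the common “training compute ≈ 6 × parameters × training tokens” approximation, and the parameter and token counts below are illustrative guesses, not figures from the bill or from any particular model.

```python
# Back-of-the-envelope check of SB 1047's 10^26 FLOP training-compute threshold.
# Assumes the common approximation: total training compute ~= 6 * N * D,
# where N = parameter count and D = training tokens. The example model sizes
# and token counts are hypothetical, for illustration only.

THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

examples = {
    "70B params, 15T tokens": training_flop(70e9, 15e12),    # ~6.3e24 FLOP
    "400B params, 15T tokens": training_flop(400e9, 15e12),  # ~3.6e25 FLOP
    "1T params, 20T tokens": training_flop(1e12, 20e12),     # ~1.2e26 FLOP
}

for label, flop in examples.items():
    status = "over" if flop > THRESHOLD_FLOP else "under"
    print(f"{label}: {flop:.1e} FLOP ({status} the 10^26 threshold)")
```

On those illustrative numbers, only the very largest frontier-scale training runs would clear the bill’s threshold, which is precisely the narrowness Newsom objects to.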
Silicon Valley seems to be taking a victory lap here, conveniently ignoring the plain meaning of Newsom’s letter. He clearly wants more, not less, regulatory authority over language models. And yet, we have tweets like Marc Andreessen’s2:
In a very real sense, AI and its regulation are the national security issue of our time, and the federal government ought to be taking the lead in imposing regulations on AI. (Don’t infer from this that I’d be more supportive of a federal bill which regulates models similarly to what California proposes.) But to the extent that the federal government dictates national security priorities, and given that AI implicates vast swathes of the national security interest, I don’t see why the federal government shouldn’t or won’t get involved in regulating AI.
Yeah, fine, but HAPPY BIRTHDAY!