Lawyers & AI; Real estate & AI; Contrarianism & AI; Faster AI development means safer AI; VC math
A collection of interesting AI-related links
What follows is a collection of links, plus a short explanation of VC math.
The VC math thing doesn’t really fit in with the links, but a few days ago I stumbled across a Twitter thread of someone who didn’t understand what venture capitalists invest in, and what they don’t invest in. So I thought I’d put something in this post, on the off chance that it’s useful to someone, somewhere, trying to understand the venture capital industry.
Anyway, let’s dive in.
Lawyers & AI
Predictions are hard, especially about the future, which is why we all love making them. And Peter Turchin, the pessimistic scientist, offers up the following forecast: aggrieved lawyers, immiserated by advanced AI, will foment discord and revolution in the United States within the next decade. He writes:
The A.I. revolution will affect many professions requiring a college degree, or higher. But the most dangerous threat to social stability in the U.S. are recent graduates with law degrees. In fact, a disproportionate share of revolutionary leaders worldwide were lawyers—Robespierre, Lenin, Castro—as well as Lincoln and Gandhi. In America, if you are not one of super-wealthy, the surest route to political office is the law degree. Yet America already overproduces lawyers. In 1970 there were 1.5 lawyers per 1000 population; by 2010 this number had increased to 4.[xi]
…
If the outlook for most people holding new law degrees looks dire today, the development of new A.I. will make it much worse. A recent Goldman Sachs report[xiv] estimates that 44% of legal work can be automated—lawyers will be the second worst-hit profession, after Office and Administrative Support. If market forces are allowed to have their way, we will create a perfect breeding ground for radical and revolutionary groups, feeding off the vast army of intelligent, ambitious, skilled young people with no employment prospects, who have nothing to lose but their crushing student loans. Many societies in the past got into this predicament. The usual outcome is a revolution or a civil war, or both.[xv]
When I first used ChatGPT in November 2022, my initial thought was “this technology will eventually be terrible for many lawyers.” But I remain skeptical that the result of lawyers being forced out of the law due to artificial intelligence will result in a revolution or civil war.
In general, I am skeptical of people’s forecasts about how AI will affect jobs, and the timelines over which these effects will occur. I’m pretty bullish on AI technology, but I think that a lot of AI insiders overestimate the speed with which it will pervade the enterprise (including law firms). Today’s AI technology is fairly powerful, and tomorrow’s will be even more powerful and capable. And yet, enterprises will remain slow to integrate the technology into their operations. Consider how long the cloud has been around, and how many new customers cloud computing companies sign up even today: there is a lot of growth remaining in enterprise cloud computing. I don’t see any reason why AI will not proceed similarly.
So, when I hear compressed timelines for the institutional adoption of AI technology, and the dire consequences that will result from that adoption, I am somewhat skeptical. I think, with respect to lawyers, a more likely scenario is that AI makes slow inroads into the practice of law, and that the value of a law degree will diminish over time as law firms and other legal organizations learn how to use the technology to their benefit. This doesn’t seem like a sudden change to me, but rather a gradual adaptation. And people tend to adapt to gradual changes fairly well. Of course, there will always be disgruntled law school graduates, and clueless law professors who extol the virtues of a career in the law. But those are not the things out of which revolutions are made.
Real Estate & AI
Tyler Cowen argues that among the many things which artificial intelligence will change is real estate. At first glance, this doesn’t make a lot of sense: artificial intelligence is software, and, well, real estate is anything but digital. But he makes the following observation:
One notable feature about artificial intelligence is that it can make many companies smaller. AI can do a lot of the work that previously would have required a large office staff. Not surprisingly, the companies that use AI best are AI companies themselves. Clearview, a facial-recognition company that has had a major impact on the war in Ukraine, has only 35 employees. OpenAI during the time when it developed GPT-4 had fewer than 300 workers. Anthropic, an OpenAI spinoff, has about 200 employees.
Smaller companies, of course, require less office space. Further, the trend towards remote and distributed workforces means that demand for office space will decline, irrespective of a trend toward smaller companies. So when you combine the effects of remote work and AI-induced smaller companies, you have a drag on the commercial real estate sector. And a decline in demand for commercial real estate has all sorts of second-order effects that economists like to think about. A lot of cities derive a lot of their tax base from commercial real estate-related taxes. If demand for commercial real estate is in secular decline, then many cities’ tax bases are in decline as well.
Given that AI technology and the trend towards remote work gives workers more choice about where to live, it is inevitable that AI will render some cities winners and others losers.
Cowen further observes:
The ability of AI to economize on (some) jobs doesn’t mean that we will see mass unemployment. While AI will replace many jobs, it will allow many more projects to be started. There will be more coding, more design, more scientific advances, and at the broadest level simply more plans of many kinds. Those plans might range from better green energy to the creative arts to more public health projects, and much more.
But most of those projects won’t be done in traditional offices, even if the core of central management is located there. Many of these workers won’t even have permanent ties to the company, just as Hollywood assembles creative teams to achieve specific ends but eventually move on to the next project.
Over time, the more change we see, the more that real estate decisions will be determined by where these in-demand workers want to live. They are likely to prefer attractive areas, not too far from central management, with sunny climates, good schools, reasonable taxes and lots of amenities. Infrastructure such as airports and internet quality will matter a great deal.
He expects the future labor market to be much more dynamic and fluid than previous ones. I suspect that is true. I also think it’s true that the nature of real estate is such that the people who own it will have a hard time adapting to these dynamic and fluid labor markets. Many real estate operators still conceive of a house or apartment as a place where you live, and an office as a place where you work, and that neither location fulfills the role that the other serves. In other words, the notion of working from home, or remotely, is anathema to staid and conservative real estate operators.
Contrarianism & AI
Here’s an interesting Substack post presenting some contrarian views about AI. The author refers to these views as controversial, but they seem contrarian to me.
In spite of my bullishness about AI, I mainly agree with these observations. I commented1 on the piece:
This all sounds very reasonable to me. I'm pretty bullish on AI technology but I think that a lot of AI insiders overestimate the speed with which it will pervade the enterprise. Today's AI technology is fairly powerful, and tomorrow's will be even more powerful and capable. And yet, enterprises will remain slow to integrate the technology into their operations. Consider how long the cloud has been around, and how many new customers cloud computing companies sign up even today: there is a lot of growth remaining in enterprise cloud computing. I don't see any reason why AI will not proceed similarly.
Further, I think that most people's experience with AI technology will not come through ChatGPT or Claude or whatever, but rather through enterprise-grade apps, akin to how most people's experience with cloud computing comes from them doing banking online, or reading Kindle books on their iPads. And these kinds of things just take a long time to develop, *even if* the foundational technologies are improving very rapidly.
Faster AI Development Means Safer AI
NVIDIA CEO Jensen Huang claims that we need to accelerate development of AI, in order to ensure its safety:
“We need to accelerate the development of AI as fast as possible, and the reason for that is because safety requires technology,” Huang said in an interview at The Forum with Sung Cho, co-head of Tech Investing for Fundamental Equity in GSAM. The Forum is a daily meeting at GSAM, and a core part of its investment culture, that convenes leading experts to discuss global trends that impact our investments.
Consider how much safer today’s passenger cars are compared with those of earlier generations, Huang suggested, because the technology has advanced. He cited as an example how OpenAI’s ChatGPT uses reinforcement learning from human feedback (RLHF) to create guardrails that make its responses more relevant, accurate, and appropriate. The RLHF is itself an AI model that sits around the core AI model.
Huang lists examples of other AI technologies that hold promise for making the models safer and more effective. These range from retrieval augmented generation, in which the model gets information from a defined knowledge base or set of documents, to physics-informed reinforcement learning, which grounds the model in physical principles and constraints.
In some sense, this is a self-serving, and not entirely convincing, argument. The more resources are devoted to the development of AI technology, the more demand there will be for NVIDIA’s GPUs.
A better argument for radically advancing the development of AI technology is that advanced technology in general may forestall and even prevent the bad consequences of fertility collapse.
VC math
“VC math” refers to the unique way venture capitalists assess and manage their investments, particularly in the context of startup financing. This approach is driven by the high-risk, high-reward nature of investing in early-stage companies. Here are the key components of VC math:
High Failure Rate, Big Wins Needed: VCs understand that a significant portion of startups in their portfolios will fail. Therefore, they look for investments that have the potential to return many times the original investment to compensate for these losses. This is often summarized as the “home run” approach.
Portfolio Approach: VCs invest in a range of companies, expecting that most will fail or yield minimal returns, a few might provide moderate returns, and one or two might be major successes (e.g., “unicorns” valued at over $1 billion). The successes need to be significant enough to cover the losses from other investments and provide a substantial overall return.
Power Law Distribution: In VC portfolios, returns typically follow a power law distribution, meaning a small number of investments generate the vast majority of returns. This is different from a normal distribution, where returns are more evenly spread out.
Valuation and Dilution: VCs pay close attention to company valuations and the dilution of shares. Early-stage investments might command higher equity stakes due to higher risks, and subsequent funding rounds might dilute these stakes. Hence, VCs must balance investing enough to maintain influence against the dilution that later rounds will bring.
Example2
Let’s consider a simplified example to illustrate VC math:
A VC firm has a $100 million fund.
They invest in 20 startups, each receiving $5 million.
The expectation is that most of these startups will fail, a few will return the investment or a little more, but at least one needs to be a major hit.
Scenario
15 startups fail completely, resulting in a loss of $75 million.
4 startups are moderately successful, returning double the investment, totaling $40 million.
1 startup is a major success, returning 30 times the investment, totaling $150 million.
In this scenario, the VC’s total return is $190 million ($40 million + $150 million) from the $100 million invested. Despite the high failure rate, the one major success offsets the losses and provides a substantial profit.
This example highlights the essence of VC math: a portfolio approach where the focus is on finding and nurturing a few high-potential startups that can deliver outsized returns, thereby compensating for the high risk and failures of other investments.
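The scenario above is simple enough to check with a few lines of arithmetic. Here’s a short sketch (all numbers are the hypothetical ones from the example, not real fund data):

```python
# Sketch of the simplified VC math scenario above (hypothetical numbers).
FUND_SIZE = 100_000_000   # $100M fund
CHECK = 5_000_000         # $5M per startup
NUM_STARTUPS = 20

# Multiple on invested capital for each outcome bucket:
outcomes = (
    [0.0] * 15 +   # 15 total losses
    [2.0] * 4 +    # 4 moderate successes (2x the investment)
    [30.0]         # 1 major hit (30x the investment)
)
assert len(outcomes) == NUM_STARTUPS

total_returned = sum(multiple * CHECK for multiple in outcomes)
net_profit = total_returned - FUND_SIZE

print(f"Total returned: ${total_returned:,.0f}")  # $190,000,000
print(f"Net profit:     ${net_profit:,.0f}")      # $90,000,000
```

The single 30x outcome contributes $150 million of the $190 million returned, which is the power-law concentration described above: one investment generates roughly 80% of the fund’s proceeds.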
In practice, this means that if you’re an entrepreneur looking for venture capital funding, you have to be able to convince a venture capitalist that your company has the potential to generate liquidity, either through an acquisition or IPO, of at least $1 billion. Most companies can’t achieve that kind of valuation because their business model or market doesn’t support it. VCs want to find those companies which they think can achieve sufficient scale to sustain a $1 billion-plus valuation.
You may note that I used a portion of this comment earlier in this Substack post. I plagiarized myself.
This is a highly simplified example of VC math. Anyone with any experience in the field will immediately understand why it’s not plausible (no accounting for liquidation preferences, management fees, follow-on investments, etc.). The example is provided to illustrate the scale required for venture capital investments.