AI link dump: politicians & AI; doctors & AI; corporations & AI; lawyers & AI
A variety of AI-related links to whet your appetite
Politicians and AI
An interesting puzzle is this: how do you lobby politicians for solutions to long-term risks related to AI, when politicians only think in terms of the next election cycle? Think about this from the politician’s perspective: it’s very easy for a politician to see the wisdom[1] in, say, supporting an environmentalist group’s efforts to stop development on some ecologically fragile land. The payoff, from the politician’s perspective, is tractable within the timeframe of electoral politics. You can anticipate the inevitable campaign ad: No one has saved more land from real estate development than me!
It’s much harder to get politicians to focus on issues whose payoff may be years or decades hence. And this is the situation that Silicon Valley’s AI lobbyists find themselves in. Here’s Politico reporting[2]:
As scores of tech-funded EAs spread across key policy nodes in Washington, they’re triggering a culture clash—landing in the city’s incremental, detail-oriented culture with a fervor more akin to religious converts than policy professionals.
Regulators in Washington usually dwell in a world of practical disputes, like how AI could promote racial profiling, spread disinformation, undermine copyright or displace workers. But EAs, energized by a uniquely Northern Californian mix of awe and fear at the pace of technology, dwell in an existential realm.
Silicon Valley and Washington, DC are in the middle of a great conflict, each finding the other to be from a different planet. Tyler Cowen has written about this conflict before, and there is a great series about why AI may upend many of our governing institutions.

Doctors & AI
There’s a simple intuition about medicine and AI, and it’s this: modern healthcare generates an enormous amount of data, often unstructured, and AI is great at rapidly analyzing unstructured data. Can AI make medicine and healthcare more efficient? The US government claims that health care spending in the US grew 4.1% in 2022 to $4.5 trillion. It seems like it would be a great thing if AI could generate better outcomes on all that spending.
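To make that intuition concrete, here is a minimal sketch of what “rapidly analyzing unstructured data” can look like in practice: asking an off-the-shelf model to pull structured fields out of a free-text clinical note. It assumes the OpenAI Python client; the note text, model name, and field schema are invented for illustration, not taken from any of the articles below.

```python
# A minimal sketch: extracting structured fields from an unstructured
# clinical note. Assumes the OpenAI Python client (openai>=1.0); the
# note, model name, and schema are invented examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

note = ("Pt is a 58yo male presenting with chest pain radiating to the "
        "left arm, onset two hours ago. Hx of HTN and T2DM. BP 158/94.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do here
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": ("Return JSON with keys age, sex, chief_complaint, "
                     "history, blood_pressure. Use null for missing fields.")},
        {"role": "user", "content": note},
    ],
)
print(response.choices[0].message.content)
```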
Here are some interesting articles that have crossed my desk recently:
AI has the potential to make the practice and delivery of healthcare more efficient and accessible. Regulations, institutional inertia, and protectionist policies could all inhibit its adoption. Doctors’ and nurses’ natural instinct is to protect their professional prerogatives. If they think that AI threatens to encroach on their jobs, they will use every tool at their disposal to thwart the threat.
It is worth noting that the American Medical Association purposefully refers to artificial intelligence as “augmented intelligence”[3]:
The AMA House of Delegates uses the term augmented intelligence (AI) as a conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it.
(Emphasis is mine.)
These words are intentional: they are carefully chosen phrasings that position the AMA as believing AI is no more than a complement to its doctors’ skill and knowledge. That is not necessarily an unreasonable position, but it is a deliberate one. It suggests that the AMA will use its considerable power to ensure that powerful AI technology does not encroach on doctors’ prerogatives. Whether that is better for patients in the long run remains an open question.
Corporations & AI
An idea that’s been floating around is this: large companies have billions, or possibly trillions, of internal documents throughout the entire enterprise. But most of these documents are unstructured data, and, worse, no one knows all of the internal data that a large corporation stores on its computers. What if a large company—say Walmart or Pfizer or JP Morgan or Cargill—could run its own large language model (LLM)[4] over all of its unstructured, internal data? Wouldn’t it be interesting if a LargeCo marketing analyst could query a LargeCo-specific version of ChatGPT? Call it LargeCoGPT:
Hey LargeCoGPT, I’m looking for information on the number of marketing emails we sent in Q3 2022. The context here is that my boss and his colleagues don’t think that our marketing spend is effective, since so many defective products are being replaced at our expense due to our warranty policies. We need to fix our marketing to make sure that customers are receiving well-targeted emails. Can you give me some information on the emails we have sent, their click rates, etc.? Also provide some suggestions about how to optimize our email marketing, and what KPIs would demonstrate effective marketing spend.
This sounds like it would be a great tool for a marketing analyst. And sure enough, companies like Databricks are trying to sell companies on this very use case. Here’s how Databricks frames this:
LLMs can drive business impact across use cases and industries — translate text into other languages, improve customer experience with chatbots and AI assistants, organize and classify customer feedback to the right departments, summarize large documents, such as earnings calls and legal documents, create new marketing content, and generate software code from natural language.
All of these claims are correct. And of course Databricks is the perfect vendor for your corporate LLM project! But as always there’s nuance to be had. Do you need to build your own custom LLM? Or can you fine-tune an extant LLM? Anyway, here’s a great article at Hacker Noon about this debate.
One possible rubric for thinking about this question: the larger and more technically oriented the company, the more likely it is that a custom LLM is a reasonable project. For everyone else, fine-tuning an extant LLM may be the best bet; a sketch of that route follows. More generally, these are organizational, not technological, decisions. They are another reason to think that, though virtually all companies will eventually become AI-first companies, it will take longer for them to restructure their operations around AI tech than is commonly assumed. For more on this topic, see my post from earlier this week.
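To make the fine-tuning route concrete, here is a minimal sketch using the Hugging Face transformers library. The model name, file path, and hyperparameters are placeholder assumptions, not recommendations; a real LargeCo project would also need to handle access controls, data cleaning, and evaluation.

```python
# A minimal sketch of fine-tuning an extant LLM on internal documents,
# using Hugging Face transformers. The model, file path, and settings
# below are illustrative stand-ins, not recommendations.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"  # stand-in for whatever open model you license
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus: one internal document per line of text.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="largeco-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False means standard causal language modeling, not masked LM.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```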
Lawyers & AI
When I first used ChatGPT, back in November 2022, my initial thought was “This is going to really affect the practice of law.” Not this, as in the ChatGPT of November 2022; that version was too primitive to affect legal practice. But it was clear to me that the technology was already powerful and quickly improving, and it was the slope of that improvement which hinted that ChatGPT and related technologies would eventually restructure the practice of law.
To be clear, I don’t think we’re presently at the point where AI has restructured the practice of law. But it is clear that the directional arrows of progress are such that, if you’re a lawyer, you ought to be paying attention to what AI can do. Elite law firms, normally thought of as staid and conservative, have started adopting AI tools from Harvey. Here’s an article about how the Magic Circle[5] firm of Allen & Overy is using Harvey’s tech:
In a world where law firms are often criticised for being slow to adopt new technologies, we should applaud Allen & Overy for its wow-factor launch yesterday (16 February) of ‘co-pilot’ Harvey, which helps lawyers to conduct research and due diligence using natural language instructions, leveraging the most up to date OpenAI large language model. I’d be lying if I said I don’t have some serious reservations.
Founded in 2022 by former O’Melveny & Myers antitrust litigator Winston Weinberg and former DeepMind, Google Brain, and Meta AI research scientist Gabriel Pereyra, Harvey is a verticalised version of what I understand to be GPT-4, which has been trained on the entire corpus of the internet. By verticalised, I mean that Harvey has further trained the model with legal sector-specific data. Harvey, which in November last year received $5m in investment from OpenAI, has been working with a number of law firms – including A&O – in beta.
The model, which has now been rolled out by A&O across its 43 offices, can automate various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance. It can generate insights, recommendations and predictions without requiring any immediate training, which A&O says will enable its lawyers to deliver faster, smarter and more cost-effective solutions to their clients.
Elsewhere, we have seen lawyers use ChatGPT and end up citing hallucinated cases that simply do not exist.
Lawyers clearly need to learn how to use this technology. But the intuition is fairly straightforward: lawyers deal in the manipulation of language, and today’s generative AI technology is great at producing copious amounts of language.
On the other hand, some pessimistic people, such as Peter Turchin, believe that the adoption of AI tech by the legal industry will only further entrench the bimodal distribution of lawyers’ compensation, and therefore further immiserate those lawyers on the left-hand side of that distribution. Immiserated lawyers, he argues, are those who foment revolutions. The names Castro, Robespierre, and Lincoln are trotted out in support of this claim. I don’t quite buy his argument. Quoting from his piece:
The rise of intelligent machines will undermine social stability in a far greater way than previous technological shifts, because now A.I. threatens elite workers—those with advanced degrees. But highly educated people tend to acquire skills and social connections that enable them to organize effectively and challenge the existing power structures. Overproduction of youth with advanced degrees has been the main force driving revolutions from the Springtime of Nations in 1848 to the Arab Spring of 2011.[x]
The A.I. revolution will affect many professions requiring a college degree, or higher. But the most dangerous threat to social stability in the U.S. are recent graduates with law degrees. In fact, a disproportionate share of revolutionary leaders worldwide were lawyers—Robespierre, Lenin, Castro—as well as Lincoln and Gandhi. In America, if you are not one of the super-wealthy, the surest route to political office is the law degree. Yet America already overproduces lawyers. In 1970 there were 1.5 lawyers per 1000 population; by 2010 this number had increased to 4.[xi]
Another view on the future of lawyers, given AI, is available here.
[1] “Wisdom,” here, should be understood as the politician understands wisdom, not how a normal person understands wisdom. It is easy to argue that prohibiting development in the name of ecological preservation is not wise.
[2] “EAs” in this quote refers to effective altruists.
[3] I can’t recall where I first found out about the AMA’s positioning on this issue. If I could I’d credit that source here.
[4] Large language models (LLMs) are the underlying AI tech that powers chatbots like ChatGPT.
[5] This refers to a collection of elite British law firms in London.