What's Gary Gensler's beef with AI?
LLMs like ChatGPT are non-deterministic; how does herding behavior arise from non-determinism?
Note: I’ve updated this post from its original version. For those updates, see the second section of this post.
Apparently, the crypto world’s bête noire, Gary Gensler, is concerned about AI. He is quoted in an article as saying:
“A growing issue is that [AI] could lead to a risk in the whole system,” Gensler said. “As many financial actors rely on one or just two or three models in the middle … you create a monoculture, you create herding.”
Without more context, this quote doesn’t make much sense to me. The article continues:
This herding effect can be dangerous if there is a flaw in the model that might reverberate through markets during a time of stress, causing abrupt and unpredictable price changes in markets. Gensler pointed to the examples of cloud computing and search engines as markets for tech products that have quickly become dominated by one or two major players, and he said he worries about similar concentration in the market for AI technology.
This still doesn’t make much sense. Either Gensler doesn’t understand AI technology, or the quote leaves out relevant context, or the journalist doesn’t understand the topic.
There are some regulatory issues that I can see Gensler or other regulators being worried about, and these are worth considering. Off the top of my head, the following are some issues which may be keeping people in DC up at night1:
Market Manipulation Risks: AI and machine learning algorithms can be used to create sophisticated trading strategies. These same technologies could be turned to manipulative practices like front-running or creating false market signals, which distort market prices and undermine investor confidence.
Lack of Transparency: AI systems, especially large language models (LLMs) such as the one underlying ChatGPT, can be “black boxes”, making it difficult to understand how they make decisions. This lack of transparency can pose challenges in ensuring compliance with market regulations and in identifying potentially unethical or illegal trading practices.
Data Privacy and Security: AI systems require vast amounts of data, raising concerns about the privacy and security of sensitive financial information.
Ethical and Bias Considerations: There’s an ongoing concern about biases in AI algorithms, which can result from biased training data or flawed algorithm design. These biases can lead to unfair trading practices or discriminatory outcomes that may violate regulatory standards.
Regulatory Readiness: Extant regulatory frameworks may not be fully equipped to address the unique challenges posed by AI in financial markets. This includes the need for updated regulations and perhaps new approaches to monitoring and enforcement.
Systemic Risks: The widespread use of AI in financial markets could lead to systemic risks, for example if many institutions’ models fail or misjudge conditions at the same time.
The most charitable interpretation of Gensler’s quote would be that by “herding” he’s referring to a systemic risk in which many different financial-market participants use the same AI-based trading strategy, and so cause unexpected behaviors in the markets. While I suppose that’s possible in theory, the non-deterministic nature of LLMs’ outputs suggests it’s not too likely. To put this simply: you and your friend can run the same prompt on ChatGPT, and you’ll get very different outputs.
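To make that non-determinism concrete: at each step, an LLM samples its next token from a probability distribution, typically scaled by a “temperature” parameter, so two runs on the same prompt can diverge. Here is a minimal sketch of that mechanism in plain Python; the logits and tiny vocabulary are hypothetical stand-ins for a real model’s output, not any actual API.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from a softmax over logits.

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches greedy, deterministic decoding.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical next-token scores over a tiny vocabulary.
vocab = ["buy", "sell", "hold", "wait"]
logits = [2.0, 1.5, 1.4, 0.3]

# Two "runs" on the same prompt can easily pick different tokens.
for run in range(1, 3):
    print(f"run {run}: {vocab[sample_next_token(logits)]}")
```

Run this a few times and the outputs differ across runs, which is the everyday experience of using ChatGPT.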
So, to the extent that herding behavior in the financial markets creates systemic risks, I don’t understand why a group of traders using the same AI models would create such herding behavior.
---

I ran the original post (everything above the section break) through ChatGPT and asked for its interpretation. Its response was surprisingly insightful, and it teases out some issues which are not apparent from the original article on which I am commenting. What follows in italics in this section is its output2.
Gary Gensler’s concerns about the risks of AI in financial markets, as you’ve described, focus on the potential for systemic risks arising from a homogenized reliance on similar AI models. To unpack this, let’s consider the nuances of AI technology, especially in the context of financial markets, and the potential implications of widespread, uniform adoption.
Understanding the Concerns:
Homogenization (Monoculture) Risk: Gensler’s reference to a “monoculture” in AI models suggests a scenario where most market participants use similar or identical AI-driven strategies. This could lead to a lack of diversity in trading behaviors. While it’s true that large language models (LLMs) like ChatGPT produce non-deterministic outputs, the concern isn’t about identical responses to the same prompt but rather about convergent behavior patterns emerging from similar decision-making frameworks.
Herding Behavior: In financial markets, herding refers to traders collectively moving in the same direction, often irrationally or without independent analysis. If many traders use similar AI models, their collective actions might exhibit a form of digital herding, leading to amplified market movements. This is a concern even if the AI’s outputs are non-deterministic, as the overall strategic direction influenced by AI could still be convergent.
Systemic Risk Amplification: In a highly interconnected market, the simultaneous failure or misjudgment of AI models could cascade, leading to rapid and unpredictable market fluctuations. This risk is akin to the systemic risks posed by over-reliance on certain financial practices, as seen in past financial crises.
Counterpoints and Considerations:
Non-Deterministic Nature of LLMs: As you pointed out, LLMs like ChatGPT generate varied outputs even with identical inputs. This variability could, in theory, mitigate the risk of homogenous behavior. However, the overarching strategies and patterns these models learn could still lead to a form of convergence in market behavior.
Market Manipulation and Transparency: AI can potentially be used for sophisticated market manipulation, and the ‘black box’ nature of these models complicates regulatory oversight. This ties into Gensler’s broader concerns about market stability and integrity in an AI-driven landscape.
Regulatory Challenges: The current regulatory frameworks may not be fully equipped to address the novel challenges posed by AI in financial markets. This includes difficulties in monitoring AI-driven trading strategies and ensuring compliance with existing financial regulations.
Diversity of Models: While there’s a risk of convergence, the financial sector could mitigate this through the use of diverse AI models and strategies. This diversity would be essential to prevent the kind of monoculture Gensler warns against.
Conclusion:
Gary Gensler’s concerns, based on your interpretation and the context provided, seem rooted in the potential systemic risks and market stability issues that could arise from widespread, homogenized use of AI in financial trading. While the non-deterministic nature of LLMs might suggest a lower risk of identical outputs, the broader concern is about strategic convergence and the resulting market dynamics. It’s a complex interplay of technology, market behavior, and regulatory readiness, and while Gensler’s quote may not capture all the nuances, it highlights significant issues that require careful consideration in the evolving landscape of AI and finance.
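Stepping back out of ChatGPT’s voice: its strongest point is that herding can emerge at the strategic level even when individual outputs are noisy. Here is a toy simulation of that idea (my own sketch, with entirely hypothetical parameters): each trader adds independent noise to the same shared model signal, so individual decisions vary, yet the aggregate still tilts heavily to one side.

```python
import random

random.seed(42)  # reproducibility for this toy example

def trader_decision(shared_signal, noise_sd=0.5):
    """One trader's call: the shared model signal plus idiosyncratic noise.

    The noise stands in for LLM non-determinism; the shared signal stands
    in for the common strategic tilt a widely used model would produce.
    """
    private_view = shared_signal + random.gauss(0, noise_sd)
    return "buy" if private_view > 0 else "sell"

shared_signal = 0.4  # hypothetical mildly bullish model output
decisions = [trader_decision(shared_signal) for _ in range(1000)]
buys = decisions.count("buy")
print(f"buys: {buys}, sells: {1000 - buys}")
# Per-trader outcomes are random, but roughly four in five traders end up
# on the same side of the trade -- herding despite non-determinism.
```

Whether this dynamic is strong enough in real markets to justify the worry is an empirical question, but it at least makes the mechanism Gensler seems to be gesturing at intelligible.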
None of this, by the way, should be construed as an endorsement by me of regulators’ thinking. The point of this exercise is to understand how they think and what their concerns are, not to agree with them. My very general view is that the type of person who becomes a regulator is the type of person who tends to miss the forest for the trees.