Introduction
If you believe people like Leopold Aschenbrenner and Dario Amodei, advanced AI[1] will be here by the end of the 2020s. Their predictions strike me as wildly optimistic, but they are also closer to the state of the art than I am. Thus I am inclined to defer to their views, even if it may reasonably be said of both that they’re talking their book[2].
Given that advanced AI might arrive within the next several years, I’ve been noodling on the following thesis: Institutionalists will not fare well in a world in which advanced AI exists. I don’t know whether this thesis is correct, and this post is an attempt to speak out of both sides of my mouth, or, more charitably, give reasons why the thesis might be true, and why it might not be true.
Let’s first define the word “institutionalist”
The thesis turns, crucially, on the definition of “institutionalist”. For the purposes of the thesis and this post, I think of an institutionalist in the following way. An “institutionalist” is someone who places a high value on established institutions—such as government agencies, legal frameworks, corporations, educational systems, or financial organizations—and believes in maintaining and improving society through these formal structures. Institutionalists generally favor stability, order, and continuity, seeing institutions as essential mechanisms for managing complex social, economic, or political systems. They tend to prioritize rule-following, adherence to established processes, and incremental change over rapid or disruptive innovation.
Institutionalists typically trust that institutions have been designed to promote the public good, uphold traditions, and protect against risks, even if they sometimes work imperfectly. They may also advocate for refining or reforming institutions to better align with societal needs but are less likely to support complete upheaval or unconventional approaches that could undermine institutional authority.
Arguments in favor of the thesis that institutionalists will not fare well in a world with advanced AI
Slow Adaptation to Rapid Technological Change. Institutionalists prioritize stability, order, and adherence to established procedures. Advanced AI evolves rapidly and requires swift adaptation and innovation. Institutionalists will struggle to keep pace with these changes due to their preference for incremental adjustments over disruptive shifts. This will lead to obsolescence in a world where agility is crucial.
Resistance to Disruptive Innovation. Established institutions have rigid bureaucratic structures and entrenched interests that resist significant changes to the status quo. Advanced AI will require these institutions to overhaul existing processes, roles, and even their core functions. Institutionalists will oppose such drastic transformation, hindering their ability to use advanced AI effectively. Such refusal will put them at a disadvantage relative to more adaptable organizations.
Burden of Legacy Systems. Institutions typically operate with legacy systems and infrastructure that are incompatible with cutting-edge AI technology. Upgrading these systems is costly and time-consuming. Institutionalists will be reluctant or unable to make the necessary investments. This will cause their organizations to fall behind those that can implement advanced AI solutions more readily.
Competitive Pressure from Agile Actors. Non-institutional actors, such as startups and tech companies, are more willing to embrace risk and experiment with new technologies. They can adopt advanced AI more rapidly and innovatively than traditional institutions. Institutionalists will find themselves outcompeted by agile players.
Shift in Authority and Decision-Making. Advanced AI will allow us to decentralize decision-making by providing data-driven insights directly to individuals and smaller organizations. This will diminish the traditional authority and control that institutions hold. Institutionalists, who rely on hierarchical structures and centralized control, will find their influence wane. Advanced AI will empower alternative models of organization and governance.
Talent Acquisition Challenges. Advanced AI requires specialized skills and expertise. Institutions will struggle to attract and retain top AI talent due to bureaucratic constraints, less competitive compensation packages, and a perceived lack of innovation. This talent gap will hinder their ability to develop and implement advanced AI solutions effectively.
Arguments against the thesis that institutionalists will not fare well in a world with advanced AI
Access to Significant Resources. Institutions control substantial financial resources, extensive datasets, and established networks, all of which are crucial for developing and deploying advanced AI technology. Institutionalists can use these resources to invest in AI research and infrastructure, outperforming smaller organizations that lack them.
Influence over Regulation and Standards. Institutionalists typically have considerable sway over policy-making and regulatory bodies. They can shape the legal and ethical frameworks governing AI, ensuring that regulations favor their interests or create barriers to entry for competitors. This can help institutions maintain their dominance in the face of technological change.
Public Trust and Credibility. Established institutions enjoy a higher level of public trust compared to newer, less-known organizations. In areas like healthcare, finance, and governance, people prefer the reliability and accountability of institutions when dealing with advanced AI applications. Institutionalists can capitalize on this trust to maintain their relevance and authority.
Capacity for Adaptation and Innovation. While institutions may be slow to change, they are not incapable of adaptation. History shows that many institutions have successfully navigated technological revolutions by eventually integrating new technologies into their operations. Institutionalists can drive internal reforms to embrace AI, improving efficiency and service delivery while preserving their core values and structures.
Risk Management and Ethical Oversight. Institutions have extensive experience managing risks and ensuring compliance with laws and regulations. As advanced AI raises complex ethical and legal issues, institutionalists can leverage their expertise in governance and oversight to implement advanced AI responsibly. This will position them as leaders in the deployment of advanced AI.
Integration of AI into Existing Frameworks. Institutionalists can integrate advanced AI technology into their existing frameworks to enhance, rather than replace, their functions. By using AI to improve decision-making, optimize operations, and personalize services, institutions will be more effective without sacrificing stability. This allows institutions to benefit from advanced AI while maintaining their foundational principles.
Conclusion
The impact of advanced AI on institutions is uncertain. On the one hand, the rapid pace of AI development and the need for agility and innovation present significant challenges for institutionalists who favor stability and established procedures. They risk falling behind more adaptable competitors and losing influence as authority becomes more decentralized.
On the other hand, institutions control considerable resources, influence, and expertise that can be harnessed to adopt and shape AI technology. Their ability to manage risks, ensure compliance, and maintain public trust might be significant advantages.
The fate of institutionalists, given advanced AI, depends on their willingness and ability to adapt. Those who can reconcile their foundational values with the demands of technological innovation will thrive, while those who resist change will be marginalized.
Footnotes
[1] I am using the vague term “advanced AI” here in place of the equally vague term “AGI”. Lots of people talk about AGI, but I have never seen a robust definition of what, exactly, AGI is. We’ll know it when we see it, I guess. My framing for “advanced AI” is “AI that is much more capable than the AI we have in November 2024.”
[2] Two things can be true: one is talking one’s book, and one’s forecast in favor of one’s book is correct. Just because someone stands to financially benefit from a forecast does not, by itself, invalidate the forecast.