AI and the Institutional Reckoning: Adapt or Perish
AI will disintermediate many sclerotic institutions
Introduction
The more powerful AI becomes, the poorer most institutionalists will fare1. By “institutionalists,” I mean people who work in institutions, deeply embedded in existing structures: large corporations, government bureaucracies, universities, and other hierarchical systems. They cling to longstanding routines, regulatory frameworks, and conventional thinking. Disruptive AI rewards agility, experimentation, and data-driven decisions. It punishes slow adaptation, bureaucratic overhead, and risk aversion.
Institutionalists and Their Vulnerabilities
Institutionalists typically operate within large, longstanding organizations. Think banks, governments, universities, big tech companies, or multinational corporations. Historically, these organizations have some key advantages:
Economies of Scale: Large institutions dominate many markets simply due to their size and the resources they aggregate.
Bureaucratic Stability: Hierarchical structures channel decisions through committees and standard protocols, which benefits those institutions that operate in predictable fields.
Regulatory Capture: Large organizations have the resources to lobby legislators and influence regulators to serve their interests.
Information Asymmetry: Institutions amass a large amount of information about the industry in which they operate. Smaller competitors, legislators, and regulators often struggle to keep abreast of the amount of information required to compete, legislate, or regulate.
While these features create stability and resilience in conventional environments, they also create vulnerabilities in dynamic contexts. AI renders economies of scale less relevant if core functions are automated. Bureaucratic structures are slow to adopt disruptive solutions. Furthermore, AI democratizes knowledge, which challenges institutions that rely on expertise or exclusive data sets.
Automation of Complex Tasks
Historically, large institutions have had an advantage because they can assemble large teams of humans to tackle bureaucratic or administrative tasks. From processing mortgage applications to reviewing legal documents, these tasks were once labor-intensive. Powerful AI disrupts this model by performing these tasks more quickly and accurately than large teams of people. This reduces operating costs and shortens project timelines. For instance, an advanced large language model can scan thousands of pages of legal documents in minutes and identify relevant clauses or potential pitfalls. Tasks that once required a hundred-person department can be accomplished by a lean, tech-savvy startup that harnesses cloud-based AI tools.
This undermines one of the key institutional advantages—scale for the sake of scale. If an AI system can manage the lion’s share of the labor, then having hundreds or thousands of employees becomes less of a clear differentiator. In fact, a large staff may become a burden, as overhead costs balloon with no corresponding productivity benefit.
Slow Adoption and Innovation Cycles
Institutions are bound by policies, stakeholder expectations, and risk aversion. New technology goes through multiple rounds of vetting and budgeting approvals2. While those processes reduce impulsive decisions, they also extend the time between the appearance of a disruptive innovation and its internal implementation. If the global tech landscape changes rapidly—every 12 to 24 months, or even more frequently—an 18-month adoption timeline can mean falling irreversibly behind in crucial AI capabilities.
In contrast, smaller organizations and startups thrive by moving quickly. They have flexible structures, shorter decision-making cycles, and fewer layers of approval. They can pivot swiftly, adopt the latest AI frameworks, integrate them into their products, and iterate as the technology evolves. Meanwhile, many large institutions remain ensnared in bureaucratic friction. By the time they gain approval to implement an advanced AI solution, that solution is probably outdated.
Decentralization and Disintermediation
Decentralization refers to a system’s capacity to distribute authority and processes across multiple nodes rather than vesting them in a central body. AI often pairs with other decentralizing technologies—like blockchain and distributed computing—to enable new forms of organization. These organizations encode decision-making in algorithms rather than in hierarchical structures. Decentralized finance (DeFi), for instance, can bypass conventional banks and investment firms3, offering peer-to-peer lending or automated market making. These developments threaten institutions whose core role is to act as intermediaries, offering trust and oversight in return for fees and compliance burdens.
Further, institutions which rely on controlling distribution or supply chains will similarly be disintermediated. AI-based platforms can connect producers and consumers to each other directly by optimizing routes, matching supply and demand in real-time, and automating tedious logistical tasks. This means that traditional middlemen will lose their foothold.
Transparency and Data-Driven Decision Making
Another hallmark of bureaucratic institutions is the capacity to manage information flows. Whether it’s a government ministry that keeps detailed records, a university that manages specialized research data, or a major corporation that uses proprietary analytics for competitive advantage, controlling information has historically conferred power. However, as AI matures, it lowers the barrier to advanced analytics. Data that was once too complex or voluminous for small players to parse is now tractable with off-the-shelf AI solutions.
Moreover, AI reveals inefficiencies within large institutions that were previously obscured by organizational complexity. Stakeholders and the public may demand more transparency once they see how data-based insights can unmask hidden inefficiencies or suspect practices. A large public bureaucracy, for instance, might face increasing pressure to justify its staffing levels or elaborate processes when advanced analytics suggests alternative models could deliver the same services more effectively.
Contrarian View: Why Institutions Might Survive or Thrive
While the above points illustrate strong reasons why AI threatens institutionalists, the story isn’t so straightforward. Institutions still command significant resources and can weaponize AI in ways that preserve or even enhance their power.
Massive Capital Reserves
Large organizations have vast capital reserves4. They can deploy these to acquire AI startups, build in-house AI labs, or partner with top universities and research centers. Some of the world’s leading AI research occurs inside large tech companies (Google, Microsoft, Meta, Amazon, etc.5). Their institutional structures haven’t collapsed. Instead, they’ve adapted to harness AI, and have retained their dominance.
This points to an important distinction between legacy institutions that fail to innovate and institutions that successfully transform. If a large corporation invests heavily in AI research, creates agile internal structures, and fosters a culture of experimentation, it can harness the best of both worlds—capital scale plus cutting-edge agility.
Regulation and Policy
Governments and large corporations frequently enjoy direct lines of communication with regulatory bodies. If AI becomes disruptive in ways that create systemic risks—privacy violations, biased algorithms in lending decisions, or even national security concerns—regulators could place strict requirements on AI deployment. Compliance costs would then soar, favoring deep-pocketed incumbent institutions that can afford the overhead. In effect, regulation could be used to raise the barrier to entry. This would discourage smaller players who lack the resources to meet extensive compliance mandates.
Moreover, governments themselves are giant institutions. If they manage to roll out AI responsibly, they could bolster the effectiveness of public services—healthcare, law enforcement, infrastructure management—and maintain strong public trust.6 This synergy between AI deployment and regulatory frameworks could reinforce the power of institutions, rather than erode it.
Network Effects and Market Lock-In
Many institutions, such as those operating in finance, social media, and software, benefit from network effects. Users are reluctant to leave entrenched platforms because their social circles or critical data are already there. For example, large tech platforms such as Amazon or Google have built extensive ecosystems that users find indispensable. Even if an agile startup develops a new AI-based service, it may struggle to lure enough users away from the major platform to achieve a self-sustaining network.
This dynamic exists in big banks as well. Although financial technology startups have introduced AI-based tools for everything from lending to investment management, the brand trust and regulatory compliance of well-established banks still exert a strong pull on many customers. For all these reasons, one can argue that powerful AI might help large institutions maintain or expand their competitive moat—particularly if they adopt AI more aggressively than smaller competitors anticipate.7
Institutional Adaptation Strategies
Institutions that want to survive in our AI future must shift from a risk-averse, hierarchical culture to one of continuous learning and rapid experimentation. That this will be challenging for many institutions and the people who staff them is an understatement. Below are some strategies that can mitigate the risk of decline.
Dedicated AI Labs and Partnerships
Institutions should establish special AI labs or innovation centers which are deliberately insulated from the bureaucratic inertia of the main organization8. These labs can move quickly to prototype new tools and solutions. The lab’s successes can be scaled across the broader organization in measured steps. (It’s worth mentioning here that most institutions won’t be able to do this, simply because they don’t have the skillset or remit to do so.)
Institutions can also accelerate innovation by partnering with large tech companies and AI startups. This frees them from having to bear the cost of in-house development. Through joint research programs, institutions learn about cutting-edge techniques, while researchers benefit from the institution’s resources and data.
Cultural Overhaul and Decentralized Structures
Hierarchical, top-down decision making is too slow for AI-driven markets. Institutions must empower cross-functional teams, flatten organizational charts, and adopt agile methods. This means giving frontline employees the autonomy to experiment with new AI tools, gather customer feedback, and iterate without waiting on multiple layers of approvals. Many institutions will find themselves incapable of doing this.
Companies like Google or Amazon, while large, have long championed this model. Their teams often operate with the autonomy to fail fast and test new features. Such cultural shifts demand leadership buy-in and will challenge institutions with decades of tradition.
Data as a Strategic Asset
Most large institutions possess a wealth of data that smaller competitors lack. For instance, a major bank might have customer transaction data spanning decades, or a large telecom company might have traffic patterns from millions of devices. If integrated properly, these datasets can power advanced AI models that offer deep insights.
However, the data alone is not enough. Institutions must also invest in robust data infrastructure. They have to ensure data quality, security, and compliance. They have to hire staff who specialize in data engineering, analytics, and machine learning operations. This data can become a moat, enabling predictive models or personalized user experiences that are hard for newcomers to replicate.
It goes without saying that government institutions, in particular, will struggle here, simply because they are unable to hire the relevant talent. Private institutions, especially tech companies, have the budget to hire this talent, which means that the tech companies might further entrench their positions.
Regulatory Influence
Institutionalists can work closely with policymakers to craft regulations that address valid concerns around bias, privacy, or consumer protection. If done responsibly, these rules might provide a broad societal benefit—ensuring AI is trustworthy and accountable—while also playing to the strengths of established institutions.
That said, there is a delicate ethical and political balance here. Overly burdensome regulations tend to inhibit genuine innovation and create artificial monopolies. Nonetheless, institutionalists have the lobbying power and relationships to shape these discussions in their favor. By anticipating regulatory changes, they can ensure they have the internal governance to meet the requirements before their smaller competitors.
Speculative Futures: Best-Case, Worst-Case, and a Likely Middle Ground
Best-Case Scenario
In a best-case scenario, AI becomes a universal productivity lever. Institutions—particularly those willing and able to transform—adopt AI to streamline processes, reduce overhead, and provide more innovative solutions to customers. This fosters a symbiotic ecosystem where large players and nimble startups collaborate, each contributing different strengths. Government agencies, revitalized by AI, deliver public services more effectively, bridging social gaps. In this narrative, institutions do not fade away but adapt and even thrive, leveraging their resources to create beneficial AI breakthroughs that diffuse throughout society.
Worst-Case Scenario
A darker scenario envisions institutions crippled by internal inertia, failing to adopt AI quickly or effectively. They hemorrhage talent to startups and tech giants that use AI more creatively. Over time, entire industry segments become dominated by emergent platforms that displace conventional middlemen, banks, publishers, and even universities. Bureaucracies unravel as their roles become obsolete, or are outsourced to more advanced, decentralized networks. Social chaos might ensue if unemployment spikes in sectors reliant on institutional structures. Without strong leadership or adaptive capacity, institutions fade from relevance, replaced by a patchwork of AI-driven entities.
A Likely Middle Ground
Reality is complex, and the future likely holds a mixed picture. Certain sectors—especially those that can be digitized quickly—may see smaller disruptors outcompete legacy institutions. In contrast, heavily regulated or capital-intensive industries might see larger players remain or grow in dominance, aided by their political connections, cash reserves, and data repositories. The institutional landscape will fragment into winners and losers based on how quickly and effectively each organization aligns itself with the AI revolution.
Within governments, policy frameworks could swing between pro-innovation and pro-regulation stances. Some jurisdictions might adopt progressive AI policies, attracting startups and research centers, while others maintain the status quo. Over time, we may see competition between jurisdictions, each seeking to balance innovation with social responsibility.
Conclusion
The notion, offered at the beginning of this essay, that the more powerful AI becomes, the poorer most institutionalists will fare is grounded in real concerns about inertia, risk aversion, and slow decision-making. AI’s potential to automate complex tasks, democratize expertise, and disintermediate entire industries poses a legitimate challenge to large, bureaucratic structures. Nevertheless, this challenge is by no means an automatic death sentence for institutions. Institutions that leverage their strengths—capital, massive data, regulatory insight, and existing brand trust—can effectively adapt and might even enhance their power.
Thus, we can anticipate a distribution of outcomes. Some institutions will languish, undone by an inability to embrace technological transformation. Others will thrive by redesigning their organizational structures, investing heavily in AI, and rewriting the rules of engagement for their sectors. Policymakers and society at large will be forced to navigate the pros and cons of newly empowered entities, whether they be institutional holdovers or emerging startups. Ultimately, AI offers not a guaranteed path to institutional demise but an invitation to rethink how organizations function, how they deliver value, and how they sustain their relevancy in a rapidly evolving technological era.
This observation, and the essay which follows from it, was inspired, in part, by this tweet.
Even outside of AI-specific domains, there is a discontinuity between technology development cycles and government approvals processes. The result is that the technology which government staff are given to do their jobs is often outdated by consumer standards. A good book about this is Jennifer Pahlka’s Recoding America. Her Substack newsletter, Eating Policy, is also very good.
This statement of course elides a whole bunch of regulatory concerns, which are beyond the scope of this essay. Nonetheless, one can conceptually see a future in which algorithmically generated decentralized finance functions are widely adopted by more forward-thinking organizations, while hidebound and sclerotic organizations stick to traditional financial systems.
For businesses, these capital reserves exist on their balance sheet and in their access to lines of credit, debentures, etc. For governments, these capital reserves exist in their sovereign authority to create more money.
That I did not mention Apple in this sequence is deliberate, though Apple’s problems with AI are beyond the scope of this essay.
One need not be especially cynical to doubt that this will come to pass. Government, it may reasonably be said, is not known for its ability to deploy cutting-edge technology well. (See the second footnote for more information about this.)
It is common wisdom among venture capitalists and the entrepreneurs they fund that incumbents are always and everywhere at risk of being disrupted by plucky upstarts. And, while this disruption sometimes occurs, it’s also true that venture capital is shot through with survivorship bias. Venture capitalists tend to remember the successes (Google killed newspapers’ classified ads business), and ignore the many failures (no fintech has dislodged the incumbent banks).
This is somewhat similar, conceptually, to the famous Skunk Works at Lockheed Martin.