When AI Speed Meets Institutional Lethargy
The United States Supreme Court took ten months to tell us what everyone already knew about tariffs. That delay is a feature of every institution you depend on, and AI is about to break all of them.
This past Thursday, the United States Supreme Court ruled 6-3 that the International Emergency Economic Powers Act (IEEPA) doesn’t authorize the president to impose tariffs. Chief Justice John Roberts, writing for the majority, noted that the administration’s entire legal theory rested on two words, “regulate” and “importation,” separated by 16 others in the statute. “Those words,” he wrote, “cannot bear such weight.”
This was not a hard question. Any competent constitutional law professor could have told you the answer in April 2025, when the first challenges were filed. IEEPA contains no reference to tariffs or duties. No president had ever used it to impose them. The statutory text is clear. What took ten months was procedural choreography. Standing. Ripeness. Emergency docket management. Briefing schedules. The Court managing the optics and timing of its confrontation with the executive branch.
And then Trump routed around the whole thing in hours. A new 10% global tariff under the Trade Act of 1974 on Friday afternoon. Ten months of institutional deliberation, neutralized before the weekend was over.¹
The lesson here isn’t about tariffs. It’s about what happens when the speed at which the world operates diverges from the speed at which the institutions governing it can respond. That divergence has been growing for years. AI is about to blow it wide open.
Slow by Design
What most people miss when they complain about institutional slowness is that the slowness is load-bearing. It’s institutional architecture.
Courts take months to issue major rulings because rapid-fire decisions on existential constitutional questions would undermine the perceived legitimacy and finality of the outcome. Legislatures move at a glacial pace because laws are supposed to reflect broad deliberation and input. Regulatory agencies run extended notice-and-comment periods because affected parties deserve time to respond. These cadences exist for a reason. Deliberation builds consensus. Consensus builds legitimacy. Legitimacy is what makes a ruling something people actually comply with rather than ignore.
The problem is that this architecture was designed for a world where the entities being governed also operated at human cognitive speed. The factory, the bank, the trading floor of 1977–when IEEPA was written–moved at a pace the institutional system could track. Regulators could observe, deliberate, and respond before the next major shift occurred. The feedback loop was slow, but it was roughly synchronized with the thing it was trying to govern.
That synchronization is breaking. AI accelerates analysis, decision-making, and execution for private actors while doing essentially nothing to speed up the institutional processes that constrain them. The regulated entity and the regulator now inhabit different temporal realities. And the gap is widening monotonically.
Where the Shear Forces Hit
This pattern is about to be repeated across every major institutional domain. Here’s what to watch.
Regulatory agencies. The FDA, SEC, and EPA operate on review cycles measured in months or years. Notice-and-comment rulemaking assumes public participation on a timescale of weeks. These cadences made sense when the industries being regulated also moved at human speed. They don’t make sense when the regulated entity can generate, analyze, and act on information in seconds. We’ve already seen this play out in crypto, where regulators were still debating taxonomies while entire market cycles came and went. AI accelerates this mismatch across every regulated industry. Agencies always regulate for the last war.
Legislatures. Congress’s power to set tariffs is constitutionally foundational. But Congress operates on a legislative cycle that can’t keep pace with executive action, let alone with technological and economic change. IEEPA was written for a world that no longer exists. Updating it requires the same glacial legislative process that produced it: committee hearings, markup, floor votes, conference, presidential signature. Meanwhile, AI accelerates the need for legislative updates by generating new technologies, market structures, and threat vectors faster than the legislative calendar can absorb them. The result is predictable: the executive branch fills the vacuum with executive orders and emergency powers, creating the exact constitutional tensions the tariff case illustrated. The Court spent ten months resolving a tension that the legislature’s slowness created in the first place.
Credentialing and education. Medical schools, law schools, and engineering programs are 3-7 year pipelines designed to produce professionals whose knowledge is current at graduation. This model assumes a knowledge half-life measured in decades. If that half-life compresses to a year or two, which in many technical domains it already has, the entire credentialing apparatus breaks. Schools certify people on a body of knowledge that’s obsolete before they finish the program. And the accreditation bodies that govern curriculum changes are themselves institutionally governed, taking years to approve updates that respond to shifts that happened in months. The credential decouples from competence, and everyone keeps pretending it hasn’t.
The judiciary. Beyond the Supreme Court, consider the entire litigation system. Discovery in complex commercial cases takes years. Patent infringement trials routinely litigate technology that is three generations obsolete by the time judgment is entered. The remedy the court eventually orders may be economically meaningless because the market has moved on entirely. The system produces justice on a timeline that is increasingly decoupled from the timeline on which the relevant events actually matter. Sophisticated actors are already responding: arbitration, private ordering, and eventually smart contracts absorb the dispute resolution functions that courts are too slow to perform.
Corporate governance. Board meetings are quarterly. Strategic planning cycles are annual. Due diligence on major transactions takes months. These cadences assume the competitive landscape shifts slowly enough that periodic human deliberation can track it. When a competitor can identify an opportunity, analyze it, and execute in days, the quarterly board review is a postmortem. Companies that maintain legacy governance cadences will lose, systematically, to those that build AI-augmented continuous decision architectures.
The Lawyer Problem
The legal profession is worth examining in detail, not because it’s unique, but because it’s the clearest case study in why institutions can’t self-reform at the pace required.
Legal training is fundamentally backward-looking. The entire epistemology–case law, precedent, statutory interpretation–is built on the premise that the answer to today’s question lives in yesterday’s text. This is powerful for maintaining consistency in a slow-moving system. It is catastrophically ill-suited to designing a new one. When you ask a lawyer “what should the new institutional architecture look like,” the trained reflex is to search for an analog to existing doctrine. That’s not design thinking. It’s pattern-matching on a dataset that doesn’t contain the answer.
The profession’s core marketable skill–rhetorical sophistication, textual interpretation, argument construction–is also, as it happens, the skill most immediately automatable by large language models. Text interpretation and precedent application is pattern matching on language, which is precisely what AI does best. Meanwhile, the skill lawyers overwhelmingly lack–modeling the systems their rules are supposed to govern, reasoning from first principles about incentive structures and feedback loops–is the skill that remains valuable and hard to automate.
But the deepest problem is structural entrenchment. Lawyers operate within institutions, but they also control them. Bar associations set licensing requirements. Lawyers dominate legislatures. Former lawyers sit on the bench. The people least equipped to design the transition from human-speed to AI-speed institutional governance are the ones with the most authority over whether that transition happens. The median BigLaw partner’s mental model of AI is “maybe it’ll help my associates draft memos faster,” not “the entire value proposition of what I do is about to be repriced from $1,500 an hour to approximately zero.”
This pattern–the governing class’s cognitive toolkit being fundamentally mismatched to the redesign task–isn’t limited to law. Regulators are trained in compliance, not mechanism design. Legislators are trained in deal-making, not systems architecture. Board directors are trained in periodic oversight, not continuous adaptive governance. Across every institutional domain, the people running the system were selected for skills optimized for the old speed. They are not going to be the ones who figure out what replaces it.
The Investment Implication
For those of us trying to position capital and analysis ahead of what’s coming, the framing is simple: every institution that can’t keep pace with AI-accelerated private action creates a gap between the de jure rules it enforces and the de facto reality of how markets and technologies operate. Those gaps are arbitrageable, until they become unsustainable and force correction.
This is the GPU infrastructure story I’ve been telling in this newsletter, generalized to the entire economy. The absence of forward curves in compute markets, the opacity of deal structures, the circular financing where Nvidia invests in customers who then buy Nvidia GPUs: all of this persists because the financial and regulatory institutions that would normally impose transparency and price discovery haven’t caught up to the asset class. Institutional lag creates market structure inefficiency. That’s been my entire beat, and it turns out the pattern scales.
Where should you look for the next discontinuities? Wherever regulatory or institutional assumptions embed human-speed timelines that AI is about to compress. Drug development timelines running into FDA review cadences that haven’t changed in decades. Financial product innovation outpacing SEC approval processes. Energy infrastructure permitting that takes years for technologies that can be deployed in months. In each case, the pattern is the same: the binding constraint on outcomes shifts from technological capability to institutional bandwidth, and the returns accrue to those who see the bottleneck before it breaks.
The question at issue here is whether the institutional infrastructure that governs economic life can adapt at anything approaching the pace of the technology it’s trying to govern. The tariff ruling gave us the answer in miniature: ten months of deliberation, routed around in an afternoon. Now multiply that pattern across every institution that binds and grinds, and ask yourself what you’re pricing in for the transition.
¹ It is worth noting that Trump’s latest maneuver is itself being subjected to legal challenge, so this is not likely the final word in this debate. In any event, the question of whether Trump will be able to enact any of his tariffs without challenge is beyond the scope of this Substack post.
