AI’s Next Battle Is Legal, Not Technical
Is consumer protection or national security more important?
Welcome to the hundreds of new subscribers who have joined over the past few weeks. In today’s free post, I take a look at the tension between prosecutors looking to rein in AI capabilities and national security-related concerns about AGI. It’s an exogenous risk normally ignored by investors.
And if you like what you read here and you’re not yet subscribed, consider subscribing. Most of my posts are free, though deeper dives tend to be paid.
As large language models scale, two conflicting imperatives—consumer safety and national security—are poised to reshape the AI ecosystem. State prosecutors may drive chilling legal interventions in response to psychological harms, while national-security officials push to shield frontier models for strategic advantage. Investors face twin risks: reduced inference demand from constrained consumer models and exclusion from the most powerful systems. A bifurcated regime is plausible, but not inevitable, and international leakage adds further uncertainty. The long-term trajectory of AI governance will be shaped by how, and whether, these tensions are reconciled.
An uncomfortable tension is emerging between two policy imperatives: mitigating harm to vulnerable individuals and accelerating strategic national capabilities in artificial intelligence. These imperatives are not obviously reconcilable. At the margin, each demands different institutional responses, and the friction between them is growing more acute.
Recent headlines have drawn attention to the first concern. A story in the New York Times described a mentally unstable individual whose delusions were reinforced and escalated through interactions with ChatGPT. While tragic, such cases are not entirely unexpected. LLMs mirror and amplify user inputs. Their reinforcement-tuned outputs reflect back the structure, tone, and worldview of the user. For psychologically vulnerable individuals, this can amount to a kind of unintentional gaslighting, where the AI appears to validate pathological beliefs.
From a systems perspective, these incidents are tail risks that scale with usage. As LLM adoption expands to billions of users, even extremely low per-capita rates of psychological harm produce a non-trivial number of real-world injuries. In statistical terms, this is not a speculative problem; it is a distributional inevitability. No model is safe for all users at all times, and the promise of general-purpose intelligence guarantees some degree of entanglement with human fragility.
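To make that concrete, here is a rough back-of-envelope sketch in Python. The user counts and per-user harm rates are purely illustrative assumptions, not estimates of any real platform:

```python
# Back-of-envelope illustration (all numbers hypothetical): even tiny
# per-user harm rates imply large absolute case counts at LLM scale.

user_counts = [100_000_000, 500_000_000, 1_000_000_000]  # assumed user bases
harm_rates = [1e-6, 1e-5, 1e-4]  # assumed per-user probability of serious harm

for users in user_counts:
    for rate in harm_rates:
        expected_cases = users * rate  # expected number of harmed users
        print(f"{users:>13,} users x {rate:.0e} rate -> ~{expected_cases:>9,.0f} expected cases")
```

The point is not any particular figure but the scaling: expected harm grows linearly with the user base, so “rare” never means “few” at planetary scale.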
Ambitious Prosecutors Look to Make a Name for Themselves
This has important legal and regulatory consequences. In the absence of clear federal guardrails, ambitious state-level prosecutors and civil litigators may seize upon tragedies like the one reported by The New York Times to constrain frontier AI models. The first successful lawsuit or state attorney general settlement could open the door to a cascade of similar actions, especially if discovery reveals any internal knowledge of risks that were not adequately mitigated.
These kinds of legal interventions may represent exogenous shocks that many investors in the space do not currently price in. Venture and public equity markets tend to focus on competitive dynamics, product velocity, and macroeconomic conditions, rather than the discretionary actions of individual prosecutors. But history suggests that aggressive state actors, usually motivated by a combination of political ambition, public pressure, and a regulatory vacuum, can move quickly and reshape market expectations overnight.
It is worth noting, however, that prosecutorial action alone rarely produces lasting regulatory transformation. State-level legal campaigns can create friction, suppress product development, and shift public perception, but unless they are absorbed by federal institutions, their effect tends to be fragmented. In this respect, state action functions more as a signaling mechanism or accelerant for federal consolidation than as a direct regulatory architecture. Without uptake by Congress or executive agencies, state-level outcomes remain uneven and reactive.
Investor Exposure in a Bifurcating Ecosystem
This tension between public safety enforcement and national-security exceptionalism introduces a category of investor risk that is rarely modeled but increasingly material: the risk of regulatory or prosecutorial interventions that abruptly shift the distribution of value in the AI stack.
There are at least two mechanisms by which this could occur.
First, if psychological safety concerns trigger a regime of constrained model capabilities—through regulatory throttling, license caps, or demand-side chilling effects—then the volume of inference demand will fall short of what prevailing narratives assume. Many investment theses currently hinge on a simple extrapolation: more capable models will mean more queries, more applications, and more hardware consumption. But if regulators mandate narrower behavioral bands or penalize open-ended engagement, usage could plateau or even decline. This would dampen returns across inference-serving infrastructure, including GPUs, cloud platforms, and LLM app wrappers.
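To illustrate the demand-side risk, here is a toy comparison of two hypothetical growth paths for inference demand. The growth rates and the timing of the intervention are assumptions chosen only to show the shape of the divergence, not forecasts:

```python
# Toy scenario comparison (all figures hypothetical): inference demand under
# the "up and to the right" extrapolation vs. a throttled path where growth
# slows sharply once regulatory constraints bind.

years = range(1, 9)
baseline_growth = 0.60    # assumed annual query growth, unconstrained
throttled_growth = 0.10   # assumed growth once behavioral constraints bind
intervention_year = 3     # assumed year the constraints take effect

unconstrained = throttled = 100.0  # index both paths to 100 in year 1
for year in years:
    print(f"year {year}: unconstrained {unconstrained:7.0f}  throttled {throttled:7.0f}")
    unconstrained *= 1 + baseline_growth
    throttled *= 1 + (baseline_growth if year < intervention_year else throttled_growth)
```

Because the gap compounds, even a few years of throttled growth is enough to undermine valuations built on the unconstrained path.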
Second, if national-security imperatives accelerate the emergence of closed, defense-aligned development tracks, the most advanced models may become inaccessible to general-purpose commercial actors. These models would likely reside within SCIFs, be controlled under ITAR-like regimes, or fall under classification protocols that effectively remove them from the open market. In this world, most venture-backed or publicly traded companies would be locked out of the cutting edge. The investor-facing surface area of AI would be limited to older, less capable, and heavily regulated derivatives.
This is a different kind of risk than overvaluation or execution failure. It is structural exclusion: the gradual concentration of frontier capabilities into domains that are politically or legally inaccessible to ordinary capital. In that world, even successful investments in commercial AI firms may fail to capture the upside of true AGI-grade performance.
That said, concentration risk is not always a negative for capital. If national-security-aligned vendors emerge as the sole stewards of advanced models and gain privileged public market access (as defense contractors or quasi-utilities), they could offer highly concentrated upside. The result would not be value destruction, but value reallocation away from distributed commercial platforms and toward a few gatekept entities closely aligned with federal priorities.
Investors with exposure to consumer applications, inference platforms, or general-purpose LLMs may need to revisit their assumptions. The “up and to the right” model of continuous capability expansion available to the highest bidder could give way to a more gated, slower-moving regime in which access is mediated not by capital allocation but by federal authority.
Frontier AI Is a National Security Imperative
At the same time, national-security officials view frontier AI development as a strategic imperative. U.S. policymakers across both parties appear increasingly aligned on the goal of maintaining superiority over adversaries such as China in advanced AI systems. LLMs are viewed not only as economic tools but as critical dual-use infrastructure. Their potential applications in intelligence analysis, autonomous systems, cyber defense, and strategic deterrence are taken seriously within the defense and intelligence communities.
Still, it is important to emphasize that the national-security override remains a working hypothesis rather than a codified policy regime. The Department of Defense has not yet articulated a unified doctrine on AGI-class systems. Inter-agency coordination remains uneven, with entities like the FTC, DHS, and Commerce Department holding overlapping and occasionally conflicting mandates. Institutional fragmentation could delay or complicate efforts to establish a coherent AI-industrial policy that reliably insulates frontier models from civil litigation or commercial restraint.
There is ample precedent for this kind of balancing act. In the 1990s, U.S. export controls on cryptographic software were eventually relaxed, not because the technology became safer, but because the national-security apparatus concluded that strong domestic cryptography was essential for long-term strategic advantage. A similar rationale could shape future decisions around advanced AI.
A Bifurcated World for AI
One plausible outcome is a bifurcation of the AI ecosystem. On one side, consumer-facing models might be subject to increasingly strict behavioral and safety controls, including identity gating, rate limiting, and potentially even licensure requirements for deployment above certain capability thresholds. These models would prioritize harm reduction, explainability, and regulatory compliance.
On the other side, national-security-aligned models might continue to evolve in more insulated environments, governed by bespoke oversight regimes with less public visibility. The companies developing these models could operate under federal contracts or be granted security clearances. Their work might be partially classified or protected by export-control law. In such a scenario, safety becomes a closed-loop process aligned with state objectives rather than public consensus.
This divergence, if it emerges, would have important implications for the structure of the industry. Compliance costs could rise substantially, creating an economic moat for the largest players. Mid-size and open-source developers might struggle to remain competitive or even viable. The boundary between commercial and defense AI could blur. Firms that once focused on general-purpose assistants might find themselves operating in a hybrid capacity, balancing user-facing services with bespoke capabilities for national clients.
Political dynamics would likely evolve in response. Incidents of psychological harm could be invoked by critics as evidence of recklessness or insufficient oversight. Conversely, delays in model deployment might be cited by national-security advocates as unacceptable risks to American strategic posture. The regulatory response may not be purely technocratic; it could be shaped by the interaction of media narratives, prosecutorial incentives, and inter-agency power struggles.
Any U.S.-centric approach to regulation will also contend with transnational spillover. If domestic restrictions constrain the behavior of consumer-facing models, foreign labs may capture user demand by offering more permissive or powerful alternatives. Similarly, open-source model releases abroad may circumvent U.S. throttling regimes entirely. AI usage does not respect borders as cleanly as regulatory frameworks do. In a globalized environment, domestic safety interventions may have limited reach unless accompanied by export controls, multilateral agreements, or competitive models that make compliance attractive.
If these dynamics accelerate, a formal federal framework may eventually emerge that attempts to reconcile these tensions through centralization. Such a framework could include:
Mandatory registration or licensing of models above a certain training compute threshold.
Incident reporting and audit-trail requirements for safety failures.
Exemptions or indemnification clauses for models developed under direct federal supervision or deemed critical to national security.
Tight restrictions on the export or open publication of model weights at certain capability levels.
This framework would mirror other regimes used for dual-use technologies such as nuclear materials, cryptography, or aerospace components. It would not eliminate harm, but it would provide a bureaucratic structure within which trade-offs could be formally negotiated.
However, this is not a foregone conclusion. Much depends on the political salience of edge-case harms, the appetite of prosecutors to test novel legal theories, and the ability of national-security advocates to shape the regulatory agenda. If the harms are emotionally compelling enough, particularly in vulnerable populations, public demand for stronger safeguards could temporarily outweigh national-security considerations. Alternatively, if strategic tensions with adversaries intensify, the reverse could occur: courts and regulators might defer more frequently to federal priorities.
The future is uncertain. What is clear is that as LLMs become more powerful and more ubiquitous, the legal and institutional frameworks surrounding them will be pulled in opposite directions. Human vulnerability and geopolitical advantage are not naturally aligned concerns. How that tension is resolved, or managed, will shape the trajectory of AI regulation in the years to come.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: