This essay serves as a companion piece to my recent essay speculating about what AI looks like if J.D. Vance wins the 2028 presidential election. It is necessarily speculative, but if AI follows the course of other dual-use technologies (and all indications suggest it will), then the speculation below is directionally correct.
The Entrepreneur as Sovereign Actor
The entrepreneur has long stood as Silicon Valley’s central archetype: a mythic figure of sovereign agency, scaling ideas into institutions, outmaneuvering incumbents, and bending markets through code and capital. This founder-centric worldview treated the state as either irrelevant or inert: something to navigate around, not engage with. Regulation was a cleanup job. The real power resided in venture term sheets and product-market fit.
But that world is dying. Slowly at first, then all at once.
Artificial intelligence, particularly at the frontier, is no longer just a transformative general-purpose technology. It has crossed a threshold into the domain of sovereign interest. And once a technology implicates sovereignty, the nature of power changes. Founders no longer operate in a neutral market sandbox. They are navigating a contested terrain in which innovation is subordinated to jurisdiction.
I. The Dual-Use Trigger
AI is a textbook dual-use technology: equally capable of composing music or composing malware, writing marketing copy or synthetic propaganda, automating workflows or weapon systems. The same models that power productivity tools can guide drone swarms, execute cyberattacks, or simulate biological threats. The same infrastructure that personalizes ads can also surveil populations and wage memetic warfare.
Once a dual-use technology reaches a sufficient capability threshold, the state does not merely pay attention. It asserts itself. This isn’t speculative. It is a historical pattern: nuclear physics, cryptography, satellites, and biotech all followed this arc. What begins in open research and commercial experimentation becomes classified, restricted, and ultimately absorbed into the architecture of state power.
AI has now crossed that line.
II. From Product to Infrastructure to Sovereignty
For most of its history, AI was treated as a product feature. A tool for incrementally improving software UX, enhancing ad targeting, automating customer support. But the emergence of foundation models and autonomous agent behaviors has changed that calculus. AI is no longer “just software.” It is infrastructure.
And once infrastructure becomes strategically essential, it becomes sovereign.
The U.S. does not allow strategic infrastructure to remain unregulated, borderless, or independently controlled. The internet, once imagined as a transnational commons, has fractured into national zones. Semiconductors have been absorbed into geoeconomic conflict. AI joins this lineage.
What was once a global playground of open research and rapid iteration is becoming a sovereign zone of classification, clearance, and national doctrine.
III. Silicon Valley's Cognitive Dissonance
This shift will cause acute cognitive dissonance in Silicon Valley. The core beliefs of the entrepreneurial class will be invalidated in real time:
Code is law
Permissionless innovation is sacred
Global scale trumps national borders
The state is a lagging indicator
These axioms will collapse.
The U.S. government, particularly through the Department of Commerce (BIS), the Department of Defense (DIU, DARPA), and the emerging National AI Initiative, is formalizing a doctrine that treats frontier AI as a strategic national asset. This might manifest through:
Model licensing requirements akin to weapons export controls
Export restrictions not just on chips, but on model weights
Classified research programs absorbing elite technical talent
Security clearance protocols for developers and researchers
Federal preemption over open-source deployment norms
The same researchers building today's SOTA models may find themselves behind a classified firewall tomorrow. VCs and founders raised in the era of blitzscaling and global SaaS deployment now face a new constraint: not market adoption, but sovereign permission.
IV. The Myth of the Sovereign Founder Collapses
The founder has long been treated as sovereign within their domain: dictating vision, architecting culture, choosing what to build and when to release. Governments were obstacles or customers, not adjudicators of legitimacy.
But in a world of AI sovereignty, the founder myth collapses.
The state does not negotiate with startups. It compels.
When a model can spoof identities, automate cyberattacks, generate biothreats, or run autonomous weapons platforms, deployment is no longer a business decision. It’s a national security decision. You don’t get to ship because you passed an internal test. You ship if, and only if, the state permits you to.
For some founders, this is incomprehensible. For others, enraging. But for all, it is unavoidable.
V. Beyond the Binary: New Hybrid Actors
Not every founder will go quietly. Some will resist. Others will realign. And a third category, the most important, will mutate.
A new class of hybrid actors is emerging: part startup, part contractor, part shadow state. These firms, including Palantir, Anduril, and OpenAI, are not simply selling to the state. They are becoming infrastructure of the state.
They will operate under different constraints: secure facilities, cleared engineers, bespoke models for defense doctrine. These aren’t aberrations. They are prototypes for the post-founder firm. Agile enough to build, sovereign enough to survive.
The lines are blurring. “Public-private partnership” no longer describes a handshake across domains. It describes a fused architecture of control and capability.
VI. Flashpoints of Friction
Expect escalating confrontations between Silicon Valley and the U.S. government across five key domains:
Model Licensing
Any model above a given threshold (measured in training compute, data scale, or demonstrated capabilities) may require government audit and licensing prior to release.
Founders will compare it to the FDA. The state will compare it to missile launch codes.
Export Controls on Model Weights
Just as advanced chips are restricted, the export of powerful models, or even fine-tuned derivatives, may become a criminal act.
Talent Clearance
Working on frontier AI may require security clearances. Expect a bifurcation: cleared vs. non-cleared talent. Entire research communities may be resegregated by secrecy.
Open Source Clampdown
Open weights deployment may be legally constrained. What was once celebrated as a norm of transparency may soon be recast as a threat vector.
State Absorption of Frontier Labs
National AI labs could absorb or supersede today's frontier players, mirroring the absorption of nuclear physics into Los Alamos. These labs won’t publish. They’ll execute doctrine.
VII. Strategic Realignment or Strategic Irrelevance
VCs and founders now face a strategic fork:
Realign: Pivot toward sovereign-aligned use cases, including defense, infrastructure, intelligence, and logistics. Accept security constraints. Seek contracts. Get cleared.
Resist: Cling to libertarian values and global deployment. Push open source. Preach decentralization. These actors will become outsiders to the frontier.
Route around: Relocate to “sovereign-lite” jurisdictions like the UAE or Singapore. But the U.S. will not remain passive. Legal action, talent restrictions, and asset seizure are all in play.
To assess viability under this new regime, founders should ask:
Can your model be abused by adversarial nation-states?
Would your current stack survive a classified compute requirement?
Do you have a path to operate inside the sovereign perimeter, or are you building outside the walls?
Those who cannot answer these questions will find their future foreclosed, not by competitors, but by compliance officers and executive orders.
VIII. The Open Source Collapse and the Global Implications
The greatest casualty of AI sovereignty may be the open-source ecosystem.
What was once a vibrant, global commons may now fracture under national mandates. Projects like Meta’s LLaMA or EleutherAI may be reclassified as risky exports. Publishing model weights may trigger legal or even criminal sanctions. And if frontier research is locked behind classified firewalls, the epistemic gap between cleared and non-cleared researchers will widen into a chasm.
Access to frontier capabilities may become a matter of geopolitical alignment, not merit. A world bifurcated into “sovereign insiders” and “civilian outsiders” is one where open innovation becomes a memory.
IX. The Return of Leviathan
All of this points to a deeper truth: the return of Leviathan. Not in Hobbes’ social contract sense, but as the reemergence of the state as the ultimate projector of force, orchestrator of infrastructure, and adjudicator of technological destiny.
AI reorganizes power. And when power is at stake, sovereignty reasserts itself.
The next phase of AI will not be shaped by GitHub commits or demo day pitches. It will be shaped by:
Executive orders
Defense procurement budgets
Classified capabilities
Multilateral treaties
The sovereign stack is being built. Founders must decide whether they will integrate, or be discarded.
X. Conclusion: The Fork Ahead
The myth of the sovereign founder is dead.
AI no longer belongs to the tinkerer in a garage, the blitzscaler, or the crypto-libertarian with a manifesto. It belongs to the sovereign, because it must. Its power is too great, its implications too wide-ranging, its risks too systemic.
This does not mean the end of innovation. But it does mean that innovation will now occur inside the perimeter of jurisdictional control.
Welcome to the age of post-founder AI.
The frontier still exists, but only for those who are invited inside.