The Convergence of AI and the State
It seems inevitable that the NatSec umbrella will envelop frontier AI
Welcome to the latest edition of Buy the Rumor; Sell the News. In today’s post, I take a look at the intersection of AI and the national security state. I’ve received a bunch of inbound on this topic, and, like any good market researcher, took that as a signal to investigate.
If you like what you read here, consider subscribing, and if you’re already subscribed, consider upgrading to a paid subscription. If you want to connect with me directly, my contact information is at the end of this post.
A funny thing happens on the road to transformative technology. First it’s a startup playground, full of missionaries, grifters, and hype cycles. Then, as it grows teeth, it enters a second phase, attracting regulators, lobbyists, and think tank white papers like flies to honey. But if it keeps going, if it truly alters the substrate of power, it doesn’t just become regulated by the state. It becomes part of the state.
That’s not the usual Silicon Valley story. Tech likes to imagine itself as a parallel society: a laboratory for independent experiments in commerce and culture, punctuated by an occasional slap on the wrist from D.C. It’s a nice fantasy. But the long arc of strategic technologies bends not toward libertarian utopias, but toward conscription.
I’ve been reading two papers that, when placed side by side, sketch this arc in unusually sharp relief. The first is by Leopold Aschenbrenner, a young but uncannily lucid AI strategist, titled Situational Awareness. It’s a kind of private intelligence briefing on the state of frontier AI, warning that the U.S. is sleepwalking into a superintelligence race with China, with catastrophic consequences if we bungle it. The second is a very different genre: a crisply lawyered white paper by Steptoe, a law firm that serves national-security and export-control clients, summarizing how the government is already extending its tentacles into AI via executive orders, export restrictions, and compliance regimes.
Individually, each is illuminating. Together, they chart a possible timeline: from today’s increasingly intricate regulatory scaffolding… to tomorrow’s direct government stewardship of superintelligence. Let’s unpack.
Where We Are Now: The Regulatory Squeeze
Steptoe’s document is, on the surface, a typical risk overview for boards and compliance officers. But hidden inside the bullet points and CFR citations is a revelation: the U.S. government is already treating AI, especially frontier foundation models, as a national security asset.
Export controls are tightening. GPUs above certain performance thresholds, the weights of large models, even training datasets are getting sucked into a regime originally designed for missile technology. The U.S., Japan, and the Netherlands are coordinating these restrictions multilaterally, with explicit reference to China.
Disclosure mandates are multiplying. EO 14110 and its implementing memos demand that companies building clusters above specific compute thresholds report details to the Department of Commerce. The same goes for IaaS providers, which must flag foreign customers spinning up suspicious amounts of compute. (I run the rough numbers below.)
Critical infrastructure rules are next. The Department of Homeland Security is rolling out pilot programs that could convert into hard regulations, treating advanced AI systems like pipelines or power grids. If you run a major model, you might be about to acquire the obligations of a utility.
And of course, there’s CFIUS, OFAC, AML, ICTS, Team Telecom. Each acronym is a vector for oversight. Together, they constitute the expanding bureaucracy of a new kind of dual-use tech regime.
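To make those disclosure triggers concrete, here is a minimal back-of-the-envelope sketch in Python. The 1e26-operation training-run figure and the 1e20 ops/sec cluster figure are the interim thresholds named in EO 14110’s technical conditions; the chip specs and utilization numbers below are rough assumptions for illustration, and the binding definitions live in Commerce’s implementing rules, so treat this as a sketch, not legal advice.

```python
# Illustrative only: a rough check against the interim reporting thresholds
# named in EO 14110 Sec. 4.2 (about 1e26 total training operations, and
# 1e20 ops/sec of theoretical peak for a networked cluster). The binding
# definitions live in Commerce's implementing rules; the constants and
# hardware figures here are assumptions for illustration.

TRAINING_OPS_THRESHOLD = 1e26         # total ops used to train a single model
CLUSTER_OPS_PER_SEC_THRESHOLD = 1e20  # theoretical peak for a networked cluster


def training_run_reportable(chips: int, peak_ops_per_chip: float,
                            utilization: float, seconds: float) -> bool:
    """Estimate total training compute and compare it to the EO threshold."""
    total_ops = chips * peak_ops_per_chip * utilization * seconds
    return total_ops > TRAINING_OPS_THRESHOLD


def cluster_reportable(chips: int, peak_ops_per_chip: float) -> bool:
    """Compare a cluster's theoretical peak throughput to the EO threshold."""
    return chips * peak_ops_per_chip > CLUSTER_OPS_PER_SEC_THRESHOLD


# Example: 25,000 H100-class chips (~1e15 ops/sec each, a rough FP16-class
# figure) training for 100 days at 50% utilization.
if __name__ == "__main__":
    print(training_run_reportable(25_000, 1e15, 0.5, 100 * 86_400))  # True (~1.1e26 ops)
    print(cluster_reportable(25_000, 1e15))                          # False (2.5e19 ops/sec)
```

Run the numbers and you can see why the net currently catches only the biggest players: a 25,000-GPU run barely clears the training threshold, and the cluster threshold implies something closer to 100,000 H100-class chips under one roof.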
None of this means the government is building models itself—yet. But it’s clear that Washington already sees AI through the same prism it used for cryptography, semiconductors, and nuclear enrichment: an asset that must be monitored, licensed, and potentially denied to adversaries.
Steptoe’s lawyers are admirably sober. They talk about “evolving regulatory frameworks,” encourage compliance hygiene, and reassure boards that these are manageable burdens, provided you get ahead of them.
That’s the polite story.
Where We Might Be Headed: The Manhattan Phase
Aschenbrenner’s document dispenses with the politeness. It’s written like a classified brief for a wartime cabinet, not a client memo. His core thesis: we’re barreling toward artificial general intelligence, by which he means systems with strategic capabilities far beyond today’s GPTs. These systems will be able to automate scientific R&D, accelerate bioengineering, execute complex multi-domain operations, and hack critical infrastructure at superhuman speeds.
He projects this could happen in the second half of the 2020s. If so, it’s a qualitatively different game. Whoever controls such systems controls the future of warfare, biotech, intelligence gathering: essentially the full lattice of power. A lead of even six months could be irreversible.
In that world, the incremental regulate-and-disclose approach breaks down. The U.S. government wouldn’t simply require model weight audits or impose higher cybersecurity standards. It would have no choice but to run the labs itself, or at minimum integrate them so tightly into a Manhattan-Project-style consortium that the distinction between public and private blurs to irrelevance.
Aschenbrenner forecasts a future where:
There’s a government-run AGI lab operating inside secure facilities, with Senate-confirmed leadership and deep intelligence-community embeds.
Frontier hardware is pooled among a handful of allies (think a NATO for GPUs), with rigorous auditing of model weights and training runs.
Private AI labs are effectively subcontractors to the national security apparatus. Commercial applications still flourish, but only after the strategic layer is locked down.
His argument is that it’s insane to allow random tech founders to hold the nuclear button, just because they happened to raise a big Series D. Even a friendly CEO is vulnerable to espionage, pressure, or simple human error. Superintelligence is too volatile to trust to market incentives alone.
The Synthesis: From Compliance to Command
Taken together, Steptoe and Aschenbrenner sketch a pipeline:
Today: The apparatus is forming. Export controls, critical infrastructure designations, compute disclosures: all of these create the ledgers, inventories, and monitoring capabilities that would be prerequisites for deeper nationalization later.
Near future: As capabilities climb, these frameworks will tighten. Voluntary guidelines convert to mandatory standards. More compute gets designated as critical infrastructure. Compliance burdens evolve into operational dependencies.
Post-AGI threshold: If Aschenbrenner’s timelines prove correct, the U.S. would pivot, quickly, from regulating to owning. The model weights, the best training runs, the key talent pipelines: all get folded into something that looks more like the Manhattan Project or the original Quebec Agreement, international but tightly controlled, with the Pentagon at the center.
A lot of founders, investors, and even mid-level policy people still think in “1990s internet” metaphors: sandbox experiments, startups outpacing regulators, open global trade in compute and data. But if AGI is even a fraction as powerful as these projections suggest, that model will end up as historically quaint as Standard Oil’s monopoly. The only open question is whether the U.S. orchestrates this transition smoothly, by co-opting and compensating private actors, or bungles it into a chaotic last-minute scramble.
What This Means for Operators and Allocators
If you’re in the trenches building, allocating capital, or structuring deals, some obvious (but under-discussed) implications follow:
Start treating your compliance stack as a strategic moat. Today it’s an annoying legal function. Soon, being able to prove cluster lineage, supply-chain security, and weight provenance might be the ticket to even operating at the frontier. Expect future licenses to be denied to anyone without an impeccable compliance history. (I sketch what weight provenance could look like below.)
Plan for partial nationalization scenarios. That might mean positioning your equity and IP structures so they can slot into a defense consortium, or so that your investors can be compensated in the event of a forced buy-in. If that sounds outlandish, go read the history of uranium mines in the 1940s.
Engage with standards bodies now. NIST and DHS guidelines may seem toothless today, but they’re the skeletal draft of future mandates. Being at the table means helping shape rules you’ll later have to live by.
Watch the geopolitics. The next phase will be multilateral. Chips from Taiwan, fabs in Arizona, rare earth policies from Australia, energy deals in the Gulf: all will start to look like moves in a directed AI supply chain game.
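On the compliance-moat point above: what would weight provenance even look like in practice? At minimum, a tamper-evident ledger tying checkpoint artifacts to the run and hardware that produced them. Here is a minimal sketch, assuming a directory of .safetensors shards; the manifest fields are hypothetical, not any regulator’s required schema.

```python
# A minimal sketch of "weight provenance": hash every checkpoint shard and
# record the result alongside training-run metadata, so you can later show
# which artifacts came out of which cluster. The manifest fields here are
# hypothetical, not any agency's required schema.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large shards never sit fully in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(checkpoint_dir: Path, cluster_id: str, run_id: str) -> dict:
    """Collect per-shard hashes plus the run metadata an auditor would ask for."""
    return {
        "run_id": run_id,
        "cluster_id": cluster_id,  # ties the weights back to specific hardware
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "shards": {
            p.name: sha256_file(p)
            for p in sorted(checkpoint_dir.glob("*.safetensors"))
        },
    }


if __name__ == "__main__":
    manifest = build_manifest(Path("checkpoints/"),
                              cluster_id="us-east-a100-01",
                              run_id="frontier-run-042")
    Path("provenance.json").write_text(json.dumps(manifest, indent=2))
```

The point is less the hashing than the habit: if an audit regime arrives on anything like the timelines above, the firms already keeping this kind of ledger are the ones that get licensed.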
The Bottom Line
We are living through a transition: from an era where AI was an ungoverned Wild West to one where it’s slowly being brought under the control of the modern national security state. That shift is already underway through export controls, critical infrastructure designations, and disclosure mandates. But if AGI is real, and arrives on the timelines that some of the smartest insiders fear, it could culminate in something far more direct: a government-led sprint to secure the technology, with private industry as a junior partner.
Investors and operators who understand this arc, and who position themselves accordingly, will be the ones still standing (and likely very wealthy) on the other side. Everyone else will be wondering how they failed to see the writing on the wall.
Coda
If you enjoy this newsletter, consider sharing it with a colleague.
Most posts are public. Some are paywalled.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can: