The future of artificial intelligence will not unfold in a political vacuum. This essay explores one possible trajectory: what the world might look like if J.D. Vance wins the presidency in 2028. That outcome is far from guaranteed, but as of May 2025, the 2028 election is his to lose.
What follows should be read as high-variance speculation: not prophecy, but projection. Should the 2026 midterms favor Democrats, or Vance falter before the finish line, the power dynamics described here may take another form. But the underlying shift—the convergence of AI and state power—is already in motion. This essay simply traces one plausible path to its logical conclusion.
Introduction
Two forces are reshaping the future of artificial intelligence: the exponential scaling of AI infrastructure, and the political realignment of America around Vance-style industrial populism. These forces are converging on a single outcome: the power center of American innovation is shifting away from venture capital and Silicon Valley and toward Washington, DC and the infrastructural state.
The venture-backed software era was optimized for low-capex, high-velocity consumer tools: apps, APIs, SaaS startups, and ad-driven engagement loops. It rewarded cleverness and iteration speed, not thermodynamic throughput. But AI at the frontier is not that. Training a frontier model in 2025 is a capital-intensive, resource-constrained, logistically complex operation that looks more like oil & gas or aerospace than consumer software. It requires thousands of H100s or equivalent chips, megawatts of power, data centers near cooling and fiber, and months of coordination across training, inference, and deployment pipelines. This is not a problem space that venture capital conventionally understands. It is one that utilities, regulators, and sovereigns do.
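For a rough sense of the scale involved, here is a back-of-envelope sketch of the power draw of a training cluster of the kind described above. The GPU count and PUE figure are illustrative assumptions, not vendor or operator specifications; the ~700 W figure is the approximate rated board power of an H100 SXM part.

```python
# Back-of-envelope estimate of facility power for a frontier training cluster.
# All inputs are illustrative assumptions chosen for order-of-magnitude clarity.

gpu_count = 25_000      # assumed cluster size ("thousands of H100s or equivalent")
watts_per_gpu = 700     # H100 SXM boards are rated around 700 W
pue = 1.3               # assumed power usage effectiveness (cooling, networking, overhead)

it_power_mw = gpu_count * watts_per_gpu / 1e6   # GPU power alone, in megawatts
facility_power_mw = it_power_mw * pue           # total facility draw

print(f"GPU power:      {it_power_mw:.1f} MW")
print(f"Facility power: {facility_power_mw:.1f} MW")
```

Even this modest cluster lands in the tens of megawatts, roughly the load of a small town, which is why siting decisions hinge on substations, cooling, and fiber rather than office space.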
The scaling laws of modern AI are now colliding with physical reality. Compute capacity is no longer defined by code optimization or GPU pricing alone. It is bound by energy, land, chips, and cooling. Training throughput has become an energy allocation problem. Permits, grid integration, and political favor now determine who can train and who cannot. The bottlenecks are not in algorithms. They are in substations.
If this sounds like a structural shift in the innovation economy, that’s because it is. Power is migrating away from the application layer and toward the infrastructure layer. From San Francisco to the rural counties that host power-hungry data centers. From VCs to DoE procurement officers. From startups to the state.
The Vance Effect
A J.D. Vance presidency in 2028 would likely accelerate this migration. Whether you view that prospect with hope or dread, the logic is consistent. Vance-style industrial populism is predisposed to treat AI as a strategic asset. From that perspective, it is intolerable that America’s most powerful intelligence infrastructure is privately governed, ESG-aligned, and clustered in coastal urban centers. A nationalist administration will seek to assert state sovereignty over AI development, either explicitly or by exerting pressure through the energy and compute stack.
That means export controls on AI chips won’t just expand. They’ll harden into industrial policy. Foundation models will be treated like dual-use technologies, subject to licensing and pretraining audits. Permitting for data center expansion will be accelerated in red-state AI zones and slowed or blocked in jurisdictions deemed politically uncooperative. The DoE will partner with utilities to reserve power for national AI training needs. Small modular reactors (SMRs) will be fast-tracked near data center clusters. Inference access may eventually be gated by clearance, mission use, or geographic origin.
This isn’t speculation. The foundation is already visible. In a recent interview, an Anthropic researcher projected that by 2028, frontier AI could consume 20% of US electricity. This reflects the thermodynamic reality of large-scale training: marginal intelligence gains require vast amounts of energy. You can’t scale models by orders of magnitude beyond current frontier ones without major grid expansion. That makes energy a national security issue, and by extension, makes AI a utility to be regulated by the state.
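The 20% projection implies a striking continuous power requirement. A minimal arithmetic check, assuming total US generation of roughly 4,200 TWh per year (an approximate public figure; the 20% share is the quoted projection, not my estimate):

```python
# Translate "20% of US electricity" into an average continuous power draw.
# US annual generation (~4,200 TWh) is an approximate public figure;
# the 20% share is the projection quoted in the essay.

us_generation_twh = 4200   # approximate annual US electricity generation
ai_share = 0.20            # projected frontier-AI share by 2028
hours_per_year = 8760

ai_twh = us_generation_twh * ai_share                  # annual energy for AI
avg_power_gw = ai_twh * 1e12 / hours_per_year / 1e9    # average continuous draw, GW

print(f"AI energy:    {ai_twh:.0f} TWh/year")
print(f"Average draw: {avg_power_gw:.0f} GW")
```

On these assumptions, that is roughly 96 GW of continuous draw, on the order of a hundred large nuclear reactors running flat out. This is the sense in which grid expansion, not algorithmic progress, becomes the binding constraint.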
It also makes compute a geopolitical choke point. Nvidia is already subject to escalating export controls. Access to high-end chips is being rationed globally. Chip supply is no longer a tech procurement issue; it is a vector of statecraft. And this logic will cascade. The same will become true of inference rights, training corpora, and even algorithmic architectures. As foundation models become strategic infrastructure, their governance will shift accordingly, away from open experimentation and toward sovereign control.
Effects on Silicon Valley
This has downstream implications for the startup world. Most AI startups today are wrappers: thin interfaces over foundation models. The best ones offer workflow integration or UX innovations. But structurally, they are downstream from the source of power. As models become more generally capable, wrapper functionality will be absorbed by the model. Further, foundation model providers themselves will come under pressure. The state will not tolerate private actors controlling critical infrastructure indefinitely.
The venture capital model, already under stress, is mismatched to this world. Infrastructure is expensive. Compute is rationed. Power is allocated via political processes. The marginal cost of intelligence is rising. This is not a world VCs were built to fund. It is a world designed for defense contractors, sovereign wealth funds, and regulated utilities.
The emerging AI stack looks less like a web app and more like a wartime supply chain. The relevant players are shifting:
Power: Utilities, SMR developers, DoE grid planners
Compute: GPU cloud operators, sovereign compute pools
Land: Rural counties, energy-rich municipalities, real asset allocations
Models: National labs, defense primes, Palantir-tier operators
Access: DoD, DoJ, and regulatory authorities
This is not a reversion to the mean. It is a hard fork.
AI Safety Becomes AI Patriotism
And it will not stop at infrastructure. The discourse around AI “alignment” will itself be reabsorbed into a sovereign framework. In Silicon Valley, alignment meant safety: models should behave in accordance with human desires. Under a Vance-style regime, alignment will mean loyalty: models should behave patriotically. The key question becomes not whether the model is safe for humanity, but whether it is safe for the nation-state. This reframing opens the door to domestic corpora requirements, access restrictions based on citizenship or mission, and the embedding of narrative control mechanisms into the inference layer.
This isn’t speculative fiction. It’s the natural conclusion of reclassifying general-purpose intelligence as strategic infrastructure. Once that reclassification happens, the moral language of open-source experimentation or decentralized access ceases to matter. What matters is control. And in the sovereign AI era, control resides not with founders, but with regulators.
We are already seeing the emergence of a new class of kingmakers: infra-native asset allocators and D.C. bureaucrats. The former are buying the substations, water rights, and land under which this new intelligence substrate will run. The latter are determining which projects get greenlit and which get buried. The next five years of AI development will be shaped less by model drops and more by DoE budgets and permitting queues.
Conclusion
If you want to see where this is going, look away from Silicon Valley. Look at the counties that host redundant power, rail, and dark fiber. Look at who gets exemptions from energy rationing. Look at which agencies get model access, and under what terms. The new AI regime is not about disruption. It is about consolidation. It will not be venture-scaled. It will be state-aligned.
The center of technological power is shifting. From code to compute. From cloud APIs to power purchase agreements. From San Francisco to Washington, DC. The sovereign infrastructure era has begun. The future will not be built by the people who can raise seed rounds. It will be built by those who can navigate federal procurement, negotiate transmission permits, and control the gates of inference.
This is where artificial intelligence lives now.
Back in 2018, Tyler Cowen wrote a prophetic piece for Bloomberg about the clash between Silicon Valley and Washington, DC. It is worth re-reading now, as the prospect of advanced AI has come to the fore.