No One Knows What the Future of AI Holds
Tyler Cowen and Dario Amodei offer very different views of the near-term future of AI in recent interviews
Tyler Cowen sees institutional friction. Dario Amodei sees an exponential about to land. The GPU financing layer is the scoreboard, and it’s sending mixed signals. They both think AI is transformative. That’s where the agreement ends.
In a recent conversation with Megan McArdle, Cowen laid out what I’d call the prosaic view: AI models will keep getting better and there’s no natural end to that process, but the binding constraints are institutional: slow-moving universities, imperfect rules designed for imperfect enforcement, the sheer frustration of getting these systems to work reliably in practice. The biggest risk isn’t the AI itself. It’s evil humans wielding it. His metaphor is instructive: managing AI is like training dogs or horses. You’re working with an alien intelligence. It takes time, patience, and a human who can be held accountable.
The same week, Dario Amodei sat down with Dwarkesh Patel and painted a radically different picture. He places himself at 50/50 odds that within one to two years we’ll have what he calls “a country of geniuses in a data center.” He’s 90% confident it happens within a decade. He points to Anthropic’s revenue trajectory ($0 to $1B in under two years, on pace for $9-10B in 2025) as evidence that the exponential hasn’t broken. He says the RL scaling laws look just like pre-training scaling laws looked three years ago. He finds it “absolutely wild” that people aren’t reacting to what he considers an obvious trajectory.
But neither of them addresses what I think is the most interesting question in this debate: what are the capital markets signaling? Because the GPU infrastructure financing layer—the asset-backed loans, the forward commitments, the debt markets—is making a real-time, real-money bet on which vision is correct. And the signals are contradictory in ways that should make everyone uncomfortable.
The Two Models
Cowen’s Institutional Friction Model
Cowen’s framework is built on diffusion economics. Technology doesn’t transform the world the moment it’s invented. It transforms the world when institutions figure out how to absorb it. Competitive dynamics prevent winner-take-all outcomes. Human judgment remains the bottleneck. You need people in the loop who can be held responsible. Universities can’t teach AI proficiency (or teach it proficiently) because their faculty don’t know how to use it themselves. Regulatory frameworks designed for imperfect enforcement break down under near-perfect AI surveillance. These frictions are real and persistent.
This is a gradualist view. The railroad gets built, but it takes decades to reshape the economy.
Amodei’s Exponential Landing Model
Amodei’s framework is built on scaling laws and revealed revenue. The exponential hasn’t broken: log-linear scaling continues in RL just as it did in pre-training. The OSWorld benchmark went from 15% to 65-70% in 15 months. Nobel Prize-level intellectual capabilities by late 2026 or early 2027. Complete end-to-end software engineering automation in one to two years. Anthropic added “a few billion” in revenue in January 2025 alone. He sees trillions in annual revenue before 2030 and finds it “hard to see otherwise.”
Crucially, Amodei argues that diffusion lags capability by one to two years, not one to two decades. Where Cowen’s institutional friction model predicts generational delays, Amodei expects adoption to follow capability after only a short lag.
This is a fundamentally discontinuous view. We’re near the end of an exponential, and the landing happens fast enough that most institutions won’t have time to adapt gracefully.
Both views have obvious weaknesses. Cowen’s institutional friction argument has been steamrolled before, as Blockbuster could tell you about Netflix’s diffusion timeline. But Amodei is transparently talking his book: if you’re raising capital at Anthropic’s scale, you need investors to believe the exponential continues.
What the GPU Markets Are Actually Pricing
The Two-Tier Signal
If you only looked at H100 spot prices, you’d think Cowen was right. The flagship chip of the last AI wave has collapsed from roughly $8/hour to under $3/hour, which is a 60%+ decline. Taken in isolation, this looks like cooling demand.
But that reading is completely wrong. H100 prices are falling because Blackwell is arriving. NVIDIA’s B200 delivers approximately 5x the inference throughput via FP4 Tensor Cores and packs 192GB of HBM3e versus H100’s 80GB. The newest RL scaling algorithms—the ones driving Amodei’s confidence—are optimized for Blackwell architecture. H100 isn’t getting cheaper because AI demand is weakening. It’s getting cheaper because it’s becoming last-gen hardware for frontier workloads.
The actual scarcity signal is at the frontier. Blackwell is sold out through mid-2026, with a backlog of 3.6 million units from hyperscalers alone. NVIDIA is extending H200 production as a bridge to fill the scarcity gap. This is a sign that demand is outrunning next-gen supply, not fading. B300 shipments have already begun, and Rubin architecture pre-orders are being taken for late 2026 or early 2027.
This is not a market that believes in institutional slowness. This is a market that believes if you don’t lock in next-generation compute now, you get left behind.
Follow the Capex, But Look Under the Hood
The hyperscaler capital expenditure numbers tell the same story at first glance. CreditSights initially projected ~$602B for the top five hyperscalers in 2026, already a 36% jump year-over-year. But after Q4 2025 earnings calls blew away expectations, CreditSights revised that estimate to ~$750B, implying 67% year-over-year growth, the third consecutive year above 60%.
CNBC reported this week that just the top four (Amazon, Alphabet, Meta, Microsoft) have guided to over $600B combined. Capital intensity has reached historically extraordinary levels. Meta at 54% of revenue, Microsoft at 47%, Oracle at 86%.
Headline numbers can be misleading, of course. RBC’s recent analysis found that roughly 45% of the dollar growth in 2026 cloud capex may be attributable to soaring memory prices, not additional hardware volume. When RBC strips memory costs out, capex growth drops from the headline ~67% to about 40%.
That’s still aggressive spending. But the distinction matters: the hyperscalers may be spending dramatically more while deploying proportionally less new compute capacity than the headline numbers suggest. The capex surge is partly real expansion and partly component-price inflation. DRAM stocks have been ripping for a reason.
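To see how much of the headline number is price rather than volume, here’s a back-of-the-envelope decomposition using the rough figures cited above (the ~$750B revised estimate, the 67% headline growth rate, and RBC’s ~45% memory-price share of dollar growth). All inputs are approximate and RBC’s actual methodology surely differs; this is a sketch of the arithmetic, not a reproduction of their analysis.

```python
# Rough decomposition of 2026 hyperscaler capex growth into real capacity
# expansion vs. memory-price inflation, using figures cited in the text.
headline_2026 = 750e9           # revised CreditSights estimate
headline_growth = 0.67          # implied year-over-year growth
memory_share = 0.45             # RBC: ~45% of dollar growth is memory prices

prior_year = headline_2026 / (1 + headline_growth)   # implied 2025 base
dollar_growth = headline_2026 - prior_year
memory_dollars = dollar_growth * memory_share
ex_memory_growth = (dollar_growth - memory_dollars) / prior_year

print(f"Implied 2025 base:      ${prior_year / 1e9:.0f}B")
print(f"Dollar growth:          ${dollar_growth / 1e9:.0f}B")
print(f"Memory-price component: ${memory_dollars / 1e9:.0f}B")
# Lands in the high 30s, close to RBC's ~40% ex-memory figure.
print(f"Ex-memory growth rate:  {ex_memory_growth:.0%}")
```

The point of the exercise: even stripped of component inflation, capacity growth near 40% a year is enormous; it just isn’t the 67% the headlines suggest.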
The Revenue Gap Is Still Real
Sequoia’s David Cahn framed the core tension in his widely cited “$600B question”: the infrastructure buildout far outpaces current AI services revenue. AI-related services generated roughly $25B in 2025 against hundreds of billions in infrastructure spend.
For Amodei’s timeline to work, AI revenue needs to go from roughly $25B to north of $2T by 2030. Even by the standards of the fastest technology adoption curves in history, that’s aggressive.
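Just how aggressive becomes clear if you compute the implied compound annual growth rate for that jump, taking the article’s rough endpoints ($25B in 2025, $2T in 2030) at face value:

```python
# Implied compound annual growth rate (CAGR) for AI services revenue to go
# from ~$25B in 2025 to ~$2T in 2030 -- the jump Amodei's timeline requires.
start, end, years = 25e9, 2e12, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 140% per year, every year, for five years
```

For comparison, the fastest hypergrowth software businesses in history sustained triple-digit growth for a year or two, not half a decade, and not from a multi-billion-dollar base.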
The Physical Constraint
And then there’s power. PJM Interconnection, operator of the largest US grid, warned of a potential 60-gigawatt power shortfall over the next decade driven by data center demand. PJM’s latest capacity auction failed to procure enough supply to meet its reliability target for the June 2027 delivery year. It projects peak demand will grow by 32 gigawatts from 2024 to 2030, with all but 2 gigawatts coming from data centers.
This is likely one reason Amodei hedges: even with unlimited capital, you physically cannot deploy the compute fast enough.
The Bond Market Isn’t Buying It, But It Might Not Have the Right Information
The most underappreciated signal in this entire debate is interest rates.
If Amodei is right—if we’re genuinely 1-2 years from a “country of geniuses in a data center” and on track for trillions in new annual revenue before 2030—the implications for economic growth, productivity, and inflation would be staggering. An event of that magnitude would be the single largest positive productivity shock in the history of industrial civilization. Forward interest rates should reflect that.
They don’t.
The 10-year Treasury is sitting in the 4-4.5% range. Consensus forecasts from Schwab, LPL, Transamerica, and RBC Wealth Management project GDP growth of 2-2.5% for 2026, core inflation around 2.5-3%, and a Fed funds rate settling around 3-3.25% by year-end. A Fed research note published this week attributes recent increases in far-forward rates to real risk premiums and fiscal deficit concerns, not AI-driven growth expectations.
This is a bond market pricing in a boring, normal economy. Not a world about to experience AGI.
The bond market is the deepest, most liquid, most informationally efficient market on the planet. Trillions of dollars trade daily. The participants include every sovereign wealth fund, central bank, pension fund, and macro hedge fund on earth. These are people whose entire job is to price in the future. And they are pricing in 2% GDP growth.
If you stopped here, you’d conclude that the most sophisticated investors in the world are voting decisively for Cowen’s view. Case closed.
But Amodei would make a different argument, and it’s one worth taking seriously.
The Information Asymmetry Problem
Part of Amodei’s message in the Dwarkesh interview is his genuine surprise that what is treated as common knowledge inside the frontier labs—the imminence of transformative AI—is barely being discussed outside of them. He’s expressing disbelief that the rest of the world hasn’t updated on what he considers obvious evidence.
There’s a weak version and a strong version of this claim. The weak version is that bond traders simply haven’t heard the argument. That’s clearly false. Dario is on Dwarkesh, Sam Altman is on every podcast, Jensen Huang is saying “insane demand” on earnings calls. The narrative has been broadcast widely enough.
The strong version is more interesting: bond traders may hear the words but lack the technical context to assess them. When Amodei says “RL scaling laws look just like pre-training scaling laws did three years ago,” a fixed-income portfolio manager at PIMCO doesn’t know what to do with that sentence. They can’t evaluate whether the claim is true, whether it implies what Amodei says it implies, or whether there are undisclosed obstacles. They see a CEO talking his book and discount accordingly, which is rational behavior given their information set, but potentially wrong.
The information that would move the bond market isn’t a podcast interview. It’s downstream evidence that macro traders can evaluate natively: labor market disruption from AI displacement, sharp acceleration in corporate earnings attributable to AI productivity, changes in the velocity of money, visible GDP effects. These are the signals fixed-income desks are trained to read. And none of them have arrived yet.
Amodei’s argument, in essence, is that by the time those signals arrive, it will be too late to position for them. The macro evidence lags the capability curve. And the people who can see the capability curve—the researchers inside the labs, the engineers running the scaling experiments—have no mechanism to transmit that information as a price signal to the people who price the economy.
This creates a structural problem. The people with the most relevant private information about AI capability trajectories (frontier lab researchers, GPU infrastructure operators, large-scale compute buyers) and the people who price the macroeconomy (bond traders, central banks, macro hedge funds) are essentially operating in separate information ecosystems. They don’t share a common language, and critically, they don’t share a common market.
The Missing Transmission Mechanism
Consider how price discovery works in every other major commodity market. Oil has WTI and Brent: forward curves that let producers, refiners, airlines, and macro traders all express views about future supply and demand with real money at stake. The prices convey information both to energy market participants and to the entire economy. When oil futures spike, bond traders adjust their inflation forecasts, equity analysts revise earnings, and central banks take notice. The forward curve is a translation layer between domain-specific knowledge and macroeconomic pricing.
Electricity has locational marginal pricing. Natural gas has Henry Hub. Interest rates have the entire Treasury curve. Foreign exchange has a 24-hour global spot and forward market. Compute has none of this.
There’s no standardized forward curve for GPU compute. No mechanism for the people who can evaluate whether RL scaling laws imply AGI in 24 months to express that view as a price. No way for that price signal to propagate to the macro markets that would need to reprice everything else.
If such a market existed and showed sustained contango—forward compute prices rising above spot, implying that participants with domain expertise expect scarcity to intensify—that would be a signal bond traders could read. It would be legible in a way that a Dwarkesh interview is not, because it would represent real capital at risk from people with real information.
If, conversely, the compute forward curve showed backwardation—near-term scarcity easing into forward glut—that would be Cowen’s view expressed in price form. The institutional friction model, rendered as a curve.
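The contango/backwardation distinction is mechanical enough to state in a few lines. The sketch below uses hypothetical $/GPU-hour quotes purely for illustration; no such standardized compute forward market exists today, which is exactly the article’s point.

```python
def curve_shape(spot: float, forwards: list[float]) -> str:
    """Classify a forward curve relative to spot.

    Contango: forwards above spot (participants expect scarcity to intensify).
    Backwardation: forwards below spot (participants expect scarcity to ease).
    """
    if all(f > spot for f in forwards):
        return "contango"
    if all(f < spot for f in forwards):
        return "backwardation"
    return "mixed"

# Hypothetical $/GPU-hour quotes for 6-, 12-, and 18-month delivery:
print(curve_shape(spot=3.00, forwards=[3.40, 3.80, 4.10]))  # contango
print(curve_shape(spot=3.00, forwards=[2.60, 2.20, 1.90]))  # backwardation
```

In this framing, a sustained contango curve would be Amodei’s thesis expressed in price, and a backwardated one would be Cowen’s.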
The absence of this market is an information bottleneck. The Cowen-Amodei debate may be unresolvable, and the bond market may remain uninformed about the most important economic variable of the next decade, until we build a price discovery mechanism that connects the people who know what the compute scaling curves look like to the people who price the economy.
The Arbitrage Between Says and Does
The most interesting signal in this entire debate isn’t what Cowen or Amodei say. It’s the gap between what participants in the market say and what others do.
What Amodei says: 50/50 odds of AGI-level capabilities in one to two years. 90% within a decade.
What the GPU market says: Blackwell sold out 18 months forward. Capex estimates revised from $600B to $750B, though some of that increase is memory-price inflation, not real capacity expansion. Debt investors getting nervous; Oracle’s 5-year CDS has more than tripled since September.
What the bond market does: price in 2% GDP growth; a normal economy; nothing to see here.
Two of the world’s most important markets are staring at the same phenomenon and reaching incompatible conclusions. The GPU market is pricing in fast capability progress. The bond market is pricing in a world where that progress doesn’t matter yet. Both cannot be right for very long.
I don’t know which of these worlds we’re heading into. I’m not sure anyone does, including Amodei, whose own hedging behavior suggests more uncertainty than his stated probabilities imply. What I do know is that the resolution will come from where real money meets real information: forward curves, credit spreads, and the shape of the yield curve. And right now, the most important commodity of the 21st century doesn’t have a forward curve at all.
What to Watch
For those of us tracking the capital flows rather than the conference talks, here’s what matters in the next 6-12 months:
The capex-to-revenue convergence. Sequoia’s $600B gap needs to start closing. Track quarterly acceleration in AI services revenue against the infrastructure spend. If the gap widens further even as capex gets revised up, the Cowen view gains strength.
The power bottleneck. PJM’s 60-gigawatt shortfall is not an abstraction. Watch which data center projects actually break ground versus which get delayed for grid interconnection. If power becomes the binding constraint Amodei can’t solve with capital, his timeline stretches regardless of capability progress.
Credit spreads on AI infrastructure debt. Oracle’s CDS tripling is an early warning. If spreads widen across the AI infrastructure complex, the debt markets are telling you they see timing risk.
The yield curve. If 10-year Treasuries start moving meaningfully above 5% with steepening driven by growth expectations rather than fiscal concerns, the bond market is starting to price in Amodei’s world.
Compute derivatives. The emergence of standardized, exchange-traded futures on GPU compute will eventually give us a forward curve, and potentially the transmission mechanism that connects AI capability knowledge to macroeconomic pricing. Backwardation means scarcity easing. Contango means scarcity intensifying. We’re not there yet. But when we are, it will settle arguments that podcasts never can.
The exponential may be real. The landing is uncertain. And the debate may remain unresolvable until the market infrastructure catches up to the technology it’s supposed to be pricing.
If you enjoy this newsletter, consider sharing it with a colleague.
I’m always happy to receive comments, questions, and pushback. If you want to connect with me directly, you can:
