
Tedd Hadley:

> In AI 2027, OpenBrain (a fictional stand-in for OpenAI) operates 100 million H100-equivalent GPUs by 2027. At full load, that's 70 gigawatts of continuous power draw.

No, see https://ai-2027.com/research/compute-forecast

Not 100 million H100-equivalents: *global* compute is ~80M H100-equivalents, and OpenBrain uses about 18% of that, i.e. roughly 14 million H100-equivalents.

Not 70 GW: they project 5.4 GW for the leading US AI company by Dec 2027 (see the "Power Requirements" section).

For Nvidia R100/R200 (2027–2028) they expect 1.8x the efficiency of the H100 and 6x the speed. To match the speed of 14 million H100s, you'd need about 2.3 million R100/R200s, which comes to about 7.7 GW at the peak 3,300 W draw each. So I imagine the 5.4 GW figure is average, not max.
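The arithmetic, as a quick sanity check (the 6x speed multiplier and 3,300 W peak draw are the forecast's assumptions, not measured figures):

```python
# Sanity-check the GPU-count and peak-power arithmetic above.
h100_equivalents = 14e6       # OpenBrain fleet, in H100-equivalents (~18% of 80M global)
r100_speed_multiplier = 6     # forecast assumption: one R100/R200 ~ 6 H100s
r100_peak_watts = 3300        # forecast assumption: peak draw per R100/R200

r100_count = h100_equivalents / r100_speed_multiplier
peak_gw = r100_count * r100_peak_watts / 1e9

print(f"R100/R200 needed: {r100_count / 1e6:.1f} million")  # ~2.3 million
print(f"Peak power: {peak_gw:.1f} GW")                      # ~7.7 GW
```

The ~2 GW gap between this 7.7 GW peak and the forecast's 5.4 GW figure is what suggests the latter is an average, not a maximum.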

MicaiahC:

In the AMA I asked whether they counted cooling and infrastructure as part of their costs, and they confirmed they did: https://www.astralcodexten.com/p/ama-with-ai-futures-project-team/comment/112167356. So the main objection here doesn't even apply, especially combined with Tedd Hadley's comment that you were an order of magnitude off on the claimed power consumption.
