3 Comments
SwainPDX:

You throw a lot of concepts and terms out there… and while I’m fascinated by the topic, I don’t always have definitions at my fingertips, and even when I do it can be hard to connect the dots.

E.g., it would be great to see an article that dives into the four bullet questions under the ‘New Playbook’ section. (I know what an interconnect is, but I’m not sure I follow what’s going on with that question, why it’s important, or even what ‘rare’ means in this case.)

But if you strictly write for peers, not for dilettantes like me, then I guess I’ll just have to muddle through.

Dave Friedman:

Appreciate this; really thoughtful comment. By “rare interconnect,” I mean high-performance, often hard-to-access networking technologies (like NVLink, PCIe, custom fabrics) that allow GPUs to communicate efficiently at scale. As AI workloads grow more latency-sensitive, these interconnects are becoming strategic bottlenecks, and most data centers weren’t built to handle them. These products are manufactured by companies including Nvidia, Broadcom, Intel, Marvell, AMD, etc.
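To make the bottleneck concrete, here is a back-of-envelope sketch of why the interconnect choice matters. The payload size and bandwidth figures below are illustrative round numbers I’ve chosen for the example (roughly PCIe 4.0 x16 vs. aggregate NVLink on a recent datacenter GPU), not vendor specifications, and the model ignores latency and protocol overhead:

```python
# Back-of-envelope: time to move one batch of gradients between GPUs
# over two different interconnects. All figures are illustrative
# assumptions, not vendor specs.

def transfer_time_s(payload_gb: float, bandwidth_gb_per_s: float) -> float:
    """Idealized transfer time: payload divided by bandwidth.

    Ignores link latency, protocol overhead, and contention.
    """
    return payload_gb / bandwidth_gb_per_s

PAYLOAD_GB = 10.0    # assumed gradient exchange for a large model
PCIE_GBPS = 32.0     # roughly PCIe 4.0 x16
NVLINK_GBPS = 600.0  # roughly aggregate NVLink bandwidth

pcie_t = transfer_time_s(PAYLOAD_GB, PCIE_GBPS)
nvlink_t = transfer_time_s(PAYLOAD_GB, NVLINK_GBPS)

print(f"PCIe:   {pcie_t * 1000:.1f} ms per exchange")
print(f"NVLink: {nvlink_t * 1000:.1f} ms per exchange")
print(f"Speedup: {pcie_t / nvlink_t:.0f}x")
```

Under these assumed numbers the NVLink path is roughly 19x faster per exchange, which is why a training job that synchronizes gradients thousands of times can be gated by the interconnect rather than by raw GPU compute.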

SwainPDX:

Makes sense. (OK, remember in my previous comment where I claimed I knew what an interconnect was? That might have been slightly exaggerated, LOL…)

I think your perspective on this topic is unique; appreciate your willingness to talk nuts and bolts. So I wouldn’t presume to ask you to dumb things down for the likes of me. Just count me as a reader who *wants* to gain a deeper understanding of the more technical aspects of AI economics…but who sometimes needs a helping hand along the way as I’m reading.

Keep up the good work!
