How Broadcom is quietly invading AI infrastructure
Briefly

While GPUs are at the forefront of AI infrastructure discussions, it is the interconnect fabrics that enable their effective use in large-scale model training. Broadcom has been developing technologies that support large interconnect architectures at every level, from die-to-die links to system-scale networks. Unlike Nvidia, Broadcom operates on a merchant silicon model, supplying chips to a range of customers who quietly build on its technology, including major players such as Google and Apple. This lets hyperscalers focus on innovation rather than integration complexity, marking a notable evolution in AI infrastructure.
Developing and integrating interconnects is no small feat. It's arguably the reason Nvidia is the powerhouse it is today.
With Broadcom's merchant silicon, hyperscalers can concentrate on developing differentiated logic rather than figuring out how to stitch everything together.
Broadcom deals in merchant silicon, selling its chips and intellectual property to anyone, which has enabled widespread adoption across major AI infrastructures.
For a cluster of 128,000 accelerators, you might need 5,000 switches just for the compute fabric.
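To see where a figure of that magnitude comes from, here is a rough back-of-envelope calculation. The article does not specify a topology or switch radix, so this sketch assumes a non-blocking two-tier leaf-spine Clos fabric and illustrative port counts (64 and 128 ports per switch); the resulting totals land in the same ballpark as the ~5,000 cited.

```python
def clos_switch_count(accelerators: int, radix: int) -> int:
    """Estimate switches for a non-blocking two-tier leaf-spine Clos fabric.

    Assumption: half of each leaf switch's ports face down (to accelerators)
    and half face up (to spine switches), giving full bisection bandwidth.
    """
    down_ports = radix // 2
    leaves = -(-accelerators // down_ports)      # ceiling division
    uplinks = leaves * down_ports                # one uplink per down port
    spines = -(-uplinks // radix)                # ceiling division
    return leaves + spines

# Illustrative radixes only; real fabrics vary in tiers and oversubscription.
print(clos_switch_count(128_000, 64))   # 4,000 leaves + 2,000 spines = 6000
print(clos_switch_count(128_000, 128))  # 2,000 leaves + 1,000 spines = 3000
```

Actual deployments add tiers, oversubscription, and separate storage or management fabrics, which is why real-world counts shift around these estimates.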
Read at The Register