Broadcom introduced the Jericho4 switch, designed to connect GPUs across multiple datacenters up to 100 kilometers apart for AI model training. Each Jericho4 offers 51.2 Tb/s of aggregate bandwidth and can be configured with up to eight 'hyper ports', and a full Jericho4 fabric can support 144,000 GPUs at 800 Gb/s each. The switch targets the power constraints of large-scale AI training workloads, and Broadcom positions it as the only solution for scaling a training cluster beyond a single datacenter, offering an alternative to today's single-site AI infrastructure architecture.
Broadcom's Jericho4 switch enables AI model developers to train on GPUs located across multiple datacenters, addressing power constraints in today's AI infrastructure.
Each Jericho4 can be configured with up to eight 'hyper ports', and a fabric of them can link 144,000 GPUs at an aggregate 115.2 petabits per second.
With 51.2 Tb/s of aggregate bandwidth, the Jericho4 is designed specifically for datacenter-to-datacenter interconnect, providing a new approach to AI training at scale.
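For readers who want to sanity-check the headline figures, here is a minimal back-of-the-envelope sketch. The assumptions are ours, not Broadcom's: one dedicated 800 Gb/s port per GPU, all ports at line rate, and no accounting for the share of each chip's bandwidth that faces the fabric rather than the GPUs.

```python
# Back-of-the-envelope check of the Jericho4 fabric-scale figures quoted above.
GPUS = 144_000        # GPUs the full fabric is said to support
PORT_GBPS = 800       # assumed per-GPU port speed, Gb/s
CHIP_TBPS = 51.2      # aggregate bandwidth of a single Jericho4, Tb/s

# 144,000 GPUs x 800 Gb/s = 115.2 Pb/s of aggregate fabric bandwidth.
aggregate_pbps = GPUS * PORT_GBPS / 1e6   # Gb/s -> Pb/s
print(f"Fabric aggregate: {aggregate_pbps:.1f} Pb/s")

# Idealized lower bound on the number of Jericho4 ASICs needed to source that
# bandwidth; real deployments need more, since part of each chip's capacity
# is spent on fabric-facing links rather than GPU-facing ports.
min_chips = GPUS * PORT_GBPS / (CHIP_TBPS * 1000)
print(f"Minimum Jericho4 chips (ideal): {min_chips:.0f}")
```

Under those idealized assumptions the arithmetic works out to 115.2 Pb/s and a floor of 2,250 chips, which is consistent with the per-chip and fabric-level numbers quoted in the article.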
Broadcom's Amir Sheffer states: 'If you're running a training cluster and you want to grow beyond the capacity of a single building, we're the only valid solution out there.'