Cisco unveils 102.4T Silicon One G300 switch chip
Briefly

According to Cisco fellow and SVP Rakesh Chopra, what really sets the G300 apart from the competition is its collective networking engine, which features a fully shared packet buffer and a path-based load balancer to mitigate congestion, improve link utilization and latency, and reduce time to completion. "There's no sort of segmentation of packet buffers, allowing packets to come in [and] be absorbed irrespective of the port. That means that you can ride through bursts better in AI workflows or front-end workloads," he said.
The load-balancing agent "monitors the flows coming through the G300. It monitors congestion points and it communicates with all the other G300s in the network and builds sort of a global collective map of what is happening across the entire AI cluster," he added. This kind of congestion management isn't new by any means; both Broadcom and Nvidia have implemented similar technologies in their own switches and NICs for the same reason.
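The idea described here can be reduced to a simple rule: pick the egress path with the lowest congestion score in a cluster-wide map. The following is a minimal toy sketch of that rule; all names and data structures are illustrative and are not Cisco's API or implementation.

```python
from typing import Dict, List

def pick_path(paths: List[str], congestion: Dict[str, float]) -> str:
    """Toy path-based load balancing: choose the candidate path with the
    lowest congestion score from a (hypothetical) shared global map.
    Paths absent from the map are treated as uncongested (score 0.0)."""
    return min(paths, key=lambda p: congestion.get(p, 0.0))

# Illustrative scores, as if aggregated from peer switches across the fabric.
global_map = {"spine-1": 0.8, "spine-2": 0.2, "spine-3": 0.5}
best = pick_path(["spine-1", "spine-2", "spine-3"], global_map)  # "spine-2"
```

A real implementation would update the map continuously from telemetry and rebalance flows in hardware; the sketch only shows the selection step.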
The Silicon One G300 is a 102.4 Tbps switch chip with 512 SerDes lanes running at 200 Gbps each, which can be aggregated into ports of up to 1.6 Tbps. Cisco says the chip's large radix lets a 128,000-GPU cluster be built with roughly 750 switches instead of 2,500. The same raw bandwidth figure appears in competing 102.4 Tbps silicon from Broadcom and Nvidia. The G300's collective networking engine uses a fully shared packet buffer and a path-based load balancer to mitigate congestion, improve link utilization and latency, and reduce time to completion. A load-balancing agent monitors flows, detects congestion points, communicates with other G300s, and builds a global collective map of cluster traffic. Cisco reports 33 percent better link utilization and up to 28 percent faster training times versus packet-spraying approaches.
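The headline numbers above are internally consistent, which a quick back-of-the-envelope check shows (the switch-count comparison depends on topology details the article doesn't give, so it is left out):

```python
# Sanity-check the quoted G300 figures: 512 SerDes lanes at 200 Gbps each.
SERDES_LANES = 512
LANE_GBPS = 200

total_tbps = SERDES_LANES * LANE_GBPS / 1000  # 102,400 Gbps = 102.4 Tbps
lanes_per_port = 1600 // LANE_GBPS            # 8 lanes bonded per 1.6 Tbps port
ports_at_max = SERDES_LANES // lanes_per_port # 64 ports at the maximum speed

print(total_tbps, lanes_per_port, ports_at_max)  # 102.4 8 64
```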
Read at The Register