Silicon One is the engine under the hood of Cisco's AI story
"During Cisco Live in Amsterdam, Cisco today announced the new Silicon One G300 chip. Since the launch of Silicon One, this line of chips has played an increasingly important role in Cisco's story. With the new G300, it sets a new 102.4 Tbps standard for Silicon One and AI networks. Cisco is now fully committed to Silicon One. In the past, we sometimes wondered aloud what the launch of this line of proprietary network chips would actually bring. This was because the company seemed to have little focus on it. In recent years, however, things have really taken off."
"Just last October, we wrote about the P200, which is specifically designed to connect AI data centers to each other for so-called scale-across workloads. Today, Cisco is adding the G300 to Silicon One. Unsurprisingly, this is the successor to the G200 from 2023. What is surprising, however, is that Cisco has managed to double the throughput speed in just over two and half years. Whereas the G200 (and the more recent P200) delivered 'only' 51.2 Tbps, the G300 pushes packets through at 102.4 Tbps."
"With this bandwidth, Cisco aims to answer the questions that GPUs pose to the network. Ultimately, in today's AI world, it's all about generating as many tokens as possible. Above all, minimizing the number of cycles wasted by GPUs is critical. The network must not be a bottleneck. The G300 is set to become the foundation for building AI networks in the near future. In the words of Martin Lund, EVP of the Common Hardware Group at Cisco, who is responsible for Silicon One, among other things: "The network is becoming part of the compute itself." The G300 should enable organizations to deploy AI clusters operating at the gigawatt level, obviously for training, but also for inferencing and real-time agentic workloads."
Cisco's Silicon One G300 doubles the throughput of its predecessor to 102.4 Tbps, succeeding the G200 and building on the P200's data-center interconnect focus. The G300 targets AI networking needs by preventing network-induced GPU stalls and maximizing token-generation efficiency. The design enables deployment of gigawatt-scale AI clusters for training, inference, and real-time agentic workloads. The platform frames the network as an integral part of compute to reduce wasted GPU cycles. The jump from 51.2 Tbps to 102.4 Tbps occurred in just over two and a half years, signaling a strong commitment to proprietary network silicon.
Read at Techzine Global