AMD's announcement of the Salina data processing unit (DPU) and Pollara 400 network interface card (NIC) showcases a major leap in high-performance networking aimed at enhancing data center efficiency.
The new DPU delivers twice the performance of its predecessor and supports 400G throughput, enabling faster data transfer within AI infrastructure as workload demands continue to grow.
The rise of generative AI has turned networking into a critical bottleneck in data center operations, necessitating improvements in both data delivery to AI clusters and backend performance.
Andrew Buss of IDC notes that AMD's latest networking products reflect the growing maturity of the company's strategy for tackling the data center efficiency challenges created by AI demand.