AMD Reveals Fleet of Chips for Heavy AI Workloads
Briefly

"Our goal is to drive an open industry standard AI ecosystem so that everyone can add their innovation on top," said Lisa Su, AMD chair and CEO, at the company's Advancing AI 2024 presentation in San Francisco.
The Instinct MI325X accelerators speed up foundation model training, fine-tuning, and inferencing, the processes behind today's rapidly proliferating generative AI. They feature 256GB of HBM3E memory with 6.0TB/s of bandwidth.
The new line is built on AMD's CDNA 3 architecture. AMD claims these accelerators outperform their main competitor, the NVIDIA H200, in both memory capacity and bandwidth.
AMD is primarily targeting hyperscalers with this product, as they are looking to expand their AI-capable data center hardware and power heavy-duty cloud infrastructure.
Read at TechRepublic