SK Hynix says its HBM4 is ready for mass production
Briefly

"giving chips like Nvidia's B300 or AMD's MI355X about 8 TB/s of aggregate memory bandwidth. With the move to HBM4, we'll see bandwidth jump considerably. At GTC in March, Nvidia revealed its Rubin GPUs would pack 288 GB of HBM4 and achieve 13 TB/s of aggregate bandwidth. AMD aims to cram even greater quantities of memory onto its upcoming MI400-series GPUs, which will power its first rack-scale system called Helios."
"On Friday, the South Korean memory giant announced that it had wrapped HBM4 development and was preparing to begin producing the chips in high volumes. High Bandwidth Memory (HBM) has become an essential component in high-end AI accelerators from the likes of Nvidia, AMD, and others. Both Nvidia's Rubin and AMD's Instinct MI400 families of GPUs, pre-announced earlier this year, rely on memory vendors having a ready supply of HBM4 in time for their debut in 2026."
SK Hynix has completed HBM4 development and is preparing high-volume production to supply next-generation datacenter GPUs. HBM4 will enable much larger module capacities and far greater aggregate bandwidth than current HBM technologies, which top out near 36 GB and about 1 TB/s per module. Nvidia's Rubin GPUs are expected to use 288 GB of HBM4 for roughly 13 TB/s of aggregate bandwidth, while AMD's MI400-series aims for up to 432 GB and nearly 20 TB/s. SK Hynix doubled the number of I/O terminals to 2,048 compared with HBM3e and reports energy-efficiency gains of more than 40%, a meaningful improvement given that HBM power consumption rises significantly as modules grow larger.
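As a rough sanity check on those figures, aggregate bandwidth is simply per-stack bandwidth multiplied by the number of stacks on the package, and per-stack bandwidth follows from the interface width and the per-pin data rate. The Python sketch below is a back-of-envelope calculation only: the eight-stack count and the per-pin data rates are illustrative assumptions chosen to land near the quoted totals, not vendor specifications.

def stack_bandwidth_tbps(io_count: int, gbps_per_pin: float) -> float:
    # Peak bandwidth of a single HBM stack in TB/s:
    # io_count pins * Gb/s per pin, divided by 8 (bits -> bytes) and 1000 (GB -> TB).
    return io_count * gbps_per_pin / 8 / 1000

def aggregate_bandwidth_tbps(stacks: int, io_count: int, gbps_per_pin: float) -> float:
    # Total bandwidth across all HBM stacks on the package.
    return stacks * stack_bandwidth_tbps(io_count, gbps_per_pin)

# HBM3e-class part: 1,024 I/O at an assumed ~8 Gb/s per pin is roughly 1 TB/s per
# stack, so eight stacks land near the ~8 TB/s quoted for chips like the B300.
print(f"8x HBM3e: {aggregate_bandwidth_tbps(8, 1024, 8.0):.1f} TB/s")

# HBM4 doubles the I/O count to 2,048; at an assumed ~6.4 Gb/s per pin, eight
# stacks come out near the ~13 TB/s quoted for Nvidia's Rubin.
print(f"8x HBM4:  {aggregate_bandwidth_tbps(8, 2048, 6.4):.1f} TB/s")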
Read at The Register