Huawei challenges Nvidia with new AI chip technology
Briefly

"HBM, or High-Bandwidth Memory, plays a crucial role in the operation of modern AI chips. By stacking DRAM layers vertically, signal paths become shorter and the chip's bandwidth increases significantly. This not only delivers higher performance, but also reduces energy consumption for data-intensive tasks such as training and applying large language models. Because the memory is placed directly next to the processor, unnecessary data movement is minimized."
"The first generation consists of two variants. The HiBL 1.0 has a bandwidth of 1.6 terabytes per second and a capacity of 128 gigabytes. This version will be used for the Ascend 950PR, which will be launched in the first quarter of next year. The memory supports a variety of low-precision data types, including FP8 and MXFP8, and is designed to provide improved vector computing power and double the number of interconnections."
Huawei has developed in-house HBM to boost the performance of its Ascend processors and compete with Nvidia. HBM stacks DRAM layers vertically to shorten signal paths, raise bandwidth, reduce energy consumption, and minimize unnecessary data movement for tasks such as training and applying large language models. US sanctions motivated an internal solution to break the dependency on foreign HBM suppliers and strengthen technological autonomy. The first generation includes HiBL 1.0 (1.6 TB/s, 128 GB) for the Ascend 950PR and HiZQ 2.0 (4 TB/s, 144 GB) for the Ascend 950DT, targeting vector computing, inference, and decoding. Huawei also presented a SuperPod linking up to 15,488 Ascend-based cards and says it operates a supercluster of about one million cards.
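To put the quoted bandwidth figures in perspective, a back-of-envelope sketch: inference is often bound by how fast the weights can be streamed from HBM, so peak bandwidth sets a floor on per-pass latency. The 600 GB model size below is a hypothetical example (roughly a 600B-parameter model at FP8, one byte per parameter), not a figure from the article; the bandwidth numbers are the ones quoted for HiBL 1.0 and HiZQ 2.0.

```python
# Back-of-envelope: theoretical minimum time to stream a model's weights
# once at the quoted peak HBM bandwidths. Real workloads achieve less
# than peak, so these are lower bounds, not performance claims.

TB = 1e12  # bytes per terabyte (decimal, matching vendor bandwidth specs)
GB = 1e9   # bytes per gigabyte

def stream_time_seconds(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Minimum time to read all weights once from memory at peak bandwidth."""
    return model_bytes / bandwidth_bytes_per_s

# Hypothetical model size: ~600B parameters at FP8 (1 byte per parameter).
model = 600 * GB

hibl_1_0 = 1.6 * TB  # HiBL 1.0 peak bandwidth (Ascend 950PR), per the article
hizq_2_0 = 4.0 * TB  # HiZQ 2.0 peak bandwidth (Ascend 950DT), per the article

print(f"HiBL 1.0: {stream_time_seconds(model, hibl_1_0):.3f} s per full weight pass")
print(f"HiZQ 2.0: {stream_time_seconds(model, hizq_2_0):.3f} s per full weight pass")
```

The same arithmetic also shows why low-precision formats such as FP8 matter: halving the bytes per parameter halves the streaming time at a given bandwidth.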
Read at Techzine Global