NVIDIA Announces Next-Generation AI Superchip Blackwell
Briefly

The Blackwell architecture is NVIDIA's largest GPU to date, packing over 200 billion transistors and delivering up to 4x faster training for large language models compared to its predecessor.
Jensen Huang unveiled Blackwell's cutting-edge features at the GTC AI conference, emphasizing its ability to deliver the highest compute power ever on a single chip and its support for training on multimodal data.
Large models need greater acceleration to train on multimodal data, not just text. Meeting these escalating compute demands requires scaling up model sizes and training data, and building ever-bigger GPUs.
NVIDIA continues its tradition of naming architectures after pioneers in science; Blackwell is named after mathematician David Harold Blackwell. The architecture follows predecessors such as Hopper and showcases advances in combined GPU-CPU designs.
Read at InfoQ