Oracle's claim of a 2.4 zettaFLOPS cluster rests on sparsity and low-precision (FP4) arithmetic, so it does not reflect like-for-like performance against traditional supercomputers.
FLOPS figures can be misleading: without stating the precision at which they are measured, a "zettascale" claim blurs the line between marketing and reality.
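As a sanity check, the headline number can be reproduced from per-GPU peak throughput. The per-GPU figures below are assumptions drawn from publicly quoted Blackwell (B200-class) specifications and should be treated as approximate; the point is how strongly the headline depends on precision and sparsity, not the exact values.

```python
# Rough reproduction of the headline math. Per-GPU peak figures are
# assumptions based on publicly quoted Blackwell (B200-class) specs.
GPUS = 131_072

per_gpu_pflops = {          # peak PFLOPS per GPU (assumed)
    "FP4, sparse": 18.0,    # the basis of the "zettascale" headline
    "FP4, dense": 9.0,
    "FP8, dense": 4.5,
    "FP16/BF16, dense": 2.25,
}

for precision, pflops in per_gpu_pflops.items():
    cluster_flops = pflops * 1e15 * GPUS
    print(f"{precision:>18}: {cluster_flops / 1e21:5.2f} zettaFLOPS peak")

# FP4 sparse comes out near the quoted 2.4 zettaFLOPS;
# FP16 dense is roughly an order of magnitude lower.
```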
While lower-precision computation speeds up inference, it can compromise model accuracy; high-performance training typically requires 16-bit precision or higher.
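To make the trade-off concrete, the sketch below compares round-trip error when the same weights are cast to 16-bit floats versus rounded onto a naive 4-bit uniform grid. This is only an illustration: real low-precision inference formats (e.g., FP4 with per-block scaling) and quantization-aware techniques behave far better than naive rounding, but the direction of the trade-off is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(100_000).astype(np.float32)

def uniform_quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Naive symmetric uniform quantization to `bits`, then dequantize."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / levels
    return np.clip(np.round(x / scale), -levels, levels) * scale

fp16_err = np.abs(weights - weights.astype(np.float16).astype(np.float32)).mean()
int4_err = np.abs(weights - uniform_quantize(weights, 4)).mean()

print(f"mean abs error, float16 cast : {fp16_err:.6f}")
print(f"mean abs error, 4-bit uniform: {int4_err:.6f}")
```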
Oracle's Blackwell Supercluster, with 131,072 GPUs, would still make a notable HPC cluster if properly networked, even if typical workloads never exercise its full capacity.
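For a rough comparison with conventional supercomputers, the same GPU count can be priced in FP64, the precision the TOP500's HPL benchmark uses. The ~40 TFLOPS-per-GPU figure is an assumed Blackwell-class value, and theoretical peak is not a measured HPL result, but even so the cluster would sit well into exascale territory at FP64.

```python
# Back-of-the-envelope FP64 peak for the same 131,072-GPU cluster.
# ~40 TFLOPS FP64 per GPU is an assumed Blackwell-class figure; real
# HPL (Linpack) results would come in below this theoretical peak.
GPUS = 131_072
FP64_TFLOPS_PER_GPU = 40.0                      # assumed

peak_fp64 = FP64_TFLOPS_PER_GPU * 1e12 * GPUS   # FLOPS
print(f"FP64 peak: ~{peak_fp64 / 1e18:.1f} exaFLOPS")
# Roughly 5 exaFLOPS peak, versus roughly 1-2 exaFLOPS measured HPL
# for the largest current TOP500 systems.
```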