
"Slurm has been around since 2002, when backers included Lawrence Livermore National Laboratory and France's Groupe Bull. Nvidia said this week it will "continue to develop and distribute Slurm as open source, vendor-neutral software, making it widely available to and supported by the broader HPC and AI community across diverse hardware and software environments." It added that "Nvidia will accelerate SchedMD's access to new systems - allowing users of Nvidia's accelerated computing platform to optimize workloads across their entire compute infrastructure.""
"Nvidia further played the open card, with the debut of its Nemotron 3 family of models, which it said delivered a "hybrid latent mixture-of-experts (MoE)" architecture in Nano, Super, and Ultra sizes. The firm also said the releases support its broader "sovereign AI efforts." Nemotron 3 NANO is a 30 billion-parameter model, activating up to 3 billion parameters at a time for "targeted" tasks."
Nvidia acquired SchedMD, the developer of the Slurm workload manager, and will continue to develop and distribute Slurm as open-source, vendor-neutral software. Slurm dates to 2002 and has been backed by institutions including Lawrence Livermore National Laboratory and Groupe Bull. Nvidia said it will accelerate SchedMD's access to new systems and enable workload optimization across Nvidia's accelerated computing platform while supporting heterogeneous hardware and software ecosystems. Nvidia introduced Nemotron 3 models with a hybrid latent mixture-of-experts (MoE) architecture in Nano, Super, and Ultra sizes. NANO activates up to 3 billion parameters at a time; Super and Ultra target higher-accuracy reasoning and complex applications with roughly 100 billion and 500 billion parameters respectively. Nvidia said NANO achieves four times the token throughput of its predecessor.
Read at The Register