
"Rather than a custom Arm CPU, like the ones that Microsoft, AWS, and Google designed, Meta tells us the partnership will focus on optimizing the Arm-based silicon that it's already deploying. Like most hyperscalers and cloud providers, Meta is rolling out large quantities of Arm Neoverse cores across its AI datacenters; they just happen to be part of Nvidia's GB200 or GB300 NVL72 rack systems. Each of these racks is equipped with 72 Blackwell GPUs and 36 of Nvidia's Neoverse-V2-based Grace CPUs."
""Meta's AI ranking and recommendation systems - which power discovery and personalization across Meta's family of apps, including Facebook and Instagram - will leverage Arm's Neoverse-based datacenter platforms to deliver higher performance and lower power consumption compared to x86 systems," the release reads. The move from x86 boxes to Nvidia's rack systems presents an opportunity for Arm and Meta to optimize existing codebases and frameworks, like PyTorch or Facebook General Matrix Multiplication (FBGEMM) libraries, to take better advantage of the RISC architecture's extensions."
Meta has partnered with Arm Holdings to optimize Meta's software for the Arm-based CPUs already deployed in its AI datacenters. The work targets Arm Neoverse platforms integrated into Nvidia GB200 and GB300 NVL72 racks, each of which combines 72 Blackwell GPUs with 36 Neoverse-V2-based Grace CPUs. Optimization efforts will focus on adapting codebases and frameworks such as PyTorch and FBGEMM, migrating workloads from x86 AVX2 or AVX-512 code paths to Arm's vector extensions, and adopting software such as ExecuTorch and Arm's KleidiAI kernel library. Adoption of Nvidia Grace-Blackwell systems has helped drive Arm's datacenter CPU share to roughly 25% in Q2.
Read at The Register