
"Dell Technologies announces improvements to its AI Data Platform to accelerate AI workloads. With new integrations for Nvidia hardware, faster object storage, and GPU-accelerated vector search, Dell aims to help organizations extract more value from distributed data. The integration with Nvidia cuVS brings GPU-accelerated hybrid search to the Dell AI Data Platform. This combines keyword and vector search for faster and more efficient results. According to Dell, IT teams get a fully integrated solution to deploy GPU-powered search immediately."
"PowerScale, Dell's NAS platform, will be integrated with the Nvidia GB200 and GB300 NVL72 systems. According to Dell, the PowerScale F710 has achieved Nvidia Cloud Partner certification and can scale to more than 16,000 GPUs. The platform is said to require up to five times less rack space, use 88 percent fewer network switches, and consume up to 72 percent less energy than comparable solutions."
"ObjectScale, which Dell describes as the fastest object platform in the industry, is now also available as a software-defined option on PowerEdge servers. This new variant is said to be up to eight times faster than the previous generation of all-flash object storage. A notable addition is support for S3 over RDMA, which will be available as a tech preview in December. Dell claims this delivers up to 230 percent higher throughput, 80 percent lower latency, and 98 percent lower CPU usage."
Dell upgrades its AI Data Platform to accelerate AI workloads with Nvidia integrations, faster object storage, and GPU-accelerated vector search. The Nvidia cuVS integration brings GPU-accelerated hybrid search that combines keyword and vector search for faster, more efficient results and immediate deployment. PowerScale NAS will integrate with the Nvidia GB200 and GB300 NVL72 systems; the PowerScale F710 has achieved Nvidia Cloud Partner certification and can scale beyond 16,000 GPUs while cutting rack space, network switches, and energy use. ObjectScale, now available as a software-defined option on PowerEdge, is said to be up to eight times faster than the previous all-flash generation; an S3 over RDMA tech preview promises higher throughput, lower latency, and far lower CPU usage.
Read at Techzine Global