Google launches Parallelstore file storage aimed at cloud AI training | Computer Weekly
Briefly

"This efficient data delivery maximises goodput to GPUs [graphics processing units] and TPUs [tensor processing units], a critical factor for optimising AI workload costs," said GCP product director Barak Epstein in a blog post.
"Parallelstore can also provide continuous read/write access to thousands of VMs [virtual machines], GPUs and TPUs, satisfying modest-to-massive AI and high-performance computing workload requirements."
"For the maximum Parallelstore deployment of 100TB (terabytes), throughput can scale to around 115GBps, three million read IOPS, one million write IOPS, and a minimum latency of near 0.3 milliseconds."
Read at ComputerWeekly.com