At its Cloud Next conference, Google introduced Ironwood, its seventh-generation TPU AI accelerator chip, designed specifically for inference workloads. Launching later this year, Ironwood will be available in two configurations: a 256-chip cluster and a 9,216-chip cluster. Each chip delivers 4,614 TFLOPS of peak compute and 192GB of dedicated RAM, and features an enhanced SparseCore for processing the large embeddings common in ranking and recommendation workloads. Amid growing competition from Nvidia and others, Google is emphasizing Ironwood's efficiency and capabilities, promising integration with its AI Hypercomputer to further boost AI model performance.
"Ironwood is our most powerful, capable, and energy-efficient TPU yet... and it's purpose-built to power thinking, inferential AI models at scale," Google said. "Ironwood represents a unique breakthrough in the age of inference, with increased computation power, memory capacity, networking advancements, and reliability."