Google, Anthropic seal gigawatt-scale TPU deal
Briefly

"The two companies announced the deal on Thursday, with Anthropic pitching it as "expanded capacity" that the company will use to meet surging customer demand and allow it to conduct "more thorough testing, alignment research, and responsible deployment at scale." Google's take on the deal is that it will enable Anthropic to "train and serve the next generations of Claude models," and involves "additional Google Cloud services, which will empower its research and development teams with leading AI-optimized infrastructure for years to come.""
""Anthropic's unique compute strategy focuses on a diversified approach that efficiently uses three chip platforms - Google's TPUs, Amazon's Trainium, and Nvidia's GPUs," the statement explains. "We remain committed to our partnership with Amazon, our primary training partner and cloud provider, and continue to work with the company on Project Rainier, a massive compute cluster with hundreds of thousands of AI chips across multiple U.S. data centers.""
Google will provide Anthropic with access to up to one million tensor processing units (TPUs) and tens of billions of dollars in Google Cloud services to support development of Claude models. Anthropic plans to use the added capacity for training and serving models, as well as for more thorough testing, alignment research, and responsible deployment at scale. The company retains a multi-platform compute strategy spanning Google's TPUs, Amazon's Trainium, and Nvidia GPUs, and remains committed to Amazon as its primary training partner and cloud provider through Project Rainier, a massive compute cluster spanning multiple U.S. data centers. Google said Anthropic chose TPUs for their price-performance and efficiency.
Read at The Register