OpenAI is broadening its network of compute providers, looking to include partners like Oracle, CoreWeave, and Google. The company has tested Google's tensor processing units (TPUs) but does not plan to deploy them at scale. The testing itself reflects OpenAI's ongoing effort to reduce its dependence on Nvidia hardware: the company has a history of running its models on a variety of systems, including hardware from Microsoft and AMD. Even as ties to Microsoft become less prominent, AMD remains a crucial hardware partner for OpenAI.
OpenAI is expanding its network of compute providers to include Oracle, CoreWeave, and reportedly even Google as it diversifies beyond Microsoft's infrastructure.
While OpenAI has experimented with Google's tensor processing units (TPUs), it has no plans to deploy them at scale, signaling that the tests have not yet translated into a large-scale commitment.
OpenAI is re-evaluating its reliance on Nvidia hardware, having diversified its hardware stack and run its models on a variety of systems over the years.
Despite softening ties to Microsoft, AMD continues to be a significant partner for OpenAI, with AMD hardware historically offering advantages in memory capacity and bandwidth.