OpenAI just signed a bumper $38bn cloud contract with AWS - is it finally preparing to cast aside Microsoft?
"The $38 billion, multi-year deal will give OpenAI access to hundreds of thousands of Nvidia GPUs, with the ability to scale up as necessary, the companies noted. OpenAI will begin using AWS immediately, with all the capacity targeted to be deployed before the end of this year, and further expansion planned for next year. OpenAI noted that the infrastructure that AWS is setting up for OpenAI is designed to boost processing efficiency and performance. This will be organized into clusters of Nvidia GB200s and Nvidia GB300s via Amazon's EC2 UltraServers, all on the same network for low latency."
""Scaling frontier AI requires massive, reliable compute," said OpenAI CEO Sam Altman in a statement. "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone." "As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions," said AWS chief executive Matt Garman. "The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads." OpenAI has had a close partnership with Microsoft since 2019, centered on billions in investment from the firm as well as access to Azure. But that partnership lost its exclusivity in January, amid reports of tensions around OpenAI's move to a new corporate structure. That said, OpenAI continues to use Azure, and last week signed a $250 billion agreement to continue the partnership, while at the same time expanding its options to avoid overreliance on a single hyperscaler."
OpenAI will use AWS to scale AI systems for answering queries and for model training under a $38 billion, multi-year agreement that grants access to hundreds of thousands of Nvidia GPUs. Deployment begins immediately, with the companies targeting full capacity before year-end and further expansion planned for next year. AWS will deliver clusters of Nvidia GB200 and GB300 GPUs via EC2 UltraServers on a unified low-latency network to support varied workloads, from ChatGPT queries to future model training. The infrastructure aims to improve processing efficiency and performance. OpenAI retains its Azure ties while diversifying cloud providers to avoid overreliance on one vendor.
Read at IT Pro