
"OpenAI completes a remarkable quartet. After billion-dollar deals with Microsoft, Google, and Oracle, it is now AWS's turn. The agreement with Amazon Web Services is worth $38 billion, runs until 2032, and gives OpenAI direct access to AWS capacity. CEO Sam Altman speaks of a 'broad compute ecosystem' surrounding the AI company: to bring AI to as many places, in as many ways, as possible, this full embrace of hyperscale infrastructure is necessary."
"AWS is supplying Amazon EC2 UltraServers with hundreds of thousands of Nvidia GB200 and GB300 chips, and the infrastructure can scale to tens of millions of CPUs for agentic workloads. AWS provides more than hardware alone: its architecture optimizes AI processing by clustering GPUs on EC2 UltraServers within the same network, keeping latency low. That is a major advantage now that network speed, rather than raw GPU power, is the bottleneck to eliminate."
OpenAI and AWS have signed a $38 billion partnership running until 2032 that grants OpenAI direct access to AWS compute capacity for inference and model training. OpenAI will use Amazon EC2 UltraServers with hundreds of thousands of Nvidia GB200 and GB300 GPUs and can scale to tens of millions of CPUs for agentic workloads. AWS's clustered EC2 UltraServers keep GPUs on the same network to maintain low-latency performance, addressing the network bottleneck that now matters more than raw GPU power. The capacity must be fully rolled out by the end of 2026, and AWS cites experience operating AI clusters exceeding 500,000 chips. An earlier, limited Bedrock integration prefaced this deeper collaboration.
 Read at Techzine Global