
"There's this basic story that scale is the thing, and that's been true for a while, and it might still be true,"
"That's the bet that many of the US AI companies are making."
"tons of compute, tons of memory, and tons of CPUs,"
Major AI firms are pursuing large-scale data center expansion on the premise that models will keep improving with more data and compute. Industry spending on AI infrastructure is projected at roughly $400 billion this year, while annual consumer spending on AI services remains near $12 billion. OpenAI has struck strategic deals with hardware providers such as AMD while still relying heavily on Nvidia GPUs. The shift toward reasoning models raises the pressure for efficiency, long-context capability, and substantial memory and CPU resources alongside GPU capacity.
Read at WIRED