Global AI computing will use 'multiple NYCs' worth of power by 2026, says founder
Briefly

"People will want more compute, not necessarily because of scaling laws, but because you're deploying these things now," said Thomas Graham, co-founder of optical computing startup Lightmatter.
"If you view training as R&D, inferencing is really deployment, and as you're deploying that, you're going to need large computers to run your models," said Graham.
Nvidia CEO Jensen Huang has said that 'scaling up' AI will require 'both more sophisticated training [of AI models], but also increasingly more sophisticated inference,' implying exponentially growing demand for compute.
Graham's point is that the next stage of AI's demand for compute will center on deploying trained models, which requires much larger computing facilities.
Read at ZDNET