#hbm

from The Register
2 days ago

Nvidia's context-optimized Rubin CPX GPUs were inevitable

Nvidia on Tuesday unveiled the Rubin CPX, a GPU designed specifically to accelerate extremely long-context AI workflows like those seen in code assistants such as Microsoft's GitHub Copilot, while simultaneously cutting back on pricey and power-hungry high-bandwidth memory (HBM). The first indication that Nvidia might be moving in this direction came when CEO Jensen Huang unveiled Dynamo during his GTC keynote in spring. The framework brought mainstream attention to the idea of disaggregated inference.
from Medium
4 weeks ago

SK Hynix Forecasts 30% Annual Growth in AI Memory Market Through 2030

SK Hynix expects the high-bandwidth memory market to grow roughly 30% annually through 2030, driven by strong AI demand and the development of custom HBM.