
"Every quarter, Nvidia CEO Jensen Huang is asked about the growing number of custom ASICs encroaching on his AI empire, and each time he downplays the threat, arguing that GPUs offer superior programmability in a rapidly changing environment. That hasn't stopped chip designer SiFive from releasing RISC-V-based core designs for use in everything from IoT devices to high-end AI accelerators, including Google's Tensor Processing Units (TPUs) and Tenstorrent's Blackhole accelerators."
"Last summer, SiFive revealed that its RISC-V-based core designs power chips from five of the "Magnificent 7" companies - those include Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla. And while we're sure many of those don't involve AI, it's understandable why analysts keep asking Huang about custom ASICs. While many of you will be familiar with SiFive for Meta and Microsoft's RISC-V development boards, the company's main business is designing and licensing core IP, similar to Brit chip designer Arm Holdings."
"This week at the AI Infra Summit, the RISC-V chip designer revealed its second generation of Intelligence cores, including new designs aimed at edge AI applications like robotics sensing, as well as upgraded versions of its X200 and X300 Series, and XM Series accelerator. All of these designs are based on an eight-stage dual-issue in-order superscalar processor architecture, which is the long way of saying they aren't designed for use in a general-purpose CPU like the one powering whatever device you're reading this on."
SiFive has released its second-generation Intelligence cores, including new designs aimed at edge AI applications such as robotics sensing, alongside upgraded X200 Series, X300 Series, and XM Series IP. The Intelligence cores use an eight-stage, dual-issue, in-order superscalar architecture and are intended as accelerator control units that keep tensor cores and MAC units fed with data. SiFive's business model is to design and license RISC-V core IP for customers to integrate into their own silicon rather than building chips itself; its core designs already power chips from five of the 'Magnificent 7.' The licensed cores aim to cut how much customers must invest in custom control logic while preserving programmability as AI workloads change.
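
The article stops at the architectural description, but the idea of a small in-order core acting as an accelerator control unit is easy to picture in code. Below is a minimal, hypothetical sketch in C: a scalar control core walks an output matrix tile by tile and hands each tile's operand addresses to a memory-mapped MAC array. The register layout, tile size, and interface are invented for illustration and do not reflect SiFive's actual hardware or software stack.

/* Illustrative sketch only: a scalar control core feeding a hypothetical
 * MAC array with tiles of data. All names and the register interface are
 * assumptions made for this example. */
#include <stdint.h>
#include <stddef.h>

#define TILE 16  /* hypothetical 16x16 MAC array */

/* Hypothetical memory-mapped control registers for the MAC array. */
typedef struct {
    volatile uint64_t src_a;  /* address of activation tile              */
    volatile uint64_t src_b;  /* address of weight tile                  */
    volatile uint64_t dst;    /* address of output tile                  */
    volatile uint32_t start;  /* write 1 to launch the tile multiply     */
    volatile uint32_t busy;   /* reads nonzero while the array is running */
} mac_array_regs_t;

/* The control core walks the output matrix tile by tile, handing each
 * tile's operand addresses to the MAC array and waiting for completion.
 * A real control core would overlap data movement and compute instead
 * of spinning. */
static void matmul_tiled(mac_array_regs_t *mac,
                         const int8_t *a, const int8_t *b, int32_t *c,
                         size_t m, size_t n, size_t k)
{
    for (size_t i = 0; i < m; i += TILE) {
        for (size_t j = 0; j < n; j += TILE) {
            for (size_t p = 0; p < k; p += TILE) {
                mac->src_a = (uint64_t)(uintptr_t)&a[i * k + p];
                mac->src_b = (uint64_t)(uintptr_t)&b[p * n + j];
                mac->dst   = (uint64_t)(uintptr_t)&c[i * n + j];
                mac->start = 1;
                while (mac->busy) { /* spin until the tile completes */ }
            }
        }
    }
}

In this picture, the licensed control core supplies the programmable outer loops (tiling, scheduling, format handling), which can change with the workload, while the fixed-function MAC array does the bulk of the arithmetic.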
Read at The Register