Benchmark raises $225M in special funds to double down on Cerebras | TechCrunch
Briefly

"What sets Cerebras apart is the sheer physical scale of its processors. The company's Wafer Scale Engine, its flagship chip announced in 2024, measures approximately 8.5 inches on each side and packs 4 trillion transistors into a single piece of silicon. To put that in perspective, the chip is manufactured from nearly an entire 300-millimeter silicon wafer, the circular discs that serve as the foundation for all semiconductor production. Traditional chips are thumbnail-sized fragments cut from these wafers; Cerebras instead uses almost the whole circle."
"Benchmark declined to comment. Benchmark first bet on 10-year-old Cerebras when it led the startup's $27 million Series A in 2016. Since Benchmark deliberately keeps its funds under $450 million, the firm raised two separate vehicles, both called 'Benchmark Infrastructure,' according to regulatory filings. According to the person familiar with the deal, these vehicles were created specifically to fund the Cerebras investment."
"This architecture delivers 900,000 specialized cores working in parallel, allowing the system to process AI calculations without shuffling data between multiple separate chips (a major bottleneck in conventional GPU clusters). The company says the design enables AI inference tasks to run more than 20 times faster than competing systems. The funding comes as Cerebras, based in Sunnyvale, Calif., gains momentum in the AI infrastructure race."
Cerebras Systems raised $1 billion at a $23 billion valuation, nearly triple its valuation six months earlier. Tiger Global led the round, while Benchmark Capital invested at least $225 million through specially raised vehicles. Benchmark originally led Cerebras's $27 million Series A in 2016. Cerebras builds unusually large processors: its Wafer Scale Engine measures about 8.5 inches per side and packs 4 trillion transistors, using nearly an entire 300-millimeter wafer. The architecture runs 900,000 specialized cores in parallel and avoids inter-chip data movement; the company reports AI inference more than 20 times faster than competing systems.
Read at TechCrunch