$60B AI chip darling Cerebras almost died early on, burning $8M a month | TechCrunch

“We were spending about $8 million a month,” founder and CEO Andrew Feldman told TechCrunch of that period. “At this point, we had incinerated nearly $200 million trying to solve one technical problem.” Every few weeks, Feldman was forced to make the painful walk of shame to the board meeting to report another failure and more money burned. But he had no choice. Without a solution, Cerebras was dead anyway.
It was founded on an idea that was simple on paper. The microprocessor industry had spent its entire 50-plus years making CPUs faster and cheaper by cramming more transistors onto a silicon wafer and dicing wafers into ever tinier pieces. But AI required so much compute power that many chips had to be strung together and then forced to communicate with each other. Cerebras' founders believed that turning a whole, even bigger wafer into one giant, powerful chip would work faster.
The problem was that no one had ever successfully done this before, for any purpose, AI or not. Orchestrating that many microscopic electronic components onto a larger, but still thin, surface introduced compounding engineering problems. Once Cerebras crossed the first threshold of designing the mega chip and then manufacturing it with TSMC, the team hit the real roadblock. They couldn't solve “packaging.”
“Packaging” involves everything after manufacturing the silicon itself: adhering it to a motherboard, getting power to it, and dealing with heating and cooling, as well as the pipes that would deliver and return data, Feldman said.
Cerebras Systems sells AI chips for inference to major customers such as OpenAI and AWS and completed a blockbuster IPO, reaching about $60 billion in value by week’s end. In 2019, when the company was three years old, it nearly failed while spending about $8 million per month and incinerating nearly $200 million to solve a single technical problem. The company aimed to turn a whole, larger wafer into one giant chip to avoid the limits of traditional CPU scaling and to reduce the need to connect many chips for AI compute. The effort faced compounding engineering problems, and after manufacturing with TSMC, the main roadblock became packaging, including mounting, power delivery, thermal management, and data routing.