
"Three years ago, Luminal co-founder Joe Fioti was working on chip design at Intel when he came to a realization. While he was working on making the best chips he could, the more important bottleneck was in software. "You can make the best hardware on earth, but if it's hard for developers to use, they're just not going to use it," he told me."
"Luminal's core business is simple: the company sells compute, just like neo-cloud companies like Coreweave or Lambda Labs. But where those companies focus on GPUs, Luminal has focused on optimization techniques that let the company squeeze more compute out of the infrastructure it has. In particular, the company focuses on optimizing the compiler that sits between written code and the GPU hardware - the same developer systems that caused Fioti so many headaches in his previous job."
"At the moment, the industry's leading compiler is Nvidia's CUDA system - an underrated element in the company's runaway success. But many elements of CUDA are open-source, and Luminal is betting that, with many in the industry still scrambling for GPUs, there will be a lot of value to be gained in building out the rest of the stack. It's part of a growing cohort of inference-optimization startups, which have grown more valuable as companies look for faster and cheaper ways to run their models."
Joe Fioti and co-founders Jake Stevens and Matthew Gunton launched Luminal to tackle the software bottlenecks that limit hardware utility. The company has raised $5.3 million in seed funding led by Felicis Ventures, with angel investors including Paul Graham. Luminal sells compute like neo-cloud providers do, but it concentrates on compiler and developer-system optimizations to extract more performance from existing GPU infrastructure, targeting the layer between written code and GPU hardware to improve usability and efficiency. The company leverages open-source elements of CUDA, aims to build out the rest of the stack, and joins a growing cohort of startups focused on inference optimization amid tight GPU supply.
Read at TechCrunch