Multiverse Computing pushes its compressed AI models into the mainstream | TechCrunch

"With private company defaults running at upwards of 9.2% - the highest rate in years - VC firm Lux Capital recently advised companies relying on AI to get their compute capacity commitments confirmed in writing. With financial instability rippling through the AI supply chain, Lux warned, a handshake agreement isn't enough."
"Smaller AI models that run directly on a user's own device - no data center, no cloud provider, no counterparty risk - are getting good enough to be worth considering. And Multiverse Computing is raising its hand."
"The CompactifAI app, which shares its name with Multiverse's quantum-inspired compression technology, is an AI chat tool in the vein of ChatGPT or Mistral's Le Chat. The difference is that Multiverse embedded Gilda, a model so small that it can run locally and offline, according to the company."
Private company defaults are running at upwards of 9.2%, the highest rate in years, prompting concerns about the stability of the AI supply chain. VC firms such as Lux Capital now recommend written commitments for compute capacity rather than informal agreements. An alternative approach is to deploy smaller AI models directly on user devices, eliminating the dependency on external infrastructure. Multiverse Computing, a Spanish startup, has developed compressed versions of models from OpenAI, Meta, DeepSeek, and Mistral AI. The company launched CompactifAI, a chat application built around Gilda, a model small enough to run locally and offline on the user's device. A system called Ash Nazg automatically routes requests between local and cloud processing based on device capabilities, providing flexibility while keeping data on-device whenever the hardware allows.
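The local-versus-cloud routing that the summary attributes to Ash Nazg can be sketched in a few lines. Everything below (the psutil memory check, the 8 GB threshold, and the placeholder inference functions) is an illustrative assumption, not Multiverse's implementation.

    # Illustrative capability-based router: prefer on-device inference when the
    # hardware allows, fall back to a hosted endpoint otherwise. All names and
    # thresholds are assumptions; this is not Multiverse's Ash Nazg code.
    import psutil  # third-party: pip install psutil

    MIN_LOCAL_RAM_GB = 8.0  # assumed cutoff below which local inference is skipped

    def device_can_run_locally(min_ram_gb: float = MIN_LOCAL_RAM_GB) -> bool:
        """Return True if enough memory is free for on-device inference."""
        available_gb = psutil.virtual_memory().available / 1024 ** 3
        return available_gb >= min_ram_gb

    def run_local(prompt: str) -> str:
        return f"[local model] {prompt}"     # placeholder for on-device inference

    def run_cloud(prompt: str) -> str:
        return f"[cloud endpoint] {prompt}"  # placeholder for a hosted-model request

    def route(prompt: str) -> str:
        """Keep the request on-device when possible, otherwise go to the cloud."""
        return run_local(prompt) if device_can_run_locally() else run_cloud(prompt)

    print(route("Summarize today's AI news."))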
Read at TechCrunch