"We are not expanding a lot of square footage, per se, but we're expanding our compute," Chan said on an episode of "The a16z Podcast" that aired November 6, when talking about their investment in Biohub, a collection of biology labs the philanthropy has backed since 2016. "The researchers, they don't want employees working for them, they don't want space, they just want GPUs," Zuckerberg added. "In a sense, that's new lab space. It's much more expensive than wet lab space," said Chan, who is a pediatrician by training.
Microsoft announced on Monday that it will spend more than $7.9 billion on infrastructure for its AI strategy in the UAE from the start of 2026 through the end of 2029. The Redmond-based firm says this will comprise more than $5.5 billion in capital expenditures to expand its AI and cloud infrastructure, including new initiatives it plans to disclose in the UAE capital, Abu Dhabi, later this week, plus $2.4 billion in planned local operating expenses.
Valuations further differentiate the periods: dotcom tech stocks often traded at 150 to 180 times trailing earnings, while current AI frontrunners average around 40 times. Some AI hardware purchasers have reported improved return on invested capital (ROIC), though results vary across the industry. This solid groundwork distinguishes AI from historical bubbles and primes leading companies like Nvidia for substantial expansion.
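As a rough illustration of what those multiples imply, the earnings yield at each valuation can be computed directly. This is a back-of-the-envelope sketch: the 165x figure is an assumed midpoint of the 150-180x dotcom range cited above, not a number from the article.

```python
# Sketch: implied trailing earnings yield (earnings per dollar of
# share price) at different P/E multiples. 165x is an assumed
# midpoint of the cited 150-180x dotcom range; 40x is the stated
# average for current AI frontrunners.

def earnings_yield(pe_ratio: float) -> float:
    """Trailing earnings per dollar invested, i.e. 1 / (P/E)."""
    return 1.0 / pe_ratio

dotcom_yield = earnings_yield(165)   # ~0.006, about 0.6 cents per dollar
ai_yield = earnings_yield(40)        # 0.025, i.e. 2.5 cents per dollar

# Today's AI leaders carry roughly 4x more trailing earnings per
# dollar of market price than dotcom-era tech stocks did.
ratio = ai_yield / dotcom_yield
print(f"{dotcom_yield:.4f} {ai_yield:.4f} {ratio:.2f}")
```

The point of the comparison: at 40x earnings a stock is still expensive by historical norms, but it is backed by several times more current profit per dollar of price than the dotcom cohort was.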
IBM Cloud Code Engine, the company's fully managed, strategic serverless platform, has introduced Serverless Fleets with integrated GPU support. With this new capability, the company directly addresses the challenge of running large-scale, compute-intensive workloads such as enterprise AI, generative AI, machine learning, and complex simulations on a simplified, pay-as-you-go serverless model. Historically, as noted in academic papers, including a recent Cornell University paper, serverless technology struggled to support these demanding, parallel workloads efficiently.
Advanced Micro Devices (NASDAQ: AMD) shocked investors yesterday with a landmark multi-year agreement to supply OpenAI with 6 gigawatts of its Instinct graphics processing units (GPUs), starting with 1 gigawatt in the second half of 2026. The deal, which could generate tens of billions of dollars in annual revenue for AMD, includes a warrant allowing OpenAI to acquire up to 160 million shares (roughly 10% of the company) for a nominal fee, tied to deployment milestones and stock price targets up to $600 per share.
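The figures in the announcement support a quick sanity check: 160 million shares being roughly 10% of the company implies a share count of about 1.6 billion, and the warrant's headline value at the $600 target follows directly. This is back-of-the-envelope arithmetic from the reported numbers, not from any filing.

```python
# Back-of-the-envelope check on the AMD/OpenAI warrant figures above.
# All inputs come from the announcement as reported; nothing here is
# from a regulatory filing.

warrant_shares = 160_000_000      # "up to 160 million shares"
stake_fraction = 0.10             # "roughly 10% of the company"
top_price_target = 600.0          # highest stock-price milestone, USD

# 160M shares being ~10% implies roughly 1.6B shares outstanding.
implied_shares_outstanding = warrant_shares / stake_fraction

# If every milestone were hit and the stock reached the $600 target,
# the warrant shares would be worth about $96B at that price.
warrant_value_at_target = warrant_shares * top_price_target

print(implied_shares_outstanding)   # 1.6 billion
print(warrant_value_at_target)      # 96 billion USD
```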
On Wednesday, enterprise AI model-maker Cohere said it raised an additional $100 million, bumping its valuation to $7 billion, in an extension of a round announced in August. That round was an oversubscribed $500 million raise at a $6.8 billion valuation, the company said at the time.
It's a very broad question. But, to start, I think it's important to point out that this is, in some respects, an entirely new form of business logic and a new form of computing. And so the first question becomes: if agentic systems are reasoning models coupled with agents - agents that perform tasks by leveraging those reasoning models, along with the tools that have been allocated to them to help accomplish their tasks ... these models need to run on very high-performance machinery.
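The structure being described - a reasoning model choosing a tool, the agent executing it, and the result feeding back into the next reasoning step - can be sketched as a minimal loop. Everything below (the mock `reason` function, the tool names) is hypothetical scaffolding for illustration, not any particular vendor's API; a real system would call an LLM where `reason` is mocked.

```python
# Minimal sketch of an agentic loop: a "reasoning model" (mocked here
# as a rule-based function) picks a tool, the agent runs it, and the
# observation is appended to the context for the next reasoning step.
# Tool names and the reason() routing logic are illustrative assumptions.

from typing import Callable

# Tools allocated to the agent: name -> callable taking a string argument.
TOOLS: dict[str, Callable[[str], str]] = {
    # eval with empty builtins is a crude sandbox, fine for a sketch only.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def reason(context: list[str], task: str) -> tuple[str, str]:
    """Stand-in for a reasoning model: returns (tool_name, tool_arg).
    This mock routes arithmetic-looking tasks to the calculator on the
    first step, then signals completion."""
    if any(op in task for op in "+-*/") and not context:
        return "calculator", task
    return "echo", "done"

def run_agent(task: str, max_steps: int = 4) -> list[str]:
    """Drive the reason -> act -> observe loop until the model says 'done'."""
    context: list[str] = []
    for _ in range(max_steps):
        tool, arg = reason(context, task)
        observation = TOOLS[tool](arg)
        context.append(f"{tool}({arg!r}) -> {observation}")
        if observation == "done":
            break
    return context

trace = run_agent("6 * 7")
```

Even this toy version shows why compute demand compounds: every step of the loop re-invokes the reasoning model with a growing context, so one task can mean many model calls rather than one.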