Gamers are probably going to feel left out, since Nvidia seems to have decided that renting them cloud rigs beats selling consumer hardware; small companies looking for AI chip compromises will be excited; and agentic AI is gonna be so hot that our Mann on the ground in San Jose this week won't need a jacket.
The British government is investing heavily in national computing infrastructure. According to Neowin, an additional investment of approximately $49 million will expand the DAWN supercomputer at the University of Cambridge, increasing the system's total computing power sixfold. The aim is to help researchers and technology companies compete more effectively with players in the United States and China.
"Many people in the PC industry said, well, if you want graphics, it's gotta be discrete graphics because otherwise people will think it's bad graphics," Macri said at last year's CES. "What Apple showed was consumers don't care what's inside the box. They actually care what the what the box looks like. They care about the screen, the keyboard, the mouse. They care about what it does."
AMD clarified those estimates are based on a comparison between an eight-GPU MI300X node and an MI500 rack system with an unspecified number of GPUs. The math works out to a rack of some unknown number of MI500s delivering 1,000 times the performance of eight MI300Xs. And since we know essentially nothing about the chip beyond that it'll ship in 2027, pair TSMC's 2nm process tech with AMD's CDNA 6 compute architecture, and use HBM4e memory, we can't even begin to estimate what that 1000x claim actually means.
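To see why the rack's GPU count matters so much, here's a quick back-of-the-envelope sketch. The rack sizes are hypothetical placeholders, not anything AMD has announced.

```python
# Back-of-the-envelope: what a "1000x" rack-vs-node claim implies per GPU,
# depending on how many MI500s the rack actually holds.
# Rack sizes below are purely hypothetical placeholders.
BASELINE_GPUS = 8      # eight-GPU MI300X node
RACK_SPEEDUP = 1000    # claimed rack-vs-node performance ratio

for rack_gpus in (72, 128, 256):   # hypothetical MI500 counts per rack
    per_gpu_speedup = RACK_SPEEDUP * BASELINE_GPUS / rack_gpus
    print(f"{rack_gpus} GPUs/rack -> ~{per_gpu_speedup:.0f}x per MI500 vs one MI300X")
```

Depending on whether that rack holds 72 or 256 GPUs, the implied per-chip gain swings from roughly 111x down to about 31x, which is why the headline number tells us so little on its own.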
The new capabilities center on two integrated components: the Dynamo Planner Profiler and the SLO-based Dynamo Planner. Together, these tools tackle the "rate matching" challenge in disaggregated serving, in which inference workloads are split so that prefill operations, which process the input context, and decode operations, which generate output tokens, run on separate GPU pools. Without the right tools, teams spend a lot of time determining the optimal GPU allocation for each phase.
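For a rough feel of what rate matching means in practice, here's a minimal sketch that balances prefill and decode GPU counts so neither pool starves the other. The throughput figures are made up, and this is not Nvidia's actual Planner logic.

```python
# Toy rate-matching sketch for disaggregated serving: pick how many GPUs to
# devote to prefill vs. decode so both pools sustain the same request rate.
# All throughput figures are hypothetical placeholders, not measured values.

def split_gpus(total_gpus: int,
               prefill_reqs_per_gpu: float,
               decode_reqs_per_gpu: float) -> tuple[int, int]:
    """Return (prefill_gpus, decode_gpus) that maximizes sustainable rate."""
    best = None
    for prefill_gpus in range(1, total_gpus):
        decode_gpus = total_gpus - prefill_gpus
        # The system's sustainable rate is limited by the slower pool.
        rate = min(prefill_gpus * prefill_reqs_per_gpu,
                   decode_gpus * decode_reqs_per_gpu)
        if best is None or rate > best[0]:
            best = (rate, prefill_gpus, decode_gpus)
    return best[1], best[2]

# Example: 16 GPUs, prefill handles 12 req/s per GPU, decode only 4 req/s.
print(split_gpus(16, prefill_reqs_per_gpu=12.0, decode_reqs_per_gpu=4.0))
# -> (4, 12): decode needs three times the GPUs to keep pace with prefill.
```

The point of the Planner tooling is to take this kind of allocation decision, which in reality also has to account for context lengths, batching, and latency SLOs, off the operator's plate.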
Scientists are showing that neuromorphic computers, designed to mimic the human brain, are useful not only for AI but also for complex computational problems that normally run on supercomputers, The Register reports. Neuromorphic computing differs fundamentally from the classic von Neumann architecture: instead of a strict separation between memory and processing, the two functions are closely intertwined. That cuts down on data transport, a major source of energy consumption in modern computers. The human brain illustrates how efficient such an approach can be.
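For a flavor of how computation looks when state lives next to the logic that updates it, here's a tiny leaky integrate-and-fire neuron sketch. The parameters are arbitrary illustration values; it's only meant to show the event-driven, state-in-place style that neuromorphic hardware exploits, not any particular chip's model.

```python
# Tiny leaky integrate-and-fire (LIF) neuron: the state (membrane potential)
# sits alongside its update rule, and spikes are only emitted when the
# accumulated input crosses a threshold. Parameters are arbitrary.

class LIFNeuron:
    def __init__(self, leak: float = 0.9, threshold: float = 1.0):
        self.potential = 0.0      # state kept "in place", next to the compute
        self.leak = leak          # fraction of potential retained per step
        self.threshold = threshold

    def step(self, input_current: float) -> bool:
        """Advance one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

neuron = LIFNeuron()
inputs = [0.0, 0.4, 0.4, 0.4, 0.0, 0.0]   # sparse input: mostly idle
spikes = [neuron.step(i) for i in inputs]
print(spikes)  # [False, False, False, True, False, False]
```

Because nothing happens on quiet time steps and no data has to shuttle between a separate memory and processor, this style of computation is where the energy savings come from.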