Nebul integrates Speedata chip for lightning-fast data processing
Briefly

"To support data-intensive workloads, Speedata introduced its Analytics Processing Unit (APU). Dutch neocloud Nebul will be the first to integrate the hardware for Apache Spark workloads. According to both parties, the speed gain is a hundredfold. The APU is now available in Nebul's sovereign cloud infrastructure, which targets organizations that want to run advanced analytics and AI data processing at scale but whose data must remain within national borders."
"The APU is designed for large data volumes and advanced analytics workloads. It is a specialized ASIC that runs Apache Spark SQL natively: complex queries, joins, aggregations, and transformations execute directly on the silicon instead of passing through a memory-bound abstraction layer. In standard benchmarks, the APU achieves up to 100 times better performance than CPUs and GPUs. That is no surprise: GPUs are primarily parallel data processors, and CPUs are designed to be jacks-of-all-trades."
"This specialization of the APU also translates into costs. Speedata CEO Adi Gelvan states that one customer consolidated from 38 servers to 3, resulting in a 90 percent cost reduction. This demonstrates the significant overhead of the data layer: a considerable portion of conventional hardware is occupied solely with data-intensive tasks. Moving data from storage to compute, by contrast, costs the APU little, because it is designed for precisely that step."
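As a back-of-the-envelope check on the quoted figures: the 38-to-3 server consolidation by itself is roughly a 92 percent reduction in fleet size, consistent with the reported 90 percent cost saving (the exact cost figure depends on per-server pricing, which the article does not give).

```python
# Sanity check on the consolidation figure quoted above (38 servers -> 3).
servers_before, servers_after = 38, 3
reduction = 1 - servers_after / servers_before
print(f"{reduction:.0%}")  # fraction of the fleet eliminated
```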
Speedata's Analytics Processing Unit (APU) is a specialized ASIC that runs Apache Spark SQL natively and targets large-scale data and advanced analytics workloads. Complex queries, joins, aggregations, and transformations execute directly on silicon rather than through memory-based abstraction layers, producing benchmark gains of up to 100x compared with CPUs and GPUs. Nebul integrates the APU into a sovereign cloud infrastructure for organizations requiring large-scale analytics and AI processing within national borders. Demand for this infrastructure has tripled in the past year. A reported consolidation from 38 servers to 3 yielded a 90 percent cost reduction, reflecting reduced data-layer overhead.
Read at Techzine Global