#gpu-acceleration

from ZDNET
1 week ago

SUSE Linux Enterprise Server 16 lands - with AI and EU support baked in

Lots of companies are announcing AI this and AI that, but few of them offer more than new AI lipstick on an old pig when you look at them closely. Then, there's what SUSE is doing with its release of SUSE Linux Enterprise Server 16 (SLES 16), available today. This new version is positioned as an AI-ready operating system tailored to the demands of today's hybrid cloud, data center, and edge computing environments.
EU data protection
Science
from App Developer Magazine
10 months ago

Linux Foundation unveils Newton to boost robot learning

Newton provides extensible, high-fidelity, GPU-accelerated physics simulation for generalist robotics research, enabling scalable, accurate training and testing of contact-rich behaviors.
Marketing tech
from ExchangeWire
1 month ago

PubMatic Delivers 5x Faster, Smarter Advertising Decisions With NVIDIA

PubMatic integrated NVIDIA GPUs and software to cut ad-inference latency to ~1 ms, boost AI processing capacity by up to 5×, and recover previously lost ad revenue.
Web development
from Figma
1 month ago

Figma Rendering: Powered by WebGPU | Figma Blog

Migrating Figma's rendering backend from WebGL to WebGPU required modernizing interfaces and eliminating global GPU state, and yielded better performance, GPU parallelism, and clearer error handling.
Artificial intelligence
from ZDNET
1 month ago

You can get Nvidia's CUDA on three popular enterprise Linux distros now - why it matters

The CUDA toolkit is now natively packaged in Rocky Linux, SUSE Linux, and Ubuntu, simplifying deployment and accelerating AI development on Nvidia GPUs.
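
As a quick aside (not from the article): once the packaged CUDA toolkit and a compatible driver are installed, a minimal Numba kernel is one way to confirm the GPU is reachable from Python. The kernel and array here are illustrative only.

```python
# Smoke-test a CUDA install from Python using Numba's CUDA target.
# Assumes the CUDA toolkit, an NVIDIA driver, and numba are installed.
import numpy as np
from numba import cuda

@cuda.jit
def add_one(x):
    i = cuda.grid(1)      # global thread index
    if i < x.size:        # guard threads beyond the array bounds
        x[i] += 1.0

arr = np.zeros(16, dtype=np.float32)
d_arr = cuda.to_device(arr)    # copy the array to GPU memory
add_one[1, 32](d_arr)          # launch 1 block of 32 threads
print(d_arr.copy_to_host())    # expect an array of ones
```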
Artificial intelligence
from Fast Company
2 months ago

This startup claims it just outran Nvidia on its own turf

DataPelago's Nucleus dramatically accelerates data processing across hardware, outperforming Nvidia's cuDF and showing that GPU performance is often limited by software rather than by the hardware itself.
Data science
from Talk Python
2 months ago

Accelerating Python Data Science at NVIDIA

RAPIDS enables zero-code GPU acceleration for pandas, scikit-learn, NetworkX, and other Python data libraries, delivering large speedups and scalable GPU-native workflows.
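
To make the zero-code-change claim concrete, here is a minimal sketch (assuming a RAPIDS install with cuDF available; not code from the episode). Loading the cudf.pandas accelerator before pandas routes supported operations to the GPU and falls back to CPU pandas otherwise.

```python
# Enable RAPIDS' pandas accelerator, then use pandas as usual.
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # existing pandas code needs no changes below this line

df = pd.DataFrame({"key": ["a", "b", "a", "b"], "val": [1, 2, 3, 4]})
print(df.groupby("key")["val"].mean())
```

The same accelerator can also be applied to an unmodified script with `python -m cudf.pandas script.py`.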
from InfoQ
4 months ago

GPULlama3.java Brings GPU-Accelerated LLM Inference to Pure Java

The TornadoVM programming guide shows how developers can use hardware-agnostic APIs so that the same Java source code runs unchanged across different hardware accelerators.
Java
Data science
from HackerNoon
6 months ago

Supercharge ML: Your Guide to GPU-Accelerated cuML and XGBoost | HackerNoon

GPU acceleration can greatly enhance traditional machine learning workflows; this guide focuses on cuML, XGBoost, and dimensionality reduction techniques for efficient data processing.
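
As a rough sketch of what that looks like in practice (assumptions: cuml and a GPU-enabled xgboost ≥ 2.0 are installed; the toy data is invented for illustration):

```python
# Train a cuML random forest and a GPU-backed XGBoost model on toy data.
import numpy as np
import xgboost as xgb
from cuml.ensemble import RandomForestClassifier  # GPU-native random forest

X = np.random.rand(1000, 10).astype(np.float32)
y = (X[:, 0] > 0.5).astype(np.int32)  # synthetic binary labels

rf = RandomForestClassifier(n_estimators=100)
rf.fit(X, y)

clf = xgb.XGBClassifier(device="cuda", tree_method="hist")  # histogram method on the GPU
clf.fit(X, y)

print(rf.predict(X[:5]), clf.predict(X[:5]))
```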
Python
from HackerNoon
6 months ago

Achieve 100x Speedups in Graph Analytics Using Nx-cugraph | HackerNoon

Leveraging GPU acceleration can significantly improve graph-analytics performance; nx-cugraph enhances NetworkX workflows on larger datasets.
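
A minimal sketch of the dispatch mechanism (assuming NetworkX 3.x with the nx-cugraph backend installed; not code from the article):

```python
# Run an unchanged NetworkX algorithm on the GPU via the cugraph backend.
import networkx as nx

G = nx.karate_club_graph()

pr_cpu = nx.pagerank(G)                     # default CPU implementation
pr_gpu = nx.pagerank(G, backend="cugraph")  # dispatched to nx-cugraph on the GPU
print(max(pr_gpu, key=pr_gpu.get))          # highest-ranked node
```

Setting the environment variable `NX_CUGRAPH_AUTOCONFIG=True` reportedly enables the same dispatch without touching the code at all.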
Artificial intelligence
from InfoQ
5 months ago

Google Enhances LiteRT for Faster On-Device Inference

LiteRT simplifies on-device ML inference with enhanced GPU and NPU support for faster performance and lower power consumption.
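
For flavor, a minimal interpreter-style inference sketch (the package import and model path are assumptions based on LiteRT's Python docs, not taken from the article; GPU/NPU delegates are configured separately):

```python
# Load a .tflite model and run a single inference through the interpreter API.
import numpy as np
from ai_edge_litert.interpreter import Interpreter  # LiteRT's pip package (assumed)

interpreter = Interpreter(model_path="model.tflite")  # placeholder model file
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])  # dummy input with the right shape
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```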