"A researcher might invent a new model architecture or a new performance optimization technique, but they may not be able to afford the high-performance computing resources required for a large-scale experiment."
"The fruits of this labor are expected to be open-sourced by researchers and developers so that they can benefit the machine learning ecosystem as a whole."
"Developing low-level application frameworks and kernels isn't a big ask for such a large company...This is why we've seen many Intel, AMD, and others gravitate toward frameworks like PyTorch or TensorFlow to hide the complexity associated with low-level coding."
"Researchers, on the other hand, are often more than willing to dive into low-level hardware if it means extracting additional performance, uncovering hardware-specific optimizations, or simply getting access to the compute necessary to move their research forward."