Red Hat Enterprise Linux AI 1.2 delivers several advancements that streamline the development, testing, and deployment of large language models, making it more efficient to work with generative AI models.
RHEL AI 1.2 integrates IBM Research's open-source Granite LLMs, includes the InstructLab model alignment tools for collaborative model customization, and improves how LLMs access external knowledge through Retrieval-Augmented Generation (RAG).
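RAG pairs a generative model with a retrieval step that pulls relevant documents into the prompt at inference time. As a rough illustration of the pattern only (not RHEL AI's actual implementation), the sketch below uses a toy in-memory keyword index; the document list, the `retrieve` helper, and the prompt layout are all hypothetical.

```python
# Minimal illustration of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve the most relevant snippets for a question, then prepend them to
# the prompt sent to a language model. The corpus, scoring, and prompt
# layout here are toy stand-ins, not RHEL AI internals.
from collections import Counter

# Hypothetical knowledge snippets an organization might index.
DOCUMENTS = [
    "RHEL AI bundles Granite models with the InstructLab alignment tools.",
    "InstructLab uses a taxonomy of skills and knowledge to generate training data.",
    "Retrieval-Augmented Generation injects retrieved context into the model prompt.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping words as a crude relevance score."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt handed to the generative model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # The resulting prompt would then be passed to an LLM serving endpoint.
    print(build_prompt("How does RHEL AI use retrieval-augmented generation?"))
```

In production the keyword score would typically be replaced by vector embeddings and a vector store, but the overall retrieve-then-generate flow is the same.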
With expanded hardware support, RHEL AI 1.2 runs on Lenovo ThinkSystem SR675 V3 servers and introduces support for AMD Instinct accelerators for training and inference.
The new version can also be deployed on additional major cloud platforms, including Microsoft Azure and Google Cloud, alongside existing support for AWS and IBM Cloud, broadening accessibility for users.