OpenAI challenges rivals with Apache-licensed GPT-OSS models
Briefly

OpenAI has released its first open-weight language models, gpt-oss-120b and gpt-oss-20b, aimed at enterprises seeking flexible deployment and lower operating costs. The models promise competitive performance: the larger variant approaches the quality of OpenAI's proprietary o4-mini while running on a single 80 GB GPU. A mixture-of-experts architecture keeps inference efficient, the models support large context windows, and both are available for unrestricted use under the Apache 2.0 license, letting organizations customize them locally.
OpenAI's new gpt-oss-120b and gpt-oss-20b models provide customizable, high-performance AI for enterprises, avoiding vendor lock-in and enabling local deployment.
gpt-oss-120b approaches o4-mini performance on a single 80 GB GPU, while gpt-oss-20b runs effectively on edge devices.
The models use a mixture-of-experts architecture that activates only a fraction of their parameters for each token, support large context windows, and are released under the Apache 2.0 license.
Open-weight models let organizations customize AI locally without vendor restrictions, offering an alternative to proprietary systems and challenging established competitors.
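The mixture-of-experts idea above can be illustrated with a toy router: every token is scored against all experts, but only the top-k highest-scoring experts actually run, so only a fraction of the layer's parameters are active per token. The expert count, embedding size, and gating weights below are illustrative stand-ins, not gpt-oss's actual configuration.

```python
import random

random.seed(42)

DIM = 16          # toy embedding size (illustrative, not gpt-oss's)
NUM_EXPERTS = 8   # total experts in the layer (illustrative)
TOP_K = 2         # experts actually run per token (illustrative)

# Random gating weights: one score vector per expert.
gate = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def route(token):
    """Score every expert against the token, return the top-k expert indices."""
    scores = [sum(w * x for w, x in zip(g, token)) for g in gate]
    return sorted(range(NUM_EXPERTS), key=lambda e: -scores[e])[:TOP_K]

token = [random.gauss(0, 1) for _ in range(DIM)]
active = route(token)
print(f"experts run for this token: {active}")
print(f"fraction of expert parameters active: {TOP_K / NUM_EXPERTS:.0%}")
```

Because only TOP_K of NUM_EXPERTS experts execute, the per-token compute scales with the active subset rather than the full parameter count, which is why a large MoE model can fit useful inference onto a single GPU.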
Read at Computerworld