OpenAI launches first open language models since GPT-2
Briefly

"OpenAI released its first open weights language models since GPT-2, GPT-OSS, in two sizes: 117 billion parameters matching o4-mini and a 21 billion parameters version similar to o3-mini."
"The models are under Apache 2.0 license, allowing users extensive freedom to utilize them without restrictions on user count or commercial applications."
"Trained primarily on English text with a focus on STEM, coding, and general knowledge, these models lack vision capabilities found in larger models like GPT-4o."
"GPT-OSS features a mixture of expert architecture with GPT-OSS-120B using 128 experts producing outputs and GPT-OSS-20B employing 32 experts for efficiency."
OpenAI introduced GPT-OSS, its first open-weights language models since GPT-2, in two sizes: 117 billion parameters and 21 billion parameters. The 117-billion-parameter model performs similarly to the proprietary o4-mini, while the smaller version matches o3-mini. OpenAI makes both models available under the Apache 2.0 license, allowing extensive flexibility for use. The models were trained on English text with a focus on STEM and coding. They use a mixture-of-experts architecture, in which only a few experts are activated per token, making generation faster than in dense models of similar size.
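The mixture-of-experts routing described above can be sketched as follows. This is a minimal illustration of top-k expert routing, not OpenAI's actual GPT-OSS implementation; the gating matrix, expert functions, and `top_k` value are assumptions for the example.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=4):
    """Route input x through only the top_k highest-scoring experts.

    Hypothetical sketch of mixture-of-experts routing: a gating layer
    scores every expert, but only top_k of them actually run, so compute
    per token scales with top_k rather than the total expert count.
    """
    scores = x @ gate_w                    # one gating score per expert
    top = np.argsort(scores)[-top_k:]      # indices of the top_k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts
    # Weighted sum of the selected experts' outputs; unselected experts
    # contribute no computation at all.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 32                       # 32 experts, as in GPT-OSS-20B
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is just a random linear map, standing in for an MLP.
experts = [lambda v, m=rng.normal(size=(d, d)): v @ m for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), gate_w, experts)
```

The efficiency claim in the text falls out of this structure: a dense model of the same total parameter count would run all 32 expert-sized blocks per token, while the routed model runs only `top_k` of them.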
Read at The Register