OpenAI has released two open-weight language models, gpt-oss-120b and gpt-oss-20b, under the Apache 2.0 license. The models deliver strong real-world performance at low cost and are optimized to run on consumer hardware. gpt-oss-120b runs efficiently on a single 80 GB GPU and achieves near-parity with OpenAI's o4-mini on core reasoning benchmarks, while gpt-oss-20b delivers results comparable to OpenAI's o3-mini and can run on devices with as little as 16 GB of memory. Both models were trained on NVIDIA H100 GPUs, enabling high token processing rates.
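A rough back-of-the-envelope calculation shows why these memory footprints are plausible. The sketch below assumes weights stored at roughly 4 bits each (gpt-oss uses the MXFP4 format for its MoE weights) and ignores activation and KV-cache overhead; the parameter counts and the helper function are illustrative, not official figures.

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float = 4.0) -> float:
    """Approximate decimal GB needed to hold the model weights alone.

    Assumes a uniform bits-per-weight storage format; real deployments
    mix precisions and add activation/KV-cache overhead on top.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Nominal parameter counts used here for illustration.
print(f"~120B params at 4-bit: ~{weight_memory_gb(120):.0f} GB")  # well under 80 GB
print(f"~20B  params at 4-bit: ~{weight_memory_gb(20):.0f} GB")   # fits in 16 GB
```

Even with room left for activations and the KV cache, a 4-bit 120B-parameter model fits on one 80 GB GPU, and the 20B model fits within 16 GB.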
Open models can be inspected by anyone, which helps improve quality, remove bugs, and tackle bias, especially when training data lacks diversity.
Open source in AI fosters global reach and accessibility, and can create de facto standards that promote adoption across the tech landscape.