OpenThinker-32B, a new AI model developed by the Open Thoughts consortium, consistently outperformed DeepSeek on key benchmarks, scoring 90.6% on MATH500 and 61.6% on GPQA-Diamond. Although it still trails DeepSeek on programming tasks, its open-source nature leaves room for future improvement. Remarkably, OpenThinker achieved these results with only 114,000 training examples, compared with the 800,000 DeepSeek required, underscoring its data efficiency. The result shows how smaller teams can disrupt established AI paradigms and highlights the cost-effectiveness of open-source innovation in the industry.
In short, OpenThinker-32B sets a new standard for AI performance, surpassing DeepSeek on key benchmarks while demonstrating the benefits of open-source development with far fewer training examples.