Chinese AI Model Promises Gemini 2.5 Pro-level Performance at One-fourth of the Cost | HackerNoon
Briefly

The latest edition of 'This Week in AI Engineering' highlights the Chinese startup MiniMax, which recently introduced its frontier-level open-weight reasoning model, MiniMax-M1. Notably, M1 scores 86.0% accuracy on AIME 2024 while keeping training expenses to just $534,700. Its hybrid architecture delivers high-quality reasoning at a lower computational cost, positioning MiniMax as a serious competitor to established models like DeepSeek and Gemini. The edition also covers advances from Google and the coding model Kimi-Dev-72B, which expand the AI landscape with powerful tools for developers.
MiniMax's M1 model stands out with its open-weight reasoning capabilities, scoring high on multiple benchmarks, including an impressive 86.0% accuracy on AIME 2024.
The hybrid architecture and attention mechanism of MiniMax-M1 allow it to achieve top-tier reasoning quality at a fraction of the compute cost compared to its competitors.
Compared with DeepSeek's reported training cost of $5.6 million, MiniMax trained M1 for around $534,700, a remarkable focus on cost-effectiveness.
As a pioneering effort in AI engineering, the release of MiniMax-M1 demonstrates the potential of open-weight models to challenge established heavyweights in the industry.
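The cost comparison above is straightforward arithmetic; a quick sanity check of the two published figures shows the gap is roughly an order of magnitude:

```python
# Training-cost figures as reported in the article (USD).
minimax_m1_cost = 534_700
deepseek_cost = 5_600_000

ratio = minimax_m1_cost / deepseek_cost
print(f"MiniMax-M1's training cost is {ratio:.1%} of DeepSeek's "
      f"(about 1/{deepseek_cost / minimax_m1_cost:.0f}).")
```

Note this compares M1 against DeepSeek's reported training cost; the "one-fourth" figure in the headline refers instead to inference pricing relative to Gemini 2.5 Pro.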
Read at Hackernoon