Mistral AI says its Small 3 model is a local, open-source alternative to GPT-4o mini
Briefly

Mistral AI has launched Small 3, a 24B-parameter open-source model designed for efficiency and optimized for low latency. Positioned against larger models such as Llama 3.3 70B and Qwen 32B, Small 3 pairs fast inference with more than 81% accuracy on the MMLU benchmark. Its design suits situations requiring immediate, accurate outputs, particularly customer-facing applications such as virtual assistants. In human evaluations, Small 3 was preferred over several rival models, though results against Llama 3.3 and GPT-4o mini were more evenly split, underscoring how competitive this segment of the AI market has become.
On Thursday, French lab Mistral AI launched Small 3, which it says is the most efficient model in its category, optimized for low latency.
The 24B-parameter Small 3 is open-source and excels in scenarios requiring quick, accurate responses, achieving over 81% accuracy on the MMLU benchmark.
Read at ZDNET