Meta says Llama 3 beats most other models, including Gemini
Briefly

Llama 3 shows more diversity in answering prompts, has fewer false refusals, and reasons better. It also follows more instructions and writes better code than its predecessor.
Meta claims Llama 3 outperformed comparable models, including Google's Gemma and Gemini, Mistral 7B, and Anthropic's Claude 3, in benchmark tests, with the 8B version significantly outperforming Gemma 7B and Mistral 7B.
Benchmark testing of AI models can be imperfect, since the datasets used may have been part of a model's training data, giving it prior knowledge of the answers. However, human evaluators also rated Llama 3 higher than other models in evaluations designed to emulate real-world scenarios.
Read at The Verge