Meta debuts third-generation Llama large language model
Briefly

Meta's Llama 3, released in 8 billion and 70 billion parameter versions, is designed to rival much larger models, shifting focus toward multilingual and multimodal features and away from smaller domain-specific models.
Meta attributes Llama 3's efficiency and performance gains to a tokenizer with a 128,000-token vocabulary, higher-quality training datasets, and additional fine-tuning steps after pre-training (the vocabulary effect is sketched below).
Llama 3 was pre-trained on more than 15 trillion tokens: a dataset seven times larger than its predecessor's, containing four times as much code, and filtered with stricter data-quality controls.
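
Why the bigger vocabulary matters: a tokenizer that knows more word pieces covers the same text in fewer tokens, so each forward pass carries more content. Below is a minimal sketch, assuming the Hugging Face transformers library and approved access to Meta's gated checkpoints (the repo IDs are the public names; the sample sentence is illustrative), that compares vocabulary size and token counts for the Llama 2 and Llama 3 tokenizers.

    # Minimal sketch: compare tokenizer vocabulary sizes and token counts.
    # Assumes `pip install transformers` and approved access to Meta's
    # gated Hugging Face repos (request access on huggingface.co first).
    from transformers import AutoTokenizer

    SAMPLE = "Large language models compress text into tokens before processing it."

    for model_id in ("meta-llama/Llama-2-7b-hf", "meta-llama/Meta-Llama-3-8B"):
        tok = AutoTokenizer.from_pretrained(model_id)
        # Encode without BOS/EOS so we compare raw coverage of the same string.
        ids = tok.encode(SAMPLE, add_special_tokens=False)
        # len(tok) counts the full vocabulary, including added special tokens.
        print(f"{model_id}: vocab size {len(tok)}, tokens for sample: {len(ids)}")

On typical English text the Llama 3 tokenizer reportedly produces up to 15% fewer tokens than Llama 2's 32,000-entry vocabulary, which is part of the efficiency gain Meta cites.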
Read at The Register