Meta Releases Llama 3.3: A Multilingual Model with Enhanced Performance and Efficiency
Briefly

Llama 3.3 features a 128k-token context window and architectural improvements for efficiency, demonstrating strong performance on benchmarks covering reasoning, coding, and multilingual tasks.
Llama 3.3 is post-trained with a combination of supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), aiming for robust performance across a range of tasks while emphasizing safety and helpfulness.
With 70 billion parameters, Llama 3.3 posts notable benchmark results: 50.5% accuracy on GPQA Diamond reasoning, an 88.4% pass rate on HumanEval coding, and a 91.1% exact-match (EM) score on multilingual reasoning (MGSM).
Meta's focus on safety is reflected in Llama 3.3's design, which includes robust refusal strategies for harmful prompts and aims for a balanced tone in user interactions.
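For readers who want to try the model locally, below is a minimal sketch of querying Llama 3.3 through the Hugging Face transformers pipeline. The model ID matches Meta's published checkpoint name; the dtype and device settings are illustrative assumptions rather than requirements.

```python
# Minimal sketch: chat with Llama 3.3 70B Instruct via transformers.
# Access to the checkpoint is gated behind Meta's license on Hugging Face.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,  # assumption: bf16 to halve memory vs. fp32
    device_map="auto",           # shard the weights across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the key features of Llama 3.3."},
]

outputs = generator(messages, max_new_tokens=256)
# With chat input, generated_text holds the full message list;
# the last entry is the assistant's reply.
print(outputs[0]["generated_text"][-1]["content"])
```

Note that the 70B weights need roughly 140 GB of GPU memory even in bf16, so quantized variants or hosted inference APIs are common alternatives for smaller setups.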
Read at InfoQ