
From HackerNoon, 1 year ago

phi-3-mini: The 3.8B Powerhouse Reshaping LLM Performance on Your Phone | HackerNoon

Phi-3-mini is a 3.8-billion-parameter language model trained on 3.3 trillion tokens, achieving performance competitive with larger models such as Mixtral 8x7B and GPT-3.5.