phi-3-mini: The 3.8B Powerhouse Reshaping LLM Performance on Your Phone | HackerNoon

Phi-3-mini is a 3.8-billion-parameter language model trained on 3.3 trillion tokens, demonstrating performance competitive with models such as Mixtral 8x7B and GPT-3.5.