Microsoft's Phi-3 shows the surprising power of small, locally run AI language models
Briefly

Microsoft has introduced Phi-3-mini, a lightweight AI language model with 3.8 billion parameters that can run locally on smartphones and laptops without an internet connection.
Traditional large language models (LLMs) such as Google's PaLM 2 and OpenAI's GPT-4 require heavy-duty data center GPUs, while Phi-3-mini is designed to run on consumer GPUs.
Phi-3-mini is the latest model in Microsoft's Phi series, following Phi-1 and Phi-2, and has a 4,000-token context window.
Microsoft plans to release 7-billion and 14-billion parameter versions of Phi-3, which it claims will be more capable than the initial Phi-3-mini model.
Read at Ars Technica