Microsoft has released Phi-3 Mini, a lightweight AI model with 3.8 billion parameters and the first of three smaller Phi models planned for this year. Trained on a smaller dataset than large language models such as GPT-4, it is now available on Azure, Hugging Face, and Ollama. Here are the details…
Features of Microsoft’s Phi-3 AI model
Phi-3 Mini offers capability comparable to large language models like GPT-3.5 while consuming far fewer resources. Microsoft also plans to release Phi-3 Small, with 7 billion parameters, and Phi-3 Medium, with 14 billion parameters.
A model’s parameter count is a rough measure of how complex the instructions it can understand are. Microsoft released Phi-2 in December, claiming performance similar to larger models such as Llama 2.
Phi-3 outperforms Phi-2 and produces responses close to those of much larger models. Eric Boyd, corporate vice president of Microsoft Azure AI Platform, said that despite its small form factor, Phi-3 Mini is as capable as GPT-3.5.
Small AI models are generally more cost-effective and run better on personal devices. Microsoft positions Phi-3 as an ideal solution for companies working with smaller datasets. Boyd explained that Phi-3 was trained on the idea that “children learn from books and simple sentence structures.”
Phi-3 builds on the foundation of its predecessors: Phi-1 focused on coding, while Phi-2 developed logical reasoning skills. Although Phi-3 may lack extensive general knowledge, it could be a good choice for companies with specific applications.
Because it requires smaller datasets and less computational power, Phi-3 is cost-effective for many companies. The release aligns with Microsoft’s strategy of developing smaller, lighter, and more accessible AI models.
What do you think about this news? Share your opinions in the comments section below.