Meta unveiled its Llama 3.2 artificial intelligence model at its Connect 2024 event. The multimodal model challenges rivals such as OpenAI's GPT-4o with its image-understanding and voice capabilities.
What does Meta Llama 3.2 promise?
The new Llama 3.2 lineup includes 11-billion and 90-billion-parameter versions that understand both text and images. These models can extract insights from graphs and charts to answer questions about data trends, and can identify key objects and details to caption images.
Meta also introduced lightweight Llama 3.2 models with 1 billion and 3 billion parameters, aimed at developers and enterprises building personalized applications on messaging platforms.
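To illustrate the kind of image-question workflow described above, here is a minimal sketch of querying the 11-billion-parameter vision model about a chart. It assumes access to the meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint on Hugging Face and a recent transformers release (4.45 or later); the image URL is hypothetical.

```python
# Sketch: asking Llama 3.2 11B Vision a question about a chart image.
# Assumes gated access to the checkpoint and transformers >= 4.45.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical chart image; any local file or URL works the same way.
url = "https://example.com/sales_chart.png"
image = Image.open(requests.get(url, stream=True).raw)

# Build a chat prompt that pairs the image with a question about it.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What trend does this chart show?"},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```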
Meta claims that Llama 3.2 rivals models from Anthropic and OpenAI at tasks such as image recognition, and that it outperforms other conversational AI systems on language-comprehension benchmarks.
Llama 3.2 can also respond with synthesized, human-like speech, including the voices of celebrities such as Dame Judi Dench, John Cena and Kristen Bell. These AI voices will roll out across Meta's apps, including Messenger and Instagram.