Meta announced the release of Llama 3.1 405B on Tuesday. This large language model is designed to compete with the leading models from Anthropic, Google, and OpenAI.
This development is significant as Meta continues to make its models freely available, albeit with some restrictions. The company aims to demonstrate that it can rival the most capable language models in the industry.
Llama 3.1 405B is Meta's largest text-based language model to date. It introduces support for eight additional languages and larger context windows, allowing it to handle more information within a user's prompt. The newly supported languages include French, German, Hindi, Italian, Portuguese, and Spanish, with more languages expected to be added soon. Additionally, Meta has adjusted its licensing terms to permit Llama's outputs to be used to improve other AI models. Smaller versions of Llama 3 have also been updated to version 3.1, incorporating the enhanced language and context capabilities.
The new models are available immediately from Meta and Hugging Face. Users can try Llama 3.1 through WhatsApp and at Meta.ai, with the company encouraging them to challenge the model with complex math or coding problems.
Meta expressed confidence in its model's performance, stating in a blog post that their flagship model competes well with leading foundation models such as GPT-4, GPT-4o, and Claude 3.5 Sonnet. The company also highlighted that its smaller models are competitive with other open and closed models of a similar size.
CEO Mark Zuckerberg emphasized in an open letter that AI models should be widely accessible. He likened Meta's approach to the impact of Linux on corporate computing, which shifted from being dominated by custom, closed versions of Unix to an open-source ecosystem. Zuckerberg believes AI development will follow a similar trajectory, with open-source models rapidly catching up to their closed-source counterparts.
Looking ahead, Meta plans to integrate its AI capabilities into the Meta Quest headset starting next month. This experimental feature will replace the VR headset's voice command functionality, showcasing Meta's commitment to enhancing user experiences through AI.