Never one to miss an opportunity, Meta has released its latest Llama 2 large language model (LLM), which, the company says, outperforms other open-source chat models on helpfulness and safety, including its tendency to share harmful or incorrect information. In fact, Meta partnered with Microsoft to enable developers using Microsoft tools to choose between Meta’s Llama and OpenAI’s GPT models when building their AI experiences.
Llama 2 will be made commercially available, free of charge, as an alternative to OpenAI’s GPT-3 and GPT-4, Google’s LaMDA and PaLM (the basis for Bard), Hugging Face’s BLOOM and XLM-RoBERTa, Nvidia’s NeMo LLM, XLNet, Co:here, and GLM-130B.
Meta Is Sharing Three Versions of the Model:
- One trained on 7 billion parameters
- One on 13 billion parameters
- One on 70 billion parameters
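For developers weighing these three sizes, the trade-off is essentially capability versus hardware budget. As a minimal sketch, here is how the variants might be organized in code; the Hugging Face checkpoint names and the `pick_checkpoint` helper are my own assumptions for illustration, not something specified in this article.

```python
# Hypothetical helper: map Llama 2's three published sizes to checkpoint
# names (the "meta-llama/..." Hub IDs are an assumption, not from the article).
LLAMA2_CHAT_CHECKPOINTS = {
    7: "meta-llama/Llama-2-7b-chat-hf",
    13: "meta-llama/Llama-2-13b-chat-hf",
    70: "meta-llama/Llama-2-70b-chat-hf",
}

def pick_checkpoint(max_billions: int) -> str:
    """Return the largest chat checkpoint fitting a parameter budget (in billions)."""
    fitting = [size for size in sorted(LLAMA2_CHAT_CHECKPOINTS) if size <= max_billions]
    if not fitting:
        raise ValueError(f"No Llama 2 variant fits under {max_billions}B parameters")
    return LLAMA2_CHAT_CHECKPOINTS[fitting[-1]]

# With roughly 16B parameters of headroom, the 13B variant is the largest fit.
print(pick_checkpoint(16))  # → meta-llama/Llama-2-13b-chat-hf
```

In practice, the 7B and 13B variants can run on a single modern GPU, while the 70B variant typically requires multi-GPU or hosted infrastructure such as the Azure deployment described below.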
Microsoft had this to say about the relationship with Meta:
“Today, at Microsoft Inspire, Meta and Microsoft announced support for the Llama 2 family of large language models (LLMs) on Azure and Windows. Llama 2 is designed to enable developers and organizations to build generative AI-powered tools and experiences. Meta and Microsoft share a commitment to democratizing AI and its benefits and we are excited that Meta is taking an open approach with Llama 2.”
Microsoft has also invested $10 billion into OpenAI and has already built GPT into most of its tools and platforms. Now, the company will integrate Llama 2 into various applications as well, positioning Microsoft to take the lead in the race to own LLM customer relationships.
Adding Trust and Eliminating Harmful Outputs
Meta says the Llama 2 model includes significant training around ‘truthfulness’, ‘toxicity’, and ‘bias’. This is incredibly important as more studies point out that these GAI tools were trained on biased information.
In fact, if one considers that GAI tools ingested the entirety of the internet (up until recent years) to learn, and the information on the internet was created by humans who have inherent biases, one can assume GAI is naturally biased.
Plus, we know the vast majority of all published works reflect the cis-gender, white male view simply because the views of other genders and ethnicities were absent from publications for centuries. That alone creates a skewed and biased result.
More Truthfulness and Less Toxicity
However, based on this additional training, Meta says that Llama 2 Chat ‘shows great improvement over the pre-trained Llama 2 in terms of truthfulness and toxicity.’ The company states:
“The percentage of toxic generations shrinks to effectively 0% for Llama 2-Chat of all sizes: this is the lowest toxicity level among all compared models. In general, when compared to Falcon and MPT, the fine-tuned Llama 2-Chat shows the best performance in terms of toxicity and truthfulness.”
If this is true, it could make Llama a far superior GAI tool. Even looking beyond bias and just considering accuracy, there are also significant risks in using ChatGPT’s outputs without checking and re-checking any and all references to ensure the AI didn’t simply make up facts it couldn’t find on its own.
I will continue to test Llama against other GAI tools and will let you know what I think. In the meantime, Microsoft Azure AI customers will be able to test Llama 2 with their own sample data to see how it performs in different contexts.
If you need assistance defining the right GAI tech stack to improve your own marketing, advertising, PR, or content creation, contact me today. Human Driven AI offers in-person and virtual workshops as well as comprehensive AI Transformation roadmaps.