Creating a Text-Based Chatbot with Meta’s Llama
Learn how to use Llama 3 to create a text-based chatbot with Gradio.
Large language models come in all shapes and sizes, offering different capabilities and areas of expertise. For our use case, we could use any decent text-based LLM. Notice how we specifically mentioned text-based. Large language models have been improving at an astonishing rate. Some modern LLMs, such as Gemini and GPT-4o, can also process images as input. This is known as multimodality, where the input can have more than one modality, such as text and images.
Let’s try to get our educational chatbot working on text first. Groq offers quite a few models to choose from. At the time of writing this course, Groq offers the following models:
Meta Llama 3, Llama 3.1, Llama 3.2 and Llama 3.3
Qwen 2.5
DeepSeek distilled Llama and Qwen
Google Gemma 2
OpenAI Whisper Large v3
Mistral Mixtral 8x7B
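Since our chatbot is text-based, it helps to filter this catalog down to chat-capable models and skip audio models such as Whisper. Below is a minimal sketch of that filtering step; the model IDs are hard-coded stand-ins modeled on Groq's naming, whereas a live list would come from the API's model-listing endpoint:

```python
# Stand-in model IDs modeled on Groq's naming; a real application
# would fetch the current list from the Groq API instead.
AVAILABLE_MODELS = [
    "llama-3.3-70b-versatile",
    "llama-3.1-8b-instant",
    "whisper-large-v3",  # audio transcription, not a chat model
    "mixtral-8x7b-32768",
]

def chat_models(models):
    """Keep only text-capable chat models, dropping audio models."""
    return [m for m in models if not m.startswith("whisper")]

print(chat_models(AVAILABLE_MODELS))
```

Filtering by the `whisper` prefix is enough here because Whisper is the only non-chat family in the list above.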
Among these models, Meta’s Llama 3.3 is the newest contender. Let’s proceed with that.
Using Llama 3.3 with Groq
Meta Llama 3.3 belongs to Meta’s openly available family of large language models. This means that we can fine-tune, modify, and deploy these models anywhere. On Groq, Llama 3.3 is offered in two variants: llama-3.3-70b-versatile and llama-3.3-70b-specdec, both with similar daily usage limits. Let’s test out the llama-3.3-70b-versatile variant.
from groq import Groq

client = Groq()
completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {
            "role": "user",
            "content": "Hello!"
        }
    ],
    temperature=1,
    max_tokens=1024,
    top_p=1,
    stream=True,
    stop=None,
)

for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")
This was simple enough. All we had to do was change the model parameter on line 5 to llama-3.3-70b-versatile and ...